US Homeland Security cyber chief warns that too much depends on end users

10 Nov 2017

Jeanette Manfra, assistant secretary, office of cybersecurity and communications, US Department of Homeland Security. Image: Therese Molloy

Cyber chief Jeanette Manfra warns that the opt-in nature of cybersecurity is the weakest link.

Jeanette Manfra was in Dublin last week where she addressed the Institute of International and European Affairs about the strategy that the US Department of Homeland Security (DHS) is implementing to address national cybersecurity incidents in the US, to protect critical infrastructure and to ensure the US government’s ability to deliver key services and functions to US citizens.

Manfra discussed the fine balance between privacy and security in the context of escalating terrorism, cyber threats and the requirement to provide for the security of the people.

‘We simply cannot be fully dependent on the end user in the way that we are now for security’

She also highlighted how the US government works closely with private companies to achieve this objective and is applying a ‘secure by design’ enterprise approach to protect US government bodies from cyberattacks and risks.

As well as being assistant secretary in the office of cybersecurity and communications at the DHS, Manfra is also the permanent director for strategy, policy, and plans for the national protection and programmes directorate.

Previously, Manfra served as senior counsellor for cybersecurity to the secretary of DHS, and the director for critical infrastructure cybersecurity for the National Security Council staff at the White House.

Before joining DHS, Manfra served in the US army as a communications specialist and a military intelligence officer.

What was the genesis of your career in terms of joining the army to now spearheading US cyber defence?

I grew up around computers. My father was a programmer in the late 1970s, so we actually had all sorts of bits and pieces of computers lying around, and so I really grew up loving computers.

I started out studying science in college, ended up studying history for a variety of reasons, largely to do with joining the army. While studying science, I discovered I was a terrible programmer and I actually enlisted in the army. I started getting very interested in all of the new network technologies and the instant messaging capabilities.

In the late 1990s, my parents had a foreign exchange student over from Norway. He was using our computer and, at the time, I didn’t know what instant messaging was all about; he was using Internet Relay Chat (IRC) to talk with loved ones back home. I remember thinking: “That is amazing, and you are just talking to somebody in Norway right now?” So I got really interested at the time in the commercialisation of the internet.

For a variety of reasons, I decided to take a break from college and enlisted in the US army’s Signal Corps. I needed the money, to be honest. There’s a great arrangement in the US: if you enlist in the reserves, they send you off on training. The Signal Corps was in the early days of communications technologies and it totally sucked me in.

They showed me videos of people running through woods deploying tactical satellites and I said to myself, that’s so awesome. But then you go through training, get to your unit, and I was just a LAN administrator. But it was the early 2000s and it was a cool time in technology terms, and the military was just starting to think about net-centric warfare. I just loved the army and ended up finishing my degree and going in full-time. I was a military intelligence officer and, after some time in Germany and Iraq, I left the army and went to grad school and found my way into the Department of Homeland Security to help set up their office of cybersecurity, and I have been there ever since.

The internet has exploded in ways most people never envisaged, but the problem of our time is how porous it is and how vulnerable people and organisations are to things such as spearphishing. How do you coordinate a response in terms of private and public industry as well as the various security agencies?

There are different cases depending on where you learn about something. Where I think we’ve been successful in the US is getting industry to be better at organising itself. I can’t know everybody in the country; I can’t have a one-to-one relationship with everyone. But I can have a one-to-many relationship. And so, we have set up policies over the years, which remain in place, giving the government a way to organise itself to respond to attacks. For policy purposes, we are working with industry to do the same, asking them to represent an industry-wide perspective, and we have developed this further for operational purposes.

Every [industry] sector now has a council, such as the energy sector or the financial sector, and I have statutory authority to protect information and protect conversations. So, we can have policy conversations, explain what we are thinking about and ask, ‘What does that mean to you?’

They can organise themselves within their industry and do their best to provide unified positions.

As you know, US industry is very, very large and fragmented, with lots of competing interests – but they have done a very good job of being able to have those conversations.

What we are building on now is the operational side, and what they have there are industry-specific information-sharing and analysis centres. They work within their industry across the country to get companies to join, and I can send information to that entity. In many cases, there are thousands of members, and it is much more scalable that way.

When something like WannaCry or Bad Rabbit happens, we often learn about it in the press.

Jeanette Manfra, assistant secretary in the office of cybersecurity and communications, US Department of Homeland Security. Image: US Embassy, Dublin

We are also seeing countries around the world build organisations similar to our own, and a lot of countries have them. They are also telling us what is going on, so we’re getting information from other countries, as in the case of WannaCry. This helps to speed up the response.

If an organisation under attack is following regular security practices and detects something, that’s also a great scenario. In most cases, it starts with one entity coming to law enforcement saying, ‘I’ve been hit by something here,’ and we will go and say, ‘Let us hunt on your network to see what is going on.’

We have the legal framework and all these agreements that allow us to do that. We go in there, we find what is going on, and we are able to take that information and turn it into processes we share with the rest of industry. A lot of the time, organisations are seeing the same thing, or saw something, were able to block it and say, ‘Here are the mitigations I have put in place.’ We are starting to get into this cycle. Sometimes it comes from intelligence, and we’d push that out through our alerts. We are increasingly able to share that internationally now with computer security incident response teams around the world.

You employ ethical hackers – to what degree do they have licence to stress test systems?

Firstly, they have to inform the body before they test the system. It would be illegal to do otherwise: our Computer Fraud and Abuse Act makes any unauthorised access to a computer system illegal.

We try to construct the engagement so that the entity, whether a federal agency or a private-sector body, gives us the parameters of where they would like us to look. We try to make it as realistic as possible, with as few restrictions as possible, but we also don’t want to shut down their business in the process, so we try to be flexible. If they are going to give us restricted access, I would rather have that than nothing.

But we do try to make it as realistic as possible. We employ very realistic techniques. Our guys and girls are really good at what they do, and we have people who can do it on the IT side and the control systems side. The control systems side is really fascinating; there is a real dearth of people in that workforce who actually understand it and can do it. And so, we have a team with tremendous capability that has helped industrial control systems quite a bit. We also have labs with full mock-ups of water systems and electrical systems.

In terms of implementing ‘secure by design’ into government bodies around the US, how do you go from entrenched government thinking to swifter enterprise thinking?

I think that, in the way the internet was built, nobody envisioned how it would develop. The way the protocols and capabilities were designed allowed it to spread and manifest in the way it has.

I think that there is not only work to be done with critical systems, there is also work that we do and want to continue to do with the people who own and manage the parts of the internet itself, as well as standards bodies, and start to look at where there are things we can do with the underlying infrastructure and protocols to make those more safe and secure.

And so, we are starting much more upstream instead of plugging holes at the end. I think we have to do both, but with the increased connectivity and the explosion of this … you can’t think in terms of a perimeter when the attack surface is one you can’t fully understand. You have to think differently if we are going to take advantage of it, which I want us to.

If the tech companies are astonished at how systems can be compromised, from politics and social media to attacks on internet of things devices, what hope is there for the rest of us?

We simply cannot be fully dependent on the end user in the way that we are now for security. The notion is that you have an opt-in and you have to take extra steps to be secure as a user, but if users don’t do it, then the rest of the system is vulnerable.

There is so much to know and steps to take and we are raising public awareness, saying: ‘Please people, patch your systems, please don’t use weak passwords, stop using Windows XP.’

But I think there has to be an equal effort looking at the underlying pieces and looking at more secure coding practices, and raising the bar for that.

It goes back to the Cybersecurity Framework: how am I managing my risk? Well, part of that is requiring certain practices from your vendors, and that will help drive change.

But I still think there is tremendous space to work – both within the US but also internationally – around standards, protocols and the underlying infrastructure, and how we can work together to secure it as it is, but also potentially design more secure solutions going forward.

John Kennedy is a journalist who served as editor of Silicon Republic for 17 years