Is the hype around AI deceiving cybersecurity professionals?

8 Aug 2018


Image: Vintage Tone/Shutterstock


New research from Eset shows IT decision-makers think AI is the ‘silver bullet’ needed to address cybersecurity challenges, but marketing hype is causing confusion among teams.

Interest in AI and machine learning (ML) has spiked in recent times. Many people look to the new technologies as a kind of saviour for long-standing industry problems. This is especially evident in the cybersecurity field, where many hope AI could be the ‘silver bullet’ the industry needs.

Cutting through the AI hype

A global survey from Eset found 75pc of IT decision-makers believe AI will solve their cybersecurity challenges. The research involved surveying 900 IT professionals across the UK, US and Germany.

IT professionals in the US are more likely to consider AI and ML a panacea for all of their cybersecurity issues: 82pc of US respondents said so, compared with much lower figures of 67pc in the UK and 66pc in Germany.

Of the total respondents, 79pc believe AI and ML could help organisations detect and respond to threats faster, while 77pc said the technologies could help remedy the current cybersecurity skills shortage.

Chief technology officer at Eset, Juraj Malcho, said this ‘silver bullet’ attitude is worrying: “If the past decade has taught us anything, it’s that some things do not have an easy solution – especially in cyberspace where the playing field can shift in a matter of minutes.

“In today’s business environment, it would be unwise to rely solely on one technology to build a robust cyber defence.”

Malcho also noted the sizeable gap between US and European survey responses. He warned that the hype machine around AI and ML could be causing European leaders to tune out. “It’s crucial that IT decision-makers recognise that, while ML is without a doubt an important tool in the fight against cybercrime, it must be just one part of an organisation’s overall cybersecurity strategy.”

Confusion reigns

Many decision-makers recognise the importance of AI and ML to future strategy, and the majority have already incorporated ML into their plans: 89pc of German respondents and 78pc of those in the UK say their endpoint protection product uses ML to protect their organisation.

Alarmingly, only 53pc of respondents said their company completely understands the distinction between AI and ML.

Malcho noted: “The reality of cybersecurity is that true AI does not yet exist, while the hype around the novelty of ML is completely misleading – it has been around for a long time.

“As the threat landscape becomes even more complex, we cannot afford to make things more confusing for businesses.”

He called for greater clarity as current hype levels are clouding the message for those making key IT calls.

What is the difference?

AI refers to machines performing tasks without explicit pre-programming or training. ML, by contrast, relies on training computers with algorithms that find patterns in large amounts of data; the trained model then classifies new data based on the rules and examples it has already seen.

ML has been present in cybersecurity since the 1990s. It is a valuable tool in modern cybersecurity practices, particularly malware scanning.

In cybersecurity, ML usually denotes a technology built into a company’s protective solution that has been fed correctly labelled clean and malicious samples. From these, it learns the difference between good and bad, enabling it to analyse and identify most potential threats and mitigate them as they occur.
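The idea of learning from labelled clean and malicious samples can be sketched in a few lines of code. The feature values and the nearest-centroid approach below are invented for illustration; real security products use far richer features and models.

```python
# Minimal sketch of supervised classification on labelled samples.
# Features and numbers are hypothetical, chosen only to illustrate the idea.

def centroid(rows):
    """Average feature vector of a group of samples."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Hypothetical features per file: [byte entropy, no. of suspicious API calls]
clean     = [[4.1, 0], [3.8, 1], [4.5, 0]]
malicious = [[7.6, 9], [7.9, 7], [7.2, 8]]

centroids = {"clean": centroid(clean), "malicious": centroid(malicious)}

def classify(sample):
    """Label a new sample by its nearest group centroid."""
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

print(classify([7.5, 8]))  # resembles the malicious samples -> "malicious"
print(classify([4.0, 1]))  # resembles the clean samples -> "clean"
```

The model never sees rules written by hand: it generalises from the labelled examples, which is why the quality of the initial, human-verified labelling matters so much.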

There are limitations to ML, though. It still needs human verification at the initial classification stage to reduce false positives.

ML algorithms also have a narrow focus by their nature, while hackers are changing and adapting to break rules. Eset explained: “A creative cybercriminal can introduce scenarios which are completely new for ML and thereby fool the system.

“Machine learning algorithms can be misled in many ways and hackers can exploit this by creating malicious code that ML will classify as a benign object.”
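As a toy illustration of that kind of evasion, consider a (hypothetical, deliberately simplistic) detector that flags files whose byte entropy exceeds a threshold, as packed malware often does. An attacker who knows the rule can pad the same payload with low-entropy bytes and slip under the threshold:

```python
import math

# Toy detector: flags data whose Shannon byte entropy exceeds a threshold.
# The threshold and payload are invented for illustration only.

def entropy(data):
    counts = {}
    for b in data:
        counts[b] = counts.get(b, 0) + 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_malicious(data, threshold=6.0):
    return entropy(data) > threshold

payload = bytes(range(256)) * 4      # high-entropy stand-in for packed code
print(looks_malicious(payload))      # True: the detector flags it

padded = payload + b"\x00" * 5000    # attacker appends low-entropy padding
print(looks_malicious(padded))       # False: the same payload now evades detection
```

The padded file is functionally unchanged, yet it no longer matches the pattern the model learned, which is why ML detection needs to sit alongside other defensive layers rather than replace them.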

Malcho said a more strategic approach is better. “Multi-layered solutions, combined with talented and skilled people, will be the only way to stay a step ahead of the hackers as the threat landscape continues to evolve,” he concluded.

Ellen Tannam is a writer covering all manner of business and tech subjects

editorial@siliconrepublic.com