How can AI help stop a phishing email scam in its tracks?

30 Jul 2018


Image: Kpatyhka/Shutterstock


While many people think of AI in lofty terms, its applications can be demonstrated in something as commonplace as email.

Artificial intelligence (AI) is a powerful tool, and its scope is almost too broad to comprehend. While many of us may think of the more high-level applications of AI, such as robot assistants, there are smaller and perhaps more practical uses for the technology.

One such application is email threat detection. Choo Kim-Isgitt, chief product officer at security firm EdgeWave, spoke to Siliconrepublic.com about the emergence of AI as a major security ally.

Kim-Isgitt explained that while AI is a massively powerful tool, it cannot accomplish everything alone. “Both AI and human threat analysis are needed to detect the most sophisticated emerging threats. AI will never be enough. The industry has not evolved fast enough to rely solely on AI.”

She noted that even with the most advanced AI and robotics development such as projects carried out at Google or Boston Dynamics, “we’ve seen AI create its own logic or lack the level of precision and decision-making to ultimately take over more complex human analysis”.

Human nuance is vital

In the cybersecurity space particularly, you are dealing with highly complex issues, and both human and AI analyses are required. “Human analysis ensures that the most clever, convincing and strategic threats are identified early and prevented. It’s a sentiment shared across the industry.” Kim-Isgitt cited an ESET white paper on machine learning (ML) that found the technology was not mature enough to be reliable as a single defensive layer.

Humans are crucial for several reasons. “We’ve seen many examples of how just relying on AI does not embed the institutional knowledge and nuances needed, and AI alone can come to unproductive or misguided conclusions,” said Kim-Isgitt.

“Additionally, machine learning in its current state has inherent limitations. It is difficult to instil institutional knowledge into machine learning; hard to fabricate all possible scenarios, samples and variables of problems.

“Sophisticated adversaries will evolve quickly and bypass detection of AI or machine-learning algorithms.”

In terms of the phishing attacks EdgeWave is presented with most often, the more basic instances are still very common. “We’re not scouring the emails that contain broken links or bad use of grammar any more. The varieties we see range from malware/ransomware emails to more common credential-seeking phishing emails, sending unsuspecting users to malicious websites.

“The sophistication level is growing and the attacks are more targeted and effective, using social engineering, slipping by more automated defences.”
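The kinds of signals automated defences check for can be illustrated with a minimal sketch. The rule names, keyword list and thresholds below are hypothetical, chosen only to show what rule-based screening of a credential-seeking phishing email might look like; production filters use far richer signal sets.

```python
import re

# Illustrative urgency vocabulary -- real systems use curated, evolving lists.
URGENCY_WORDS = {"urgent", "verify", "suspended", "immediately", "password"}

def heuristic_flags(subject: str, body: str, links: list[str]) -> list[str]:
    """Return the names of any triggered heuristic rules."""
    flags = []
    text = f"{subject} {body}".lower()
    # Pressure language is a common social-engineering tell.
    if sum(word in text for word in URGENCY_WORDS) >= 2:
        flags.append("urgency-language")
    for url in links:
        # Links pointing at a raw IP address instead of a domain.
        if re.match(r"https?://\d{1,3}(\.\d{1,3}){3}", url):
            flags.append("ip-address-link")
    return flags

flags = heuristic_flags(
    "Urgent: verify your account",
    "Your password will be suspended immediately.",
    ["http://192.168.4.2/login"],
)
```

As the quote above notes, attacks that avoid all such obvious tells are exactly the ones that slip past purely automated defences, which is where the human analysis layer comes in.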

How will ML help catch phishing scams?

The ML involved has been developed and curated over the past 10 years, involving layers of automation, workflow and intelligence. These tie data science and statistical modelling in with traditional algorithms, pattern matching, heuristics and rules, said Kim-Isgitt.
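One way such layering can work is to let deterministic rules handle the clear-cut cases while a learned model scores the grey area. The sketch below is an assumption-laden toy, not EdgeWave's actual system: a tiny naive Bayes text classifier trained on a two-message corpus, combined with a hypothetical rule-hit count.

```python
import math
from collections import Counter

def tokenize(text: str) -> list[str]:
    return text.lower().split()

class NaiveBayes:
    """Minimal naive Bayes scorer over word tokens (toy illustration)."""

    def __init__(self) -> None:
        self.counts = {"phish": Counter(), "ham": Counter()}
        self.totals = {"phish": 0, "ham": 0}

    def train(self, text: str, label: str) -> None:
        tokens = tokenize(text)
        self.counts[label].update(tokens)
        self.totals[label] += len(tokens)

    def phish_score(self, text: str) -> float:
        """Log-likelihood ratio; positive means 'looks like phishing'."""
        score = 0.0
        for tok in tokenize(text):
            # Add-one smoothing so unseen words don't zero out the score.
            p = (self.counts["phish"][tok] + 1) / (self.totals["phish"] + 1)
            h = (self.counts["ham"][tok] + 1) / (self.totals["ham"] + 1)
            score += math.log(p / h)
        return score

def classify(text: str, rule_hits: int, model: NaiveBayes) -> bool:
    # Layering: hard rules short-circuit; the model handles the grey area.
    if rule_hits >= 2:
        return True
    return model.phish_score(text) > 0.0

model = NaiveBayes()
model.train("verify your password account suspended", "phish")
model.train("meeting agenda attached see you monday", "ham")
```

A real deployment would train on large labelled corpora and feed rule outputs in as model features rather than a simple threshold, but the division of labour (rules plus statistical model plus, per the article, human review) is the same.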

While there might be some way to go yet, she is confident of AI’s future role in the world of cybersecurity. “Using AI will enhance computing performance through accelerated processing. This higher level of processing will allow for greatly improved predictive modelling to not only identify attacks before they happen but apply rapid, localised safeguards in networks and systems.

“AI will also provide better insights, intelligence and scenarios for resilience planning, ultimately creating faster results.”

Ellen Tannam is a writer covering all manner of business and tech subjects

editorial@siliconrepublic.com