Artificial intelligence (AI) developed at the Massachusetts Institute of Technology (MIT) can already detect 85pc of cyberattacks and is getting smarter every day, according to the institute.
Scientists at MIT’s prestigious Computer Science and Artificial Intelligence Lab (CSAIL) have developed AI that they believe can create a line of defence against the numerous cyberattacks that have crippled government agencies, health insurers and many others.
And they have also claimed that their AI can detect attacks on networks as they happen 85pc of the time.
‘That human-machine interaction creates a beautiful, cascading effect’
– KALYAN VEERAMACHANENI, CSAIL
This is particularly important because, across the industry, attacks typically go undetected for around 100 days.
AI2, short for Artificial Intelligence Squared, looks at data to detect suspicious activity.
It does so by clustering the data into meaningful patterns and then presents its findings to human analysts who identify which events are actual attacks. AI2 then takes that feedback on board to inform its next investigation.
And the more data it analyses, the more accurate it becomes.
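The article doesn't specify AI2's detection methods, but the loop it describes — score events for abnormality, surface the most suspicious ones to a human — can be sketched with a toy outlier ranker. The z-score approach below is an illustrative stand-in, not AI2's actual algorithm:

```python
def top_outliers(events, k):
    """Rank events by summed absolute z-score across features and return
    the indices of the k most anomalous ones. A toy stand-in for AI2's
    unsupervised detectors, whose internals the article doesn't give."""
    n, d = len(events), len(events[0])
    means = [sum(e[j] for e in events) / n for j in range(d)]
    stds = [max((sum((e[j] - means[j]) ** 2 for e in events) / n) ** 0.5, 1e-9)
            for j in range(d)]
    # an event's score is how many standard deviations it sits from
    # the mean, summed over all features
    scores = [sum(abs((e[j] - means[j]) / stds[j]) for j in range(d))
              for e in events]
    return sorted(range(n), key=lambda i: scores[i], reverse=True)[:k]
```

In this sketch, the top-`k` events would be the ones handed to the human analyst for labelling.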
The researchers, led by CSAIL research scientist Kalyan Veeramachaneni, have said that the system is roughly three times better than previous benchmarks and reduces the number of false positives by a factor of five.
They said, in a paper presented at the IEEE International Conference on Big Data Security in New York City last week, that AI2 can scale to billions of log lines per day to protect networks.
An AI virtual analyst
“You can think about the system as a virtual analyst,” said Veeramachaneni, who developed AI2 with Ignacio Arnaldo, a chief data scientist at start-up PatternEx and a former CSAIL postdoctoral researcher.
“It continuously generates new models that it can refine in as little as a few hours, meaning it can improve its detection rates significantly and rapidly.”
According to MIT News, AI2’s secret weapon is that it fuses together three different unsupervised learning methods, and then shows the top events to analysts for them to label. It then builds a supervised model that it can constantly refine through what the team calls a “continuous active learning system”.
On day one of its training, AI2 picks the 200 most abnormal events and gives them to the expert. As it improves over time, it identifies more and more of the events as actual attacks, meaning that in a matter of days the analyst may only be looking at 30 or 40 events a day.
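The day-by-day loop described above — rank with an unsupervised score on day one, then let analyst labels steer an increasingly supervised ranking — can be sketched as below. The helper names, the scalar "events" and the trivial centroid model are all simplifications for illustration, not AI2's actual models:

```python
def run_day(events, labels, budget, oracle):
    """One day of a continuous active-learning loop (toy version).

    events: scalar feature values; labels: dict event -> bool (attack?);
    budget: how many events the analyst reviews today; oracle: stand-in
    for the human analyst's judgement.
    """
    attacks = [e for e, is_attack in labels.items() if is_attack]
    if attacks:
        # supervised-ish model: rank events by closeness to the
        # centroid of known attacks
        center = sum(attacks) / len(attacks)
        score = lambda e: -abs(e - center)
    else:
        # day one: no labels yet, fall back to an unsupervised
        # anomaly score (here, plain magnitude)
        score = abs
    unlabeled = [e for e in events if e not in labels]
    queue = sorted(unlabeled, key=score, reverse=True)[:budget]
    for e in queue:            # analyst labels today's queue
        labels[e] = oracle(e)
    return labels
```

Each call folds the analyst's answers back into `labels`, so the next day's ranking is informed by every prior day's feedback — the "continuous active learning" the team describes, with a much smaller daily queue than AI2's 200-event day one.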
“The more attacks the system detects, the more analyst feedback it receives, which, in turn, improves the accuracy of future predictions,” Veeramachaneni said. “That human-machine interaction creates a beautiful, cascading effect.”