A group of MIT researchers has sketched out a way to close a gap in cybersecurity that exists between human and machine. Human-written rules, meant to alert the system to an attack, don't fire unless an attack exactly matches one of them. Machine-learning approaches typically rely on anomaly detection, which tends to trigger false alarms, and analysts quickly learn to distrust the system.
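To see that gap in miniature, consider a toy sketch in Python. The event fields, rule, and thresholds here are hypothetical, invented purely for illustration, not taken from the researchers' work:

```python
# Toy illustration of the two failure modes (hypothetical event fields
# and thresholds, not the researchers' code).

def rule_based_alert(event: dict) -> bool:
    """A signature rule fires only on an exact match, so a slightly
    altered attack slips through."""
    return event.get("path") == "/admin" and event.get("status") == 401

def anomaly_alert(event: dict, mean_bytes: float, std_bytes: float) -> bool:
    """A naive anomaly detector flags anything far from 'normal' traffic,
    so a rare but benign event raises a false alarm."""
    z_score = (event["bytes"] - mean_bytes) / std_bytes
    return abs(z_score) > 3.0

events = [
    {"path": "/admin%2f", "status": 401, "bytes": 512},      # evasive attack: the rule misses it
    {"path": "/backup", "status": 200, "bytes": 9_000_000},  # benign outlier: the detector flags it
]
for event in events:
    print(event["path"], rule_based_alert(event),
          anomaly_alert(event, mean_bytes=4_000.0, std_bytes=2_000.0))
```

The encoded attack sails past the rule, while the oversized but harmless backup trips the anomaly detector: each approach fails in exactly the way the other is meant to prevent.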

Combine these two forces, man and machine, and that's when magic can happen, according to a group of researchers out of MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). In a recently published paper, they propose a new method for detecting cyberattacks that pairs artificial intelligence with constant input from human security experts. According to their tests, the approach can pinpoint 85 percent of cyberattacks and cut the number of false positives by a factor of five.

They call it AI2.

“You can think about the system as a virtual analyst,” CSAIL research scientist Kalyan Veeramachaneni said in a statement. “It continuously generates new models that it can refine in as little as a few hours, meaning it can improve its detection rates significantly and rapidly.”

Veeramachaneni developed AI2 with Ignacio Arnaldo, a former CSAIL postdoc who has since become chief data scientist at PatternEx, a Silicon Valley information security AI firm. How does their cybersecurity process work? Using three different unsupervised machine-learning methods, the system filters incoming events and shows analysts the most suspicious ones so they can investigate further. It then folds the analysts' verdicts into a model that keeps refining itself, creating a "continuous active learning system."
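The paper's pipeline is more elaborate, but the core loop can be sketched in a few lines of Python. In this hedged sketch, scikit-learn's IsolationForest stands in for the system's outlier detectors, a random forest for its supervised model, and a coin flip for the analyst; none of these particulars come from the original paper:

```python
# Minimal sketch of the human-in-the-loop cycle described above; the
# models, feature matrix, and simulated analyst are stand-ins, not the
# authors' implementation.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 20))   # stand-in for featurized log data
labels = {}                         # analyst verdicts: event index -> 0/1

# Day one: unsupervised outlier detection ranks every event.
detector = IsolationForest(random_state=0).fit(X)
scores = -detector.score_samples(X)  # higher = more anomalous

for day in range(3):  # each pass is roughly one day of triage
    # 1. Show the analyst only the top-ranked events.
    for i in np.argsort(scores)[-200:]:
        if i not in labels:
            labels[i] = int(rng.random() < 0.1)  # stand-in for a human verdict

    # 2. Fold the accumulated verdicts into a supervised model.
    idx = np.array(list(labels))
    clf = RandomForestClassifier(random_state=0).fit(X[idx], [labels[i] for i in idx])

    # 3. Let its attack probabilities re-rank the next day's queue.
    scores = clf.predict_proba(X)[:, 1]
```

The point is the cycle rather than the particular models: each day's labels reshape the ranking, so the analyst's limited attention is spent on progressively better candidates.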

“The more attacks the system detects, the more analyst feedback it receives, which, in turn, improves the accuracy of future predictions,” Veeramachaneni stated. “That human-machine interaction creates a beautiful, cascading effect.”

Its impact on the world of cybersecurity could be significant, if it lives up to the accuracy it has shown in the lab.

“This paper brings together the strengths of analyst intuition and machine learning, and ultimately drives down both false positives and false negatives,” Nitesh Chawla, professor of Computer Science at the University of Notre Dame, said in a statement. “This research has the potential to become a line of defense against attacks such as fraud, service abuse and account takeover, which are major challenges faced by consumer-facing systems.”
