Virtual analysts leverage human knowledge to help solve cybersecurity challenges

In the face of too few workers and too much data, artificial intelligence automates the human decision-making process to respond to threats


Machine learning—an approach to artificial intelligence—has become a buzzword in the cybersecurity industry. Many vendors are turning to machine learning for processing big data and creating threat intelligence.

The challenge with machine learning, according to DarkLight Cyber chief technology officer Ryan Hohimer, is that it relies on statistical models and algorithms based on past data to identify “bad behavior.”

Related story: Vectra Networks’ new approach to machine learning

“Bad behavior, for all intents and purposes, is statistically insignificant,” he says. “When somebody does something bad, there isn’t enough data—we hope—to go back and train a system.”

On the other hand, humans, and security analysts in particular, know what bad behavior looks like and use deductive reasoning to identify deviations from the baseline. So DarkLight Cyber is using a reasoning engine to harness that human knowledge and automate threat-response tasks.

Solving a two-pronged problem

The startup, part of the holding company Champion Technology Co. Inc., is trying to solve two problems with its next-generation analytics and automation platform. One is the shortage of cybersecurity analysts, which it addresses by creating, in a sense, virtual analysts.

The other problem is the unmanageable number of false positives that analysts receive daily from sources that include security appliances and security information and event management (SIEM) tools.

“We take the raw feeds from the appliances, cross-correlate them with contextual information and eliminate the false positives,” says CEO John Shearer, who co-founded the company with Hohimer. “It gives [the analysts] an automated, intelligent way to reduce the false positives.”
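The cross-correlation Shearer describes can be illustrated with a minimal sketch. This is not DarkLight Cyber's actual implementation; the `Alert` class, the `CONTEXT` table, and the suppression rule are all hypothetical, standing in for the idea of checking raw appliance alerts against contextual knowledge (here, patch records) to discard false positives.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str       # e.g., a SIEM or a security appliance
    host: str
    signature: str    # the vulnerability the alert fired on

# Hypothetical contextual knowledge about the environment; in practice this
# would come from asset inventories, patch records, and threat intelligence.
CONTEXT = {
    "web-01": {"patched": {"CVE-2014-0160"}},
    "db-02": {"patched": set()},
}

def is_false_positive(alert: Alert) -> bool:
    """Suppress an alert when context shows the target host is not vulnerable."""
    host_ctx = CONTEXT.get(alert.host, {})
    return alert.signature in host_ctx.get("patched", set())

feed = [
    Alert("SIEM", "web-01", "CVE-2014-0160"),   # host already patched -> noise
    Alert("IDS", "db-02", "CVE-2014-0160"),     # host unpatched -> keep
]
survivors = [a for a in feed if not is_false_positive(a)]
print([a.host for a in survivors])  # ['db-02']
```

The point of the sketch is the shape of the pipeline: raw feeds in, contextual lookup per alert, reduced alert stream out for the analyst.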

Ryan Hohimer, DarkLight Cyber co-founder and CTO

Hohimer and his team developed the reasoning engine for modeling normal and abnormal behavior, with the focus on “persons of interest,” while working at the U.S. Department of Energy’s Pacific Northwest National Laboratory in Richland, Washington. Shearer, an entrepreneur-in-residence at Pepperdine University, was looking for disruptive innovations that were invented at national labs and could be commercialized.

Working with a group of Pepperdine business students and alumni, Shearer’s team identified about 20 major uses for the underlying technology.

“We decided to take the one that’s the most important, has the highest commercial value, and is of national interest, and we chose cybersecurity,” he says.

The holding company, Champion Technology Co. Inc., was created in 2014 and collaborated with PNNL and nonprofit IP monetization organization Early X Foundation to transfer the intellectual property. Because it was a collaborative process, negotiating the license took less than a month, Shearer says.

John Shearer, DarkLight Cyber co-founder and CEO

“Then we had to take the core system from a raw product and get it ready for commercial deployment,” he says.

Funded by about $2 million from seed investors who were involved in the technology-transfer process, DarkLight Cyber will look for VC funding as it scales. Only a few months into the commercial market, the company is currently focused on finding proof-of-concept customers.

Besides looking for reference accounts, Shearer says the focus is on finding thought leaders who have evolved to understand that “adding more sensors and data feeds into their environments isn’t going to solve their problem.”

Parsing complex technology

Operating in an extremely competitive space, DarkLight Cyber found its main challenge in trying to explain its technology.

“We’re so unique in our approach to solving this problem that making the world aware of our differentiators—being artificial-intelligence-centric but not (using) machine learning—is a significant challenge,” Hohimer says.

It doesn’t help that the technology is also quite complex.

The reasoning engine operates with so-called programmable reasoning objects, or PROs—a series of software modules. The PROs examine the raw data, passing the information among themselves. This chain—called a belief propagation network in computer science—then escalates the information to agents higher in the chain. These higher-level agents use the deductive reasoning that the human analysts have trained them to use in order to identify threats.
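A rough sketch of that chain, under stated assumptions: each PRO is modeled as a function that enriches a shared "belief" record and passes it along, with a higher-level agent applying an analyst-supplied deductive rule at the end. The PRO names, the rule, and the thresholds here are invented for illustration and do not reflect DarkLight Cyber's actual modules.

```python
from typing import Callable

# A programmable reasoning object (PRO) is modeled as a function that
# enriches a shared "belief" dict and hands it to the next PRO in the chain.
PRO = Callable[[dict], dict]

def port_scan_detector(belief: dict) -> dict:
    # Low-level PRO: flag hosts contacting many distinct ports.
    belief["scanning"] = len(belief["ports_contacted"]) > 100
    return belief

def exfil_detector(belief: dict) -> dict:
    # Low-level PRO: flag unusually large outbound transfers.
    belief["exfil"] = belief["bytes_out"] > 10**9
    return belief

def threat_assessor(belief: dict) -> dict:
    # Higher-level agent: a deductive rule supplied by a human analyst --
    # scanning followed by bulk egress is treated as a likely intrusion.
    belief["threat"] = belief["scanning"] and belief["exfil"]
    return belief

def propagate(belief: dict, chain: list) -> dict:
    """Pass the belief through each PRO in order, escalating up the chain."""
    for pro in chain:
        belief = pro(belief)
    return belief

observation = {"ports_contacted": set(range(200)), "bytes_out": 2 * 10**9}
verdict = propagate(observation,
                    [port_scan_detector, exfil_detector, threat_assessor])
print(verdict["threat"])  # True
```

The escalation structure is what matters: low-level modules annotate the data, and higher-level agents apply human-authored deductive logic to the accumulated annotations.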

To explain it, Hohimer uses the analogy of a group of analysts sitting at a conference table, trying to make sense of a massive amount of data. Each uses his or her own skills and expertise to interpret that data, and they collaborate to reach a conclusion.

“What we do is essentially create these virtual analysts that communicate around this conference table and leverage each other’s logic, and that’s how the computer codes are connected,” he says.

“We virtualize the cognitive process of analysts,” he says. “Instead of looking at past data for inductive learning, we use the deductive reasoning of subject matter experts, and we baseline normal behavior on that knowledge.”
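The contrast Hohimer draws can be shown in miniature. Instead of inducing a statistical baseline from historical data, an expert's rule is encoded directly as a deductive check. The rule below (privileged accounts never log in during early-morning hours) is a hypothetical example of such expert knowledge, not a rule from the product.

```python
def is_abnormal(event: dict) -> bool:
    """Deductive SME rule: admin logins between 01:00 and 05:00 are abnormal,
    regardless of how often (or rarely) they appeared in past data."""
    return event["account_type"] == "admin" and 1 <= event["hour"] < 5

print(is_abnormal({"account_type": "admin", "hour": 3}))  # True
print(is_abnormal({"account_type": "user", "hour": 3}))   # False
```

Because the rule comes from reasoning rather than training data, it works even for behavior that is too rare to learn statistically, which is Hohimer's core argument.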

The deductive reasoning inputs can be supplied either by DarkLight Cyber or by the client’s own cybersecurity team. The engine also draws on external expertise from sources such as the US-CERT (U.S. Computer Emergency Readiness Team) Insider Threat Center and standards bodies, along with external threat intelligence and internal context and threat intelligence.

“We’re creating an environment to normalize the internal and external sources of information and systems for data fusion,” Shearer says. “To fight this war, you have to have systems that think like humans and can correlate all this information.”

More stories related to machine learning:
Machine learning combined with behavioral analytics can make big impact on security
Machine learning keeps malware from getting in through security cracks
Machine learning helps organizations strengthen security, identify inside threats