Virtual analysts leverage human knowledge to help solve cybersecurity challenges

In the face of too few workers and too much data, artificial intelligence automates the human decision-making process to respond to threats

 

Machine learning—an approach to artificial intelligence—has become a buzzword in the cybersecurity industry. Many vendors are turning to machine learning for processing big data and creating threat intelligence.

The challenge with machine learning, according to DarkLight Cyber chief technology officer Ryan Hohimer, is that it relies on statistical models and algorithms based on past data to identify “bad behavior.”

Related story: Vectra Networks’ new approach to machine learning

“Bad behavior, for all intents and purposes, is statistically insignificant,” he says. “When somebody does something bad, there isn’t enough data—we hope—to go back and train a system.”

Humans, on the other hand—security analysts in particular—know what bad behavior looks like and use deductive reasoning to identify deviations from the baseline. So DarkLight Cyber is using a reasoning engine to harness that human knowledge and automate threat-response tasks.

Solving a two-pronged problem

The startup, part of the holding company Champion Technology Co. Inc., is trying to solve two problems with its next-generation analytics and automation platform. One is the shortage of cybersecurity analysts—by creating, in a sense, virtual analysts.

The other problem is the unmanageable number of false positives that analysts receive daily from sources that include security appliances and security information and event management (SIEM) tools.

“We take the raw feeds from the appliances, cross-correlate them with contextual information and eliminate the false positives,” says CEO John Shearer, who co-founded the company with Hohimer. “It gives [the analysts] an automated, intelligent way to reduce the false positives.”
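The cross-correlation Shearer describes can be pictured as a filter that checks each raw alert against what is already known about the hosts involved. The sketch below is purely illustrative—the field names, roles, and filtering rules are assumptions, not DarkLight Cyber’s actual product logic.

```python
# Hypothetical sketch of filtering false positives by cross-correlating raw
# appliance alerts with contextual information about the hosts involved.
# All names and rules here are illustrative assumptions.

def filter_false_positives(alerts, context):
    """Keep only alerts that contextual knowledge cannot explain away."""
    confirmed = []
    for alert in alerts:
        host = context.get(alert["host"], {})
        # A vulnerability scanner probing the network looks like an attack
        # to a raw sensor; context identifies the traffic as benign.
        if host.get("role") == "vulnerability_scanner":
            continue
        # An exploit aimed at a service the host does not run is noise.
        if alert.get("target_service") not in host.get("services", []):
            continue
        confirmed.append(alert)
    return confirmed

alerts = [
    {"host": "10.0.0.5", "target_service": "ssh", "signature": "brute-force"},
    {"host": "10.0.0.9", "target_service": "ssh", "signature": "brute-force"},
]
context = {
    "10.0.0.5": {"role": "vulnerability_scanner", "services": ["ssh"]},
    "10.0.0.9": {"role": "web_server", "services": ["ssh", "http"]},
}
print(filter_false_positives(alerts, context))  # only the 10.0.0.9 alert survives
```

In practice the “context” would come from asset inventories, identity systems, and threat intelligence rather than a hand-built dictionary, but the principle is the same: an alert that context can explain never reaches the analyst.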

Ryan Hohimer, DarkLight Cyber co-founder and CTO

Hohimer and his team developed the reasoning engine for modeling normal and abnormal behavior, with the focus on “persons of interest,” while working at the U.S. Department of Energy’s Pacific Northwest National Laboratory in Richland, Washington. Shearer, an entrepreneur-in-residence at Pepperdine University, was looking for disruptive innovations that were invented at national labs and could be commercialized.

Working with a group of Pepperdine business students and alumni, Shearer’s team identified about 20 major uses for the underlying technology.

“We decided to take the one that’s the most important, has the highest commercial value, and is of national interest, and we chose cybersecurity,” he says.

The holding company, Champion Technology Co. Inc., was created in 2014 and collaborated with PNNL and nonprofit IP monetization organization Early X Foundation to transfer the intellectual property. Because it was a collaborative process, negotiating the license took less than a month, Shearer says.

John Shearer, DarkLight Cyber co-founder and CEO

“Then we had to take the core system from a raw product and get it ready for commercial deployment,” he says.

Funded by about $2 million from seed investors who were involved with the technology-transfer process, DarkLight Cyber will be looking for VC funding as it scales. With only a few months in the commercial market, the company’s current focus is finding customers for proof-of-concept deployments.

Besides looking for reference accounts, Shearer says the focus is on finding thought leaders who have evolved to understand that “adding more sensors and data feeds into their environments isn’t going to solve their problem.”

Parsing complex technology

Operating in an extremely competitive space, DarkLight Cyber found its main challenge in trying to explain its technology.

“We’re so unique in our approach to solving this problem that making the world aware of our differentiators—being artificial-intelligence-centric but not (using) machine learning—is a significant challenge,” Hohimer says.

It doesn’t help that the technology is also quite complex.

The reasoning engine operates with so-called programmable reasoning objects, or PROs—a series of software modules. The PROs examine the raw data, passing the information among themselves. This chain—called a belief propagation network in computer science—then escalates the information to agents higher in the chain. These higher-level agents use the deductive reasoning that the human analysts have trained them to use in order to identify threats.
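The chain of modules described above can be sketched in miniature: each low-level module adds its own partial judgment to a shared set of facts and passes the enriched result along, until a higher-level agent applies an analyst-authored deductive rule. This is a toy illustration under assumed names and rules, not DarkLight’s actual architecture or code.

```python
# Illustrative sketch of a chain of "programmable reasoning objects": each
# module interprets incoming facts, contributes its own conclusions, and
# passes everything up the chain to a higher-level deductive agent.
# All module names and rules are assumptions for illustration.

class ReasoningObject:
    def __init__(self, name, interpret):
        self.name = name
        self.interpret = interpret  # function: facts -> new facts

    def process(self, facts):
        facts.update(self.interpret(facts))
        return facts

# Low-level modules each contribute a partial judgment.
def login_module(facts):
    return {"odd_hours": facts["login_hour"] < 6}

def volume_module(facts):
    return {"bulk_transfer": facts["bytes_out"] > 10_000_000}

# Higher-level agent: a deductive rule supplied by a human analyst.
def exfiltration_agent(facts):
    return {"possible_exfiltration": facts["odd_hours"] and facts["bulk_transfer"]}

chain = [
    ReasoningObject("login", login_module),
    ReasoningObject("volume", volume_module),
    ReasoningObject("exfil", exfiltration_agent),
]

facts = {"login_hour": 3, "bytes_out": 50_000_000}
for pro in chain:
    facts = pro.process(facts)

print(facts["possible_exfiltration"])  # True: odd-hours login plus bulk transfer
```

The point of the structure is that no single module decides anything on its own; conclusions emerge as information propagates up the chain, mirroring how junior findings escalate to a senior analyst.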

To explain it, Hohimer uses the analogy of a group of analysts sitting at a conference table trying to make sense of a massive amount of data. Each uses his or her own skills and expertise for the logic of interpreting that data. They collaborate to reach a conclusion.

“What we do is essentially create these virtual analysts that communicate around this conference table and leverage each other’s logic, and that’s how the computer codes are connected,” he says.

“We virtualize the cognitive process of analysts,” he says. “Instead of looking at past data for inductive learning, we use the deductive reasoning of subject matter experts, and we baseline normal behavior on that knowledge.”
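The contrast Hohimer draws is between learning a baseline from historical data and writing the baseline down as expert rules. A minimal sketch of the deductive side, assuming expert knowledge can be expressed as explicit rules of normal behavior (the rules and field names below are invented for illustration):

```python
# A deductive detector needs no training data: the baseline IS the analyst's
# stated knowledge of normal behavior. Any event violating a rule is flagged.
# Rules and fields here are illustrative assumptions, not real product rules.

def deductive_detector(event, baseline_rules):
    """Return the names of every expert rule this event violates."""
    return [name for name, rule_ok in baseline_rules.items() if not rule_ok(event)]

# Analyst-authored rules defining normal behavior.
baseline_rules = {
    "workstations never talk to databases directly":
        lambda e: not (e["src_role"] == "workstation" and e["dst_role"] == "database"),
    "admin logins come from the management subnet":
        lambda e: e["user"] != "admin" or e["src_subnet"] == "10.1.0.0/24",
}

event = {
    "src_role": "workstation",
    "dst_role": "database",
    "user": "admin",
    "src_subnet": "192.168.4.0/24",
}
violations = deductive_detector(event, baseline_rules)
print(violations)  # both rules are violated: the event deviates from baseline
```

Unlike a statistical model, this detector can flag a behavior it has never seen before, provided an expert anticipated the rule—which is exactly the trade-off the article describes.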

The deductive reasoning inputs can be made either by DarkLight Cyber or the client’s own cybersecurity team. The engine also uses external expertise from sources such as the US-CERT (U.S. Computer Emergency Readiness Team) Insider Threat Center and standards bodies, along with external threat intelligence and internal context and threat intelligence.

“We’re creating an environment to normalize the internal and external sources of information and systems for data fusion,” Shearer says. “To fight this war, you have to have systems that think like humans and can correlate all this information.”

More stories related to machine learning:
Machine learning combined with behavioral analytics can make big impact on security
Machine learning keeps malware from getting in through security cracks
Machine learning helps organizations strengthen security, identify inside threats