For the cybersecurity industry, it looks like the AI revolution is here to stay

Despite privacy tradeoffs, machine learning is becoming vital to solving complex data issues


Artificial intelligence and one of its applications, machine learning, have become cybersecurity buzzwords. Startups and established vendors alike are looking to AI to solve complex problems that humans can’t, and many see it as the answer to industry challenges.

But throughout its 60-year history, AI has had several peaks and valleys. Does it have staying power this time?

“Everyone says this time around is different,” says Anand Rao, innovation lead for PricewaterhouseCoopers’ analytics group. “Computing has increased at an exponential rate and some of the things we couldn’t do even three or four years ago are now feasible.”

Related podcast: Organizations use machine learning to ferret out data anomalies

The current upswing is different in two other ways, Rao adds. AI developments are taking advantage of the open-source approach to algorithms and the breakthroughs in other technologies like big data analytics.

Most embrace AI

In a recent survey of 2,500 consumers and business decision-makers, PwC found that 63 percent of consumers believed artificial intelligence was important for helping solve complex problems that “plague modern societies.” Additionally, 68 percent of respondents felt it was important to use AI to help solve cybersecurity and privacy issues.

Rao says that there has been more emphasis on using AI in the consumer world, but the enterprise world is catching up.

“One challenge that I think artificial intelligence is going to help solve in enterprise security is the fact that the amount of data that a security analyst is faced with on a day-to-day basis is no longer a problem that can be solved by humans alone,” says Matt Rodgers, head of security strategy at E8 Security.

Like many other vendors, E8 Security is turning to AI for the solution. The company uses machine learning to analyze data from multiple sources, including individual devices and network traffic, to automate learning and find anomalies in behaviors and malicious activity.
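E8 Security’s actual models aren’t public, but the general technique described here, unsupervised anomaly detection over behavioral features drawn from devices and network traffic, can be sketched in a few lines. Everything below is illustrative: the feature names and data are hypothetical, and an Isolation Forest stands in for whatever E8 really uses.

```python
# Minimal sketch of unsupervised anomaly detection over per-host
# behavioral features. NOT E8 Security's implementation; all feature
# names and data are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-host features aggregated from device logs and
# network traffic: logins per hour, bytes sent, distinct destinations.
normal = rng.normal(loc=[5, 2e6, 20], scale=[2, 5e5, 5], size=(1000, 3))
suspect = np.array([[40, 9e7, 400]])  # an exfiltration-like burst
X = np.vstack([normal, suspect])

# The Isolation Forest learns what "typical" behavior looks like and
# scores how easily each observation can be isolated from the rest.
model = IsolationForest(contamination=0.001, random_state=0).fit(X)
flags = model.predict(X)  # -1 marks outliers

print("flagged rows:", np.where(flags == -1)[0])  # the suspect host
```

The point of an unsupervised approach like this is that no one has to label attacks in advance; the model flags whatever deviates from the baseline it learned, which is exactly the kind of triage a human analyst can’t do alone at scale.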

Matt Rodgers, E8 Security head of security strategy

Not only are some of those complexities beyond human capabilities, but machines also allow for a consistent environment, Rodgers says.

“The nice thing about the (AI) system is that it doesn’t have a good day or a bad day,” he says.

Built-in bias

One of the risks of using artificial intelligence is potential bias, Rao notes. Machines are trained based on specific sets of data and characteristics, which may not apply in the next context. For example, hackers trying to breach a municipal system may not have the same motivations as they would for breaching a bank, so their behavior would be different as well.
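Rao’s point can be made concrete with a toy experiment: train a detector on attacker behavior from one environment, then score it against an environment where attackers behave differently. The sketch below uses synthetic data and a plain logistic regression; it illustrates distribution shift in general, not any vendor’s pipeline.

```python
# Toy illustration of context shift: a detector trained on one
# environment's attack behavior degrades badly in another.
# Synthetic data only; "bank" and "municipal" are hypothetical labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

def make_context(attack_mean, n=2000):
    # Benign traffic clusters around 0; attacks around attack_mean.
    benign = rng.normal(0.0, 1.0, size=(n, 4))
    attack = rng.normal(attack_mean, 1.0, size=(n // 10, 4))
    X = np.vstack([benign, attack])
    y = np.array([0] * n + [1] * (n // 10))
    return X, y

# Train where attackers behave one way ("bank" context)...
X_bank, y_bank = make_context(attack_mean=3.0)
clf = LogisticRegression().fit(X_bank, y_bank)

# ...then deploy where motives, and therefore behavior, differ.
X_muni, y_muni = make_context(attack_mean=-2.5)

print("attack recall, training context:",
      recall_score(y_bank, clf.predict(X_bank)))  # near 1.0
print("attack recall, shifted context :",
      recall_score(y_muni, clf.predict(X_muni)))  # collapses toward 0
```

The model isn’t “wrong” so much as biased toward the behavior it was shown, which is why checking how a system performs outside its training context matters.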

Anand Rao, innovation lead for PwC’s analytics group

“People are looking at multiple solutions to build AI that’s responsible and has trust and transparency built into the system so it can check for biases,” Rao says.

Scott Zoldi, chief analytics officer for FICO, says there’s a lot of excitement indicating AI is at the height of its cycle, but the race to market will lead to some failures.

“AI is only as good as its masters that retrieve the data and construct the problems,” he says.

FICO has applied artificial intelligence to solve problems successfully for a long time. Its Falcon fraud-management platform has been used in the financial industry around the globe for 25 years, according to Zoldi, who’s worked at FICO for more than 17 years. And the FICO credit score is, of course, familiar to anyone who’s ever applied for credit.

Now, FICO is applying some of the same ideas to cybersecurity, offering both a cyber-analytics solution and a FICO-like score that measures an enterprise’s cybersecurity readiness.

Powerful analytic ability

Using “self-calibrating analytics” and machine learning, the FICO cybersecurity platform monitors activity across the network, in real time, to find anomalies and detect threats.
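FICO hasn’t published the platform’s internals, but “self-calibrating” suggests a detector that maintains its own baseline from the stream it watches, rather than relying on fixed thresholds. A minimal sketch of that idea, using Welford’s online mean/variance update, might look like this (illustrative only, not Falcon or FICO code):

```python
# Sketch of a "self-calibrating" streaming detector: it keeps running
# estimates of mean and variance (Welford's algorithm) and flags
# readings that deviate strongly from its own learned baseline.
class SelfCalibratingDetector:
    def __init__(self, threshold_sigmas: float = 4.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations
        self.k = threshold_sigmas

    def update(self, x: float) -> bool:
        """Return True if x is anomalous vs. the calibrated baseline."""
        anomalous = False
        if self.n > 30:  # wait until a minimal baseline exists
            std = (self.m2 / (self.n - 1)) ** 0.5
            anomalous = std > 0 and abs(x - self.mean) > self.k * std
        # Recalibrate with the new observation (Welford update).
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous

# Example: steady traffic volume, then a sudden spike.
det = SelfCalibratingDetector()
stream = [100 + (i % 7) for i in range(200)] + [900]
alerts = [i for i, v in enumerate(stream) if det.update(v)]
print("alerts at positions:", alerts)  # flags the spike at index 200
```

The appeal of this design is that the threshold tracks whatever “normal” currently looks like on a given network, so the same detector can run in real time across very different environments without hand-tuning.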

Scott Zoldi, FICO chief analytics officer

“What makes AI really powerful is that it learns relationships between features much better than other analytic techniques,” Zoldi says. “It finds all these complicated relationships that probably are not readily apparent to most experts.”

AI is not going to replace the experts any time soon. It’s a symbiotic relationship, and the machines need the humans as much as the humans need the machines.

For one, Rodgers notes, machines can’t decipher differences in intent and don’t understand an organization’s business goals, and without that context, AI can’t make more definitive decisions on its own.

One concern about artificial intelligence is its reliance on big data and all the privacy implications that stem from it. One example is the definition of personally identifiable information: Should daily behavioral patterns be considered PII?

“Even if it’s stripped of all the things that we consider PII today, should those patterns that (individuals) make on a day-to-day basis be considered personally identifiable information?” Rodgers says. “Ideas like that are going to have to be considered.”
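A small illustration of why Rodgers raises this: even after names and account IDs are stripped, a person’s daily activity pattern can act as a fingerprint. The sketch below re-identifies an “anonymized” day of activity by matching it against known users’ baselines; all names and data are synthetic.

```python
# Hedged sketch of behavioral patterns acting like PII: an hourly
# activity profile with identifiers stripped is matched back to a
# known user by nearest neighbor. Synthetic data; names hypothetical.
import numpy as np

rng = np.random.default_rng(7)

# Per-user baseline: activity counts for each of 24 hours of the day.
users = ["alice", "bob", "carol"]
baselines = {u: rng.poisson(lam=rng.uniform(1, 10, 24)) for u in users}

# An "anonymized" day of activity, actually bob's, with small noise.
anon_day = baselines["bob"] + rng.integers(-1, 2, 24)

# Nearest-neighbor match by Euclidean distance re-identifies the user.
match = min(baselines, key=lambda u: np.linalg.norm(baselines[u] - anon_day))
print("re-identified as:", match)  # almost certainly "bob"
```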

More stories related to artificial intelligence and cybersecurity:
Machine learning keeps malware from getting in through security cracks
Machine learning combined with behavioral analytics can make big impact on security
Automated analysis of big data can help prioritize security alerts, neutralize threats