What is normal? Organizations use machine learning to ferret out data anomalies

Over time, the technology can automatically raise red flags about suspicious activity

 

Machine learning has been a staple of our consumer-driven economy for some time now.

When you buy something on Amazon or watch something on Netflix or even pick up groceries at your local supermarket, the data generated by that transaction is invariably collected, stored, analyzed and acted upon.

Machines, no surprise, are perfectly suited to digesting mountains of data, observing our patterns of consumption, and creating profiles of our behaviors that help companies better market their goods and services to us.

Related podcast: Machine learning keeps malware from getting in through security cracks

Yet it’s only been in the past few years that machine learning, aka data mining, aka artificial intelligence, has been brought to bear on helping companies defend their business networks.

Shehzad Merchant, Gigamon chief technology officer

I spoke with Shehzad Merchant, chief technology officer at Gigamon, at the RSA 2017 cybersecurity conference. Gigamon is a Silicon Valley-based supplier of network visibility and traffic monitoring technology. A few takeaways:

Machines vs. humans. There is so much data flowing into business networks that figuring out what’s legit vs. malicious is a daunting task. This trend is unfolding even as the volume of breach attempts remains on a steadily rising curve. It turns out that cyber criminals, too, are using machine learning to boost their attacks. Think about everything arriving in the inboxes of an organization with 500 or 5,000 employees, add in all data repositories and all the business application repositories, plus all support services; that’s where attackers are probing and stealing.

Understanding legitimate behaviors. To catch up on the defensive side, companies can turn to machine learning, as well. Machines are suited to assembling detailed profiles of how employees, partners and third-party vendors normally access and use data on a daily basis. It’s not much different from how Amazon, Google and Facebook profile consumers’ online behaviors for commercial purposes. “You have to apply machine learning technologies because there is so much data to assimilate,” Merchant says.
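To make the idea concrete, here is a minimal sketch of what "profiling normal behavior" can look like in practice. It is a hypothetical illustration, not Gigamon's actual method: it simply aggregates each user's daily data-access volume into a mean and standard deviation, the kind of statistical baseline an anomaly detector could later compare new activity against.

```python
from collections import defaultdict
from statistics import mean, stdev

def build_baselines(events):
    """Build a simple per-user baseline from (user, day, bytes_accessed) records."""
    per_user_days = defaultdict(lambda: defaultdict(int))
    for user, day, nbytes in events:
        per_user_days[user][day] += nbytes  # total bytes each user touched per day

    baselines = {}
    for user, days in per_user_days.items():
        totals = list(days.values())
        baselines[user] = {
            "mean_bytes": mean(totals),
            "std_bytes": stdev(totals) if len(totals) > 1 else 0.0,
        }
    return baselines

# Toy access log: three ordinary days for one user
events = [
    ("alice", "2017-02-13", 120_000),
    ("alice", "2017-02-14", 130_000),
    ("alice", "2017-02-15", 110_000),
]
print(build_baselines(events)["alice"]["mean_bytes"])  # 120000
```

A production system would profile far more dimensions (applications touched, hosts contacted, times of day), but the principle is the same: summarize each actor's routine so departures from it become measurable.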

Identifying suspicious behaviors. The flip side is that machines can be assigned to do the first-level triaging—seeking out abnormal behaviors. Given the volume of data handling that goes on in a normal workday, no team of humans, much less an individual security analyst, is physically capable of keeping pace. But machines can learn over time how to automatically flag events like a massive file transfer taking place at an unusual time of day and being executed by a party that normally has nothing to do with such transfers. The machine can raise a red flag—and the security analyst can be dispatched to follow up.
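The file-transfer example above can be sketched as a simple rule: flag a transfer if its size sits far outside the party's historical norm, or if it happens outside normal working hours. This is an illustrative toy (the threshold of three standard deviations and the 8am–7pm window are assumptions), not any vendor's detection logic.

```python
from statistics import mean, stdev

def flag_transfer(history, size, hour, usual_hours=range(8, 19)):
    """Return True if a transfer looks suspicious relative to past behavior.

    history -- past transfer sizes for this user (e.g., in MB)
    size    -- the new transfer's size
    hour    -- hour of day (0-23) the transfer occurred
    """
    mu, sigma = mean(history), stdev(history)
    oversized = sigma > 0 and abs(size - mu) > 3 * sigma  # far outside the norm
    off_hours = hour not in usual_hours                   # unusual time of day
    return oversized or off_hours

history = [50, 55, 48, 52, 51]                    # routine transfers, in MB
print(flag_transfer(history, size=500, hour=3))   # True: huge transfer at 3am
print(flag_transfer(history, size=50, hour=10))   # False: ordinary activity
```

Real systems learn these thresholds per user rather than hard-coding them, which is exactly why machine learning beats static rules at this first-level triage.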

“We’ve got to level the playing field … today, it’s machine versus humans,” Merchant says. “Organizations have to throw technologies, like machine learning, into the mix, to be able to surface these threats and anomalies, so that we take out the bottlenecks.”

For a deeper dive into understanding how machine learning is being brought to bear, please listen to the accompanying podcast.

More stories related to machine learning:
Automated analysis of big data can help prioritize security alerts, neutralize threats
Machine learning picks up where traditional threat detection ends
Virtual analysts leverage human knowledge to help solve cybersecurity challenges