What is normal? Organizations use machine learning to ferret out data anomalies
Over time, technology can automatically raise red flags about suspicious activity
By Byron Acohido, ThirdCertainty
Machine learning has been a staple of our consumer-driven economy for some time now.
When you buy something on Amazon or watch something on Netflix or even pick up groceries at your local supermarket, the data generated by that transaction is invariably collected, stored, analyzed and acted upon.
Machines, no surprise, are perfectly suited to digesting mountains of data, observing our patterns of consumption, and creating profiles of our behaviors that help companies better market their goods and services to us.
Yet it’s only been in the past few years that machine learning, aka data mining, aka artificial intelligence, has been brought to bear on helping companies defend their business networks.
I spoke with Shehzad Merchant, chief technology officer at Gigamon, at the RSA 2017 cybersecurity conference. Gigamon is a Silicon Valley-based supplier of network visibility and traffic monitoring technology. A few takeaways:
Machines vs. humans. There is so much data flowing into business networks that figuring out what’s legit vs. malicious is a daunting task. This trend is unfolding even as the volume of breach attempts remains on a steadily rising curve. It turns out that cyber criminals, too, are using machine learning to boost their attacks. Think about everything arriving in the inboxes of an organization with 500 or 5,000 employees, add in all data repositories and all the business application repositories, plus all support services; that’s where attackers are probing and stealing.
Understanding legitimate behaviors. To catch up on the defensive side, companies can turn to machine learning, as well. Machines are suited to assembling detailed profiles of how employees, partners and third-party vendors normally access and use data on a daily basis. It’s not much different than how Amazon, Google and Facebook profile consumers’ online behaviors for commercial purposes. “You have to apply machine learning technologies because there is so much data to assimilate,” Merchant says.
Identifying suspicious behaviors. The flip side is that machines can be assigned to do the first-level triaging—seeking out abnormal behaviors. Given the volume of data handling that goes on in a normal workday, no team of humans, much less an individual security analyst, is physically capable of keeping pace. But machines can learn over time how to automatically flag events like a massive file transfer taking place at an unusual time of day and being executed by a party that normally has nothing to do with such transfers. The machine can raise a red flag—and the security analyst can be dispatched to follow up.
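The first-level triage Merchant describes can be illustrated with a minimal sketch. This is a hypothetical example, not Gigamon’s actual technology: it assumes a learned per-user baseline of file-transfer sizes and flags a transfer that is far outside the user’s norm or happens off-hours.

```python
from statistics import mean, stdev

# Hypothetical per-user baseline of recent file-transfer sizes (MB).
# In a real deployment this profile would be learned from weeks of traffic.
baseline = {
    "alice": [12, 9, 15, 11, 10, 14, 13, 8, 12, 11],
}

def is_anomalous(user, size_mb, hour, threshold=3.0):
    """Flag a transfer whose size exceeds `threshold` standard deviations
    above the user's norm, or that occurs outside business hours."""
    history = baseline.get(user)
    if not history:
        return True  # no profile yet: escalate to a human analyst
    mu, sigma = mean(history), stdev(history)
    z = (size_mb - mu) / sigma if sigma else float("inf")
    off_hours = hour < 7 or hour > 19
    return z > threshold or off_hours

# A 900 MB transfer at 3 a.m. from a user who normally moves ~12 MB:
print(is_anomalous("alice", 900, 3))   # flagged for analyst follow-up
print(is_anomalous("alice", 12, 10))   # within the user's normal profile
```

The point of the sketch is the division of labor Merchant outlines: the machine does the cheap, continuous comparison against the learned baseline, and only the flagged events reach the security analyst.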
“We’ve got to level the playing field … today, it’s machine versus humans,” Merchant says. “Organizations have to throw technologies, like machine learning, into the mix to be able to surface these threats and anomalies, so that we take out the bottlenecks.”
For a deeper dive into understanding how machine learning is being brought to bear, please listen to the accompanying podcast.
More stories related to machine learning:
Automated analysis of big data can help prioritize security alerts, neutralize threats
Machine learning picks up where traditional threat detection ends
Virtual analysts leverage human knowledge to help solve cybersecurity challenges