Look to insecure software as the root cause of most major hacks

More high-level discussion and pressure on developers would mean fewer, less disastrous attacks


After the recent WannaCry ransomware attack crippled systems worldwide, people began asking how it happened and what to do about it. Endpoint security, improved patch management, better business continuity planning, and a strong incident response capability will all be held up as critical to preventing similar attacks in the future.

But what was the root cause of the attack?

It may seem that the problem is organizations failing to update (i.e., patch) their systems in time, leaving them vulnerable to malware like WannaCry. But why did they need a patch in the first place?

Dig deeper, though, and the reason becomes clear. The technical details of the Microsoft patch point to a security flaw in the way “SMBv1 handles specially crafted requests.” In other words, the root cause is a software vulnerability.

Software vulnerabilities are rampant

Insecure software is to blame, as it almost always is in major hacks. Need proof? Take a look at the Metasploit exploit database. Metasploit is one of the most widely used exploit frameworks in the world, and its database is full of modules that take advantage of software vulnerabilities labeled “command execution,” “code execution” and “buffer overflow.” These are all technical names for code flaws that allow an attacker to compromise a remote system, including by planting malware like WannaCry.
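To make those terms concrete, here is a minimal, hypothetical C sketch of the kind of flaw a “buffer overflow” describes. The request handler and its inputs are invented for illustration; they are not taken from SMBv1 or any real product.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical request handler, for illustration only.
 * An attacker controls both `data` and `len`. */
void handle_request(const char *data, size_t len)
{
    char name[64];          /* fixed-size stack buffer */

    /* BUG: nothing checks that `len` fits within `name`.
     * A request longer than 64 bytes writes past the end of the
     * buffer into adjacent stack memory -- the raw material of a
     * remote code execution exploit. */
    memcpy(name, data, len);

    printf("handling request for: %s\n", name);
}

int main(void)
{
    /* A benign request works; an oversized one corrupts memory. */
    handle_request("printer-queue", strlen("printer-queue") + 1);
    return 0;
}
```

An exploit module built around a flaw like this is essentially a recipe for choosing `data` and `len` so that the overwritten memory hands control to the attacker.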

Related podcast: Security by design: Embed protection during software development

Despite the fact that insecure software is the root issue, it is conspicuously absent from conversations about next steps after an attack. Insecure code, whether built in-house or by third parties, is not mentioned in the National Institute of Standards and Technology (NIST) cybersecurity framework, gets very little attention in the industry standard ISO 27001, and wasn’t featured as a topic in the recent National Association of Corporate Directors’ Cyber-Risk Oversight document. Most organizations see it as a technical detail that doesn’t need to be addressed at such a high level. The result is that it simply gets left out of mindshare, effort and budgets.

Asking the right question

Perhaps that’s because writing secure code is extremely difficult. Yet the difficulty of the job doesn’t justify ignoring the topic. Nobody claims that software will ever be bulletproof, but the question we ought to ask after every incident is, “How easily could this vulnerability have been prevented in the code?”

If no one is asking that question, there is no incentive to innovate or to make the systemic changes needed to build an infrastructure of secure software. Collectively, there is room to do much, much better than what we see today.

While the exact coding flaw behind the Microsoft patch isn’t clear, a common weakness referenced throughout the Metasploit database is the “buffer overflow.” We have known about buffer overflows for decades, yet a recent study found that state-of-the-art code security scanners detected fewer than half of the known buffer overflows in the code they examined. Recent research also indicates that many organizations, including software vendors, rely on this kind of scanning as their primary, or even sole, method of securing software. In other words, relying on techniques that find fewer than half of these extremely dangerous flaws is the industry standard.
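That detection rate becomes easier to believe once you see how non-obvious a real overflow can be. The hypothetical C sketch below contains what looks like a length check, yet an attacker-supplied length can still overflow the buffer; the structure and field names are invented for illustration.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical network packet, for illustration only. */
typedef struct {
    uint32_t       declared_len;   /* length field read from the wire */
    const uint8_t *payload;        /* attacker-controlled bytes */
} packet_t;

/* Copies the payload into `out`, reserving 4 bytes for framing. */
int copy_payload(const packet_t *pkt, uint8_t *out, size_t out_size)
{
    /* Looks like a bounds check, but the 32-bit addition can wrap:
     * if declared_len is near UINT32_MAX, declared_len + 4 becomes a
     * tiny value, the check passes, and the memcpy below writes far
     * past the end of `out`. A scanner that only asks "is there a
     * length check before the copy?" sees nothing wrong here. */
    if (pkt->declared_len + 4 > out_size)
        return -1;

    memcpy(out, pkt->payload, pkt->declared_len);
    return 0;
}
```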

Change software development process

We can do much better by asking software vendors to adopt more holistic secure development processes. We can pressure vendors to move away from using unsafe programming languages to power our critical infrastructure. We can encourage research that builds and promotes more secure variants of, and alternatives to, those languages. Most importantly, we can make secure software the top-level issue it ought to be in cybersecurity frameworks, discussions and regulations.
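To return to the earlier question of how easily such a vulnerability could have been prevented: even staying in C, a hardened version of the hypothetical parser above amounts to rearranging the check so it never adds to an attacker-controlled value. Memory-safe languages go further and enforce the equivalent bounds check by default, rather than leaving it to programmer discipline.

```c
#include <stdint.h>
#include <string.h>

/* Same invented packet type as the earlier sketch. */
typedef struct {
    uint32_t       declared_len;
    const uint8_t *payload;
} packet_t;

/* Hardened version: the comparison subtracts from the known-good
 * buffer size instead of adding to attacker-controlled input,
 * so the arithmetic cannot wrap. */
int copy_payload_safe(const packet_t *pkt, uint8_t *out, size_t out_size)
{
    if (out_size < 4 || pkt->declared_len > out_size - 4)
        return -1;                 /* reject oversized requests */

    memcpy(out, pkt->payload, pkt->declared_len);
    return 0;
}
```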

Let’s be clear: We will never be totally safe from hacking or malware. However, if we want to dramatically reduce the frequency and impact of these attacks, the root cause must be addressed.

More stories related to software security:
It’s crucial to mesh security testing into early stages of DevOps projects
A case for making software more hack-resistant from the start
To get ahead of threat curve, boost security during software development