
Equifax breach shows why companies need to act against known vulnerabilities


One of the key “claims to shame” of the Equifax breach was that the vulnerability the attackers exploited was long known, having been published four months before Equifax discovered the breach. Many argued this lag showed severe incompetence in the organization’s security; some even called it negligence. What you may not know, however, is that the vulnerability in question was actually introduced nearly five years before the breach, in October 2012. It was simply discovered much later, and disclosed in March 2017.

Why wasn’t this longer lag discussed more? If Equifax is deemed incompetent for overlooking a vulnerability for four months, wouldn’t it be deemed utterly irresponsible for staying oblivious to it for years? Should we really care about when the vulnerability was disclosed?

The answer to that is a resounding yes. A known vulnerability is dramatically more important and urgent to fix than the undisclosed ones still lurking in the code.


Software, like humans, is imperfect, and is bound to have bugs (deviations from its intended behavior), and some of those bugs are security bugs, also known as vulnerabilities. Organizations consume a massive amount of software, both open source and commercial, and it’s not realistic to expect them to audit and uncover all of those bugs, especially as the software they consume keeps evolving. And so, companies operate software while (hopefully) fully aware that vulnerabilities are lurking inside. Security controls such as firewalls and antivirus tools try to stop attackers from breaking in through these unknown security holes.

Fairly often someone, be it the author, a researcher or a consumer of the software, discovers such a vulnerability and makes it publicly known (hopefully in a responsible manner), creating what is often called a new known vulnerability. In practice, these are new disclosures of old vulnerabilities, hiding in software that is already deployed in production, waiting to be exploited.

A vulnerability disclosure is a pivotal moment in the vulnerability’s life. It makes defenders aware of the problem, and offers them a chance to fix it. However, it also informs attackers it exists, giving them a chance to find and exploit this flaw. In other words, a vulnerability disclosure triggers a race — will defenders identify and patch the vulnerable systems before attackers can use the new info to break in?

For attackers, vulnerability disclosures are an amazing asset. Uncovering a vulnerability in a component, even an open source component where the code is publicly available, is hard and time consuming. Such research is also hard to predict: you may find a vulnerability within an hour, or spend months without any results. A vulnerability disclosure spares attackers all that effort, pointing them straight to the flaw. Just write an exploit, and let the breaches begin!

As a result, attackers keep getting better at exploiting known vulnerabilities. The severe Shellshock vulnerability had existed on most Linux systems since 1989 (!!!), but wasn’t exploited in any notable fashion until it was disclosed in 2014, at which point botnets and malware were relying on it within days. The remote command execution vulnerability that compromised Equifax, which, as mentioned, had existed since late 2012, first saw exploits in the wild the same week it was disclosed. Attackers are waiting for these disclosures, and are set up to adapt their automated attack tools to leverage them quickly and efficiently.

Defenders, on the other hand, are not as well prepared. A recent survey shows only 42 percent of organizations track the libraries they use for known vulnerabilities, and data show that libraries known to be vulnerable are still downloaded millions of times a month. The combination of fast-acting attackers and slow-moving defenders can be fatal, and it is what led to Equifax’s downfall.

The problem is especially severe when dealing with open source libraries. Open source components are developed transparently, and so every vulnerability discovered and fixed in them is implicitly exposed for attackers to find. In addition, organizations consuming open source do so “at their own risk”, as there is no vendor to notify them about flaws, or a signed contract that lets them shed the responsibility. The responsibility for keeping these components secure sits entirely with the consumer.
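
For readers who build software, one rough illustration of what taking that responsibility can look like in practice is to check every pinned dependency against a public advisory database. The sketch below is only that, a sketch: it assumes Python and the free OSV.dev query API, and the dependency list is made up for illustration; a real project would more likely wire a dedicated scanning tool into its build pipeline.

import json
import urllib.request

# Hypothetical pinned dependencies: (package name, ecosystem, version).
# The Struts entry mirrors the kind of component at the heart of the Equifax breach.
DEPENDENCIES = [
    ("org.apache.struts:struts2-core", "Maven", "2.3.31"),
    ("jinja2", "PyPI", "2.4.1"),
]

OSV_QUERY_URL = "https://api.osv.dev/v1/query"


def known_vulnerabilities(name, ecosystem, version):
    """Ask the OSV.dev database which published advisories affect this exact version."""
    payload = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode("utf-8")
    request = urllib.request.Request(
        OSV_QUERY_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    # Requires network access; a scheduled job or CI step is the natural home for this check.
    with urllib.request.urlopen(request) as response:
        return json.load(response).get("vulns", [])


if __name__ == "__main__":
    for name, ecosystem, version in DEPENDENCIES:
        advisories = known_vulnerabilities(name, ecosystem, version)
        if advisories:
            ids = ", ".join(adv.get("id", "unknown") for adv in advisories)
            print(f"{name} {version}: known vulnerabilities -> {ids}")
        else:
            print(f"{name} {version}: no published advisories found")

The tooling matters less than the habit: every advisory that matches something you run should become a patching task the same day, because attackers are performing the very same lookup.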

It’s important to remember that despite this contest, vulnerability disclosures are your friend, not your foe. Hiding security flaws, an approach known as “security through obscurity,” only serves to perpetuate the risk. Attackers will still learn about some of those vulnerabilities, either through research or by purchasing them on the dark web, but defenders will remain uninformed for far longer. In the world of open source, the community uncovers 75 percent of these security flaws; without those reports, open source consumers would be four times more vulnerable! Disclosures help companies learn about the gaps in their defenses and address them; they just need to do it more quickly.

If we don’t want to see a recurring sequence of Equifax-like breaches, we need defenders to wake up. Companies must acknowledge this risk, actively monitor for known vulnerabilities in the software they use, and be set up to fix them quickly. An attacker only needs to win the odd race, while a defender needs to win each and every one to survive, so make sure you always have your engines running.

Guy Podjarny is the CEO of Snyk, a company dedicated to finding and fixing vulnerabilities in open source code. 

