Consumer credit reporting agency Equifax recently announced a massive security breach that exposed the data of 143 million US consumers.
The stolen data included names, Social Security numbers, birthdates and home addresses.
In the midst of the fallout, a solitary report is claiming the Atlanta-based credit reporting giant tried to pass the buck and blame open source software — Apache Struts in particular — for the vulnerability that gave hackers a way in.
Though some may question the reliability of a single report, it stirs up an old, but apparently still relevant debate: how secure is open source software?
Throwing Apache Struts Under The Bus
Equifax’s company statement revealed it discovered the breach on July 29, and that hackers had “between mid-May through July” to exploit a vulnerability on its website and access certain files.
Amid reports of executives selling off Equifax stock before the breach announcement, the report by equity research firm Baird that posed the open source theory raised a few eyebrows, in part because its source, according to The New York Post, was Equifax itself.
“My understanding is the breach was perpetuated via the Apache Struts flaw,” Baird analyst Jeffrey Meuler told The Post.
Whether Meuler's story is true or not, it reignites a question almost as old as the world wide web itself: is open source software inherently insecure?
Before we dive into that question, Milwaukee-based Hold Security provided a reminder of another source of vulnerability that's older than the web: humans.
Hold Security founder Alex Holden and his 30-man team uncovered a security flaw in an Equifax online portal which gave Equifax employees in Argentina the ability to manage credit report disputes from consumers in that country. The username and password protecting the portal? “Admin/admin.”
The Question: Is Open Source Software Inherently Insecure?
CMSWire posed this question to experts in open source and proprietary software industries, as well as to cybersecurity firms — and asked them if the excuse (if real) had any merit.
The Answers
Michael Han, Liferay
There is no link between open source software and security issues. All software has flaws. Humans create software, and thus software is not infallible. Open source software means more people are able to see the source code. This allows security researchers and corporate security personnel to navigate the source code looking for flaws. It also allows users of open source software to leverage security tools like Veracode to run independent source code analysis.
Of course, open source software also means those with malicious intentions can review the source as well. However, statistics have shown that most security incidents are not due to “zero-day” vulnerabilities. Most incidents occur when malicious hackers exploit unpatched systems. This leads one to surmise that hackers are not really reading source code to find vulnerabilities, but rather looking for existing published vulnerabilities to exploit.
We would argue that regardless of open or closed source software, the key to security is an appropriate security management program that includes monitoring available security databases and updating appropriate IT assets.
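Han's advice about monitoring security databases and patching can be made concrete. Below is a minimal, hypothetical sketch: it checks an inventory of installed components against a hand-written advisory map. A real security management program would pull advisories from a live feed such as the NVD or OSV rather than hardcoding them, but the audit loop would look much the same.

```python
# Illustrative sketch only: flag installed components whose versions match a
# published advisory. The advisory map below is hand-written for this example;
# a real program would populate it from a feed such as the NVD or OSV.

KNOWN_VULNERABLE = {
    # package name -> versions with published vulnerabilities (example data)
    "struts2-core": {"2.3.31", "2.5.10"},
    "examplelib": {"1.0.0"},
}

def audit(installed):
    """Return (package, version) pairs that match a published advisory.

    `installed` maps a package name to its installed version string.
    """
    findings = []
    for package, version in installed.items():
        if version in KNOWN_VULNERABLE.get(package, set()):
            findings.append((package, version))
    return findings

if __name__ == "__main__":
    inventory = {"struts2-core": "2.3.31", "otherlib": "4.2.1"}
    for package, version in audit(inventory):
        print(f"PATCH NEEDED: {package} {version}")
```

The point of the sketch is Han's: the hard part is not the lookup, it is keeping the inventory and the advisory feed current as part of an ongoing security management program.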
Ben Bromhead, Instaclustr
Ben Bromhead is CTO at Palo Alto, Calif.-based Instaclustr, a service provider that manages open source technologies. Previously, Bromhead held the position of software development team leader at BAE Systems Australia. Tweet to @BenBromhead
All software has bugs, and all software has security vulnerabilities. It's how the authors and those who run the software respond to new vulnerabilities that matters most in whether security is maintained.
Generally, mature open source projects have fewer bugs (in mature, stable versions) and have more "eyes" looking at the source code and running it than proprietary software, so in our experience things get resolved much quicker. Once the software has been fixed, it's up to the company that runs it to ensure it is up to date and patched correctly.
[As for] Equifax's excuse, it’s pretty lame. You can see the Apache Struts PMC statement on the issue here, but essentially it was likely a zero-day exploit at the time the hack occurred.
From a broader security perspective, having a public-facing web server with unrestricted access to a database holding personally identifiable information is pretty dumb. Any architecture should be structured such that if a public-facing server is compromised, the data exposure is minimal.
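Bromhead's last point is the classic defense-in-depth argument. Here is a minimal sketch of the idea, with hypothetical names: the public-facing web tier never holds database credentials and can only call a narrow internal interface, so an attacker who compromises the web server can query one field of one record at a time (rate-limited and auditable) instead of dumping the whole table.

```python
# Illustrative sketch only: the sensitive store sits behind an internal
# service with a deliberately narrow interface. All names are hypothetical.

class RecordsService:
    """Internal service that owns the sensitive data store."""

    def __init__(self, records, max_lookups=100):
        self._records = records          # sensitive records, never exposed wholesale
        self._lookups = 0
        self._max_lookups = max_lookups  # crude rate limit on the web tier

    def get_dispute_status(self, case_id):
        """Narrow query: one field of one record, never the record itself."""
        if self._lookups >= self._max_lookups:
            raise PermissionError("rate limit exceeded; alert the security team")
        self._lookups += 1
        record = self._records.get(case_id)
        return None if record is None else record["status"]

    # Deliberately, no method exists to enumerate or export all records.
```

The design choice is the point: even a fully compromised web tier holding a `RecordsService` handle has no path to bulk data, which is the property Bromhead says Equifax's architecture apparently lacked.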
Frank Yue, Radware
Open source software is built primarily for functionality and security is often an afterthought. This is true even for security protocols. There are many examples where open source software has been vulnerable due to obvious and not-so-obvious programming oversights.
Since there is a community of developers from the general population working on open source projects, it is possible for someone with malicious intent to insert a known backdoor or vulnerability without anyone else knowing of its existence. There is often little to no vetting done for members who participate in an open source project.
Additionally, the development and support of open source code is not consistent. Some open source code is not actively being worked on or updated. Some projects have large and experienced development communities while others have a few programmers at best. Some open source projects are robust and well tested and developed while others are stagnant or little more than pet projects. This variability makes it hard to differentiate secure and valid packages from others that are fraught with bugs and vulnerabilities.
Ben Darnell, Cockroach Labs
Ben Darnell is co-founder and CTO of Cockroach Labs, the company behind the open source database software CockroachDB. Prior to his work at Cockroach Labs, Darnell held roles at Google Reader, FriendFeed, Facebook, Viewfinder and Square. Tweet to @CockroachDB
Open source doesn't have a perfect security record, but it compares favorably with the rest of the industry. The claim that 'many eyes make all bugs shallow' was overstated (the Struts bug discussed here went unnoticed for years), but transparency in both the code and the development process tends to produce better outcomes. It allows users to make informed judgments about the trustworthiness of a project instead of relying on a vendor's assertions.
If a single vulnerability (in Struts or any other package) was responsible for a breach of this magnitude, then Equifax was being dangerously careless with extremely sensitive data. Equifax is ultimately responsible for the safety of this data. This includes both being careful about the software they use and employing "defense in depth" so that a single vulnerability can't do too much damage.
Paul Kraus, Eastwind Networks
This response from Equifax is quite concerning, and shows a significant lack of software diligence, let alone security diligence or process. To claim that open source is less secure than closed source, or that closed source is more secure than open source, is unfounded and, honestly, lazy.
The implication that Equifax checks its software vendors for vulnerabilities but not the software its teams pull into their own products shows a clear deficiency in Equifax’s software lifecycle process.
Software has bugs — period. It is the responsibility of the user to understand the risks introduced by third party libraries, regardless of where they originate. There have been numerous patches for Struts published, and I would hazard a guess that Equifax’s patch management process either was ignorant that the patch applied to their usage, skipped the patch, had the patch in “testing” or had the patch in a queue for future releases. Given any of these scenarios, claiming it was not their responsibility, especially as stewards of such sensitive data, [will possibly lead to] a much deeper investigation than Equifax’s self-disclosure.
Michael Baker, Mosaic451
Michael Baker is the managing director at Phoenix, Ariz.-based Mosaic451. Prior to Mosaic451, Baker worked for companies such as JP Morgan and Compucom after starting his career in the software industry in 1994. Tweet to @Mosaic451
There is no scenario where proprietary software is [inherently] more safe than open source. Security through obscurity does not work. It has never worked.
There is a clear and obvious structural conflict of interest for a privately held company to acknowledge that its core product (its software) is terrible. Private companies don't acknowledge these things unless they're forced to do so. I give you Microsoft, Oracle, Cisco, the Ford Pinto, cigarettes, etc., ad nauseam. The benefit of exposing code and allowing interested groups and individuals, ranging from states to hacker kids, a look at your core infrastructure is that bugs are exposed quickly and publicly, and can be resolved quickly. There is a reason that public key cryptography works: the math is public and it's been open to attack for decades.
If Equifax said, 'We've got crypto covered. Just trust us,' who in their right mind would trust them? Answer: nobody.
Equifax's excuse is obscene. [Considering] the facts of the Equifax case as we know them [like C-levels selling stock prior to the breach announcement], can we say these are the actions of a group that should have proprietary control of its own software?
Sam Saltis, coreDNA
Open source isn’t necessarily less safe, [but] it's hard to take sides when there are potential process issues on both sides. Did the team at Apache Struts have a proactive program to remove vulnerabilities in their tool and communicate them to their community? Did Equifax have a program to proactively apply updates to their platform?
This is where the breakdown occurs. With a commercial platform, there is a single obligation on the company to proactively maintain all the platforms that make up the solutions provided to customers. With open source, this obligation is shared: it is held by both the authors of the platform and its recipients.