Heartbleed and the OAuth and OpenID vulnerabilities have raised a lot of questions about open source security. And Zack Whittaker writes in this ZDNet article that these vulnerabilities aren’t out of the ordinary:
Many millions of Java-based and other open source applications are vulnerable to flaws that have been around for, in some cases, years. And even up to today, they are being downloaded.
But in an InformationWeek commentary, Michele Chubirka writes that open source isn’t any worse than commercial or closed source software:
At least in the security realm, problems don’t discriminate between the commercial and open-source realms — neither are exempt from embarrassing vulnerabilities. One only has to make a cursory examination of the latest US-CERT notifications to debunk that myth. There are plenty of commercial products that make appearances alongside open-source, even with their bug bounties and impressive security budgets. Profit-driven or not, humans write software and they’re prone to error.
It seems like there are a lot of mixed signals about open source security, and I had some questions I wanted answered. So I talked to Ronnie Flathers, associate security consultant at Neohapsis. One of the questions I asked was about the perception that open source is more secure than closed source. Why is that? The reason he gave was pretty clear-cut: in open source, anyone who questions the code can go in and check it:
Because of this, open source tools are oftentimes subjected to more security scrutiny than closed source ones. A company that develops a closed source application is responsible for making sure it is secure. They, and only they, have the source code and the ability to make changes to the application. Developers are generally more concerned with making sure an application works and is stable, and security is often an afterthought. When a security vulnerability is discovered in a closed source tool, a disclosure must be made (either publicly or privately) and the organization behind the application is responsible for creating the fix.
Vulnerabilities, Flathers told me, are discovered in open source software in a few different ways. The first, and probably most common, is by members of the community volunteering their time to assess the software. Second, researchers also spend time assessing open source projects. A lot of vulnerabilities (like the recent Heartbleed bug) have been found in the course of security research; these professionals are usually conducting other research and are unaffiliated with the developers of the project. Finally, large organizations will often sponsor assessments of open source projects. For example, before implementing a large open source project, an organization’s internal security team will audit the application to make sure it complies with the company’s standards. A large number of vulnerabilities are discovered this way, and given the open nature of the community, it benefits everyone for the company to release its findings and help in the creation of a fix.
Something I’ve always wondered about open source (and I’m sure I’m not the only one who has questioned it): if the code is open to anyone’s input, how do we know there aren’t bad guys purposely adding malicious code? Flathers said that while it is technically possible, evil intent is hard to hide in open source:
If software is closed-source, you are really just taking the developers’ word that the compiled binary contains nothing evil. It’s easier to disguise and obfuscate malicious code once it’s compiled, and it takes trained security experts to find it. On the other hand, when software is released open-source, the community at large has the chance to view all the pieces that go into making the tool, and can spot it if someone slips in a suspicious function or routine. Large open source projects have “official” releases that they host on their site or on their official GitHub (or equivalent) pages. These releases are amalgamations of many individuals’ contributions, and they have to be approved by the developer. If a bad guy were to add malicious code to a large open source project, it would be spotted by other users or flagged by the main developer and never make it into an official release. Of course, this relies on inherent trust in the developers and maintainers of open source projects. But since the open source community shares information so easily, if any developer or maintainer were found to be conducting malicious activities, their credibility would be ruined.
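That trust in official releases is also something end users can partly verify for themselves. As a minimal sketch (the file name and checksum here are hypothetical, standing in for whatever a real project publishes), here is how you might confirm in Python that a downloaded release matches the SHA-256 checksum listed on the project’s official page:

```python
import hashlib

# Hypothetical value: the checksum a project publishes next to its release download.
EXPECTED_SHA256 = "checksum-copied-from-the-project's-official-page"

def sha256_of(path, chunk_size=8192):
    """Hash the file in chunks so large release tarballs aren't read into memory at once."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if sha256_of("project-1.0.tar.gz") == EXPECTED_SHA256:
    print("Checksum matches the official release.")
else:
    print("Checksum mismatch -- do not install this download.")
```

A checksum only proves the download matches what the project published, not that the published code is benign; that second question is what the community review Flathers describes is meant to answer.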
Flathers noted that the Heartbleed bug exposed both the good and the bad of open source: it showed what can happen when an open source project grows too large with too little oversight. But he added:
There is good that came from it. Since the disclosure of Heartbleed, people in the security community have stepped up to do a more thorough review of OpenSSL and other large, vital open source projects. In addition, large companies like Facebook, Microsoft and Google have pledged support (in the form of dollars and man-hours) to the Linux Foundation to help out underfunded open source projects.
Heartbleed was devastating to many, but it also showed how the open source community can work together to make sure a bug gets fixed quickly. In the end, it may make open source even more secure.
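For readers curious about the bug itself: Heartbleed came down to OpenSSL’s heartbeat code trusting a payload length claimed by the client and echoing back that many bytes of memory. The actual flaw was in C; the following is only a rough Python simulation of the pattern, with made-up “memory” contents:

```python
# A rough simulation of the Heartbleed pattern (the real bug lived in OpenSSL's C code).
# PROCESS_MEMORY stands in for the payload plus whatever happened to sit next to it in memory.
PROCESS_MEMORY = b"hello" + b" ...private keys, session cookies, passwords..."

def heartbeat_vulnerable(payload: bytes, claimed_length: int) -> bytes:
    # BUG: trusts the client's claimed length instead of len(payload),
    # so a short payload with a large claimed length leaks adjacent memory.
    return PROCESS_MEMORY[:claimed_length]

def heartbeat_fixed(payload: bytes, claimed_length: int) -> bytes:
    # FIX (the shape of the real patch): discard requests whose claimed
    # length exceeds what the client actually sent.
    if claimed_length > len(payload):
        return b""
    return payload[:claimed_length]

print(heartbeat_vulnerable(b"hello", 50))  # echoes the payload plus the "secrets" beyond it
print(heartbeat_fixed(b"hello", 50))       # returns nothing
```

The real fix in OpenSSL added essentially this bounds check, silently discarding heartbeat requests whose stated length didn’t match what was actually received.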