There’s no getting around the fact that software is a crucial part of business today – not just for the usual suspects, such as high-tech companies, but also for more traditional “brick and mortar” businesses, such as automotive and manufacturing firms, which increasingly rely on software for their day-to-day operations. More lines of code are being written and put into production than at any previous point, and the trend is only expected to continue. So it’s no surprise that software quality matters more to businesses than it ever has.
Coverity, a development testing company, recently announced the results of its 2012 Coverity Scan Open Source Report. The report, issued annually, analyzed more than 450 million lines of software code. It measured the quality of a variety of software projects (both open source and proprietary) and identified critical bugs and defects that would otherwise adversely affect software if put into production. Read on for more information about the key findings from this year’s report.
About the project: Founded in 2006 in partnership with the U.S. Department of Homeland Security, the free Coverity Scan™ service is an important resource for the open source development community to improve the quality of its software. In 2012, the service scanned more than 115 open source software projects, and more than 20,000 defects identified by the scan service were fixed by open source developers.
Code quality for open source software continues to mirror code quality for proprietary software: For the second consecutive year, both open source and proprietary code scored better than the generally accepted industry-standard threshold for good-quality software: a defect density (defects per 1,000 lines of code, a commonly used measurement of software quality) of 1.0. Open source software averaged a defect density of .69, while proprietary code (a random sampling of code developed by enterprise users) averaged .68.
Size matters: Open source projects with between 500,000 and 1,000,000 lines of code had an average defect density of .44, while proprietary projects in this range averaged .98. On the other hand, proprietary projects with more than one million lines of code had a defect density of .66, while defect density for open source projects of this size increased to .75. This discrepancy can be attributed to the differing dynamics of open source and proprietary development teams, and to the project size at which larger organizations typically put formalized development testing processes in place.
Linux code quality consistently scores high: Since the original Coverity Scan report in 2008, Linux has achieved a defect density well below 1.0. For the past two years (2011 and 2012), Linux defect density has scored below .7 (6.8 million lines of Linux code scanned in 2011 scored .62, and 7.3 million lines scanned in 2012 scored .66). Most recently, Coverity scanned 7.6 million lines of code in Linux 3.8 and found a defect density of .59.
High-risk software defects remain a problem: 36 percent of the defects identified and fixed by the scan service in 2012 were “high-risk” defects, or defects that could pose a considerable threat to software quality and security if left undetected. The most common of these defects were resource leaks, memory corruption and illegal memory access – all of which are difficult to detect without automated code analysis.
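The defect classes named above are hard to spot in review because the bug often lives on an error path that is rarely exercised. As an illustration (not code from the report), here is a hypothetical C function with the kind of resource leak a static analyzer flags, alongside a corrected version:

```c
#include <stdio.h>

/* BUGGY: if fseek() fails, the function returns without closing fp,
 * leaking the file handle - a classic resource leak on an error path. */
long file_size_leaky(const char *path)
{
    FILE *fp = fopen(path, "rb");
    if (fp == NULL)
        return -1;
    if (fseek(fp, 0L, SEEK_END) != 0)
        return -1;              /* leak: fp never closed on this path */
    long size = ftell(fp);
    fclose(fp);
    return size;
}

/* FIXED: the handle is released on every path through the function. */
long file_size_fixed(const char *path)
{
    FILE *fp = fopen(path, "rb");
    if (fp == NULL)
        return -1;
    long size = -1;
    if (fseek(fp, 0L, SEEK_END) == 0)
        size = ftell(fp);
    fclose(fp);                 /* closed whether or not fseek failed */
    return size;
}
```

Because the leak only occurs when `fseek` fails, ordinary testing rarely triggers it; automated analysis finds it by checking that every path from `fopen` reaches a matching `fclose`.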
Open source software – the highlights: Of the open source projects scanned for the 2012 report, several had impressive defect density scores. These include AMANDA (defect density of 0.00), ffmpeg (0.10), Mesa (0.48), NTP (0.14) and XBMC (0.16).