One of the biggest debates rocking the application development world is how best to handle application testing as agile software development methodologies increasingly dominate.
The pace of application development these days is simply outstripping testers' ability to keep up, which has led many to argue that testers need to be more tightly integrated into the overall development process.
But the folks at Coverity say IT organizations should take this integration to its logical conclusion: make developers themselves responsible for the application testing process.
Andy Chou, chief scientist at Coverity, says application testing tools have reached a level of automation that allows developers to test their own applications for the most common flaws, including security vulnerabilities. Because developers are closest to the code, Chou says, the most cost-effective way to maintain software quality is to give them the tools they need to quickly identify potential flaws.
Many would argue that this amounts to asking the fox to guard the henhouse. But Chou points out that testers routinely check for the same basic issues, so shipping code back and forth between developers and testers wastes time when the developer will have to fix the problems anyway. Better, argues Chou, for the developer to run a tool that automates the testing process and fix the vast majority of issues long before the application ever reaches quality assurance.
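To make the idea concrete, here is a deliberately tiny sketch (not Coverity's technology, which performs far deeper static analysis) of the kind of automated check a developer could run locally: it scans source code for a common injection-prone pattern, SQL queries built with `%` string formatting, and reports the offending line numbers before the code ever reaches QA. The function and pattern names are illustrative inventions.

```python
import re

# Hypothetical, minimal pattern: a call to execute() whose query string is
# built with %-style formatting rather than parameterized placeholders.
SQL_FORMAT_PATTERN = re.compile(r'execute\(\s*["\'].*%s.*["\']\s*%')

def find_risky_sql(source: str) -> list:
    """Return line numbers where a SQL query appears to be assembled via
    % string formatting -- a classic SQL-injection-prone construct."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if SQL_FORMAT_PATTERN.search(line):
            hits.append(lineno)
    return hits

sample = '''
cursor.execute("SELECT * FROM users WHERE name = '%s'" % user_input)
cursor.execute("SELECT * FROM users WHERE name = ?", (user_input,))
'''

print(find_risky_sql(sample))  # flags the %-formatted query, not the parameterized one
```

A real analyzer parses the code rather than grepping it, but the workflow is the point: wired into a pre-commit hook or build step, even a check this crude gives the developer the feedback loop Chou describes, at the moment the flaw is cheapest to fix.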
As the song says, "The Times They Are a-Changin'" when it comes to application development, and those changes will require IT organizations to rethink how they manage application development from end to end.