Metrics have long been used to track projects. In a recent discussion with a group of QA experts, I was asked: 'What is different about metrics now; aren't we tracking the same set of metrics as in the past?' Good question; it forms the basis of my discussion below.
Yes, QA/test management has traditionally tracked metrics such as defect ratios, defect validity, test productivity, pass percentage, code coverage, etc. A seasoned test manager used these metrics, coupled with experience, to gauge a product's quality and readiness to ship. However, the metrics were largely boilerplate and inward-looking, meaning the standard set was carried forward release after release, and they focused more on executional aspects than on the product's business requirements.
In recent years, this has been changing: metrics are customized for each release, and are dynamic and actionable based on:
The test team now collaborates closely with the rest of the product development team, incorporating the findings from these metrics to further enhance product quality. For example:
So, the entire team is beginning to understand the value of these metrics rather than seeing them as overhead. Whether you are the QA team of a product company or a QA services vendor, take a closer look at the metrics you are using today and whether they meet the criteria outlined above.

Once you have metrics designed to meet your product's needs, the next question is: 'Are you able to define service-level agreements (SLAs) based on these metrics, and are you able to take on rewards or penalties for meeting or not meeting those SLAs?' While most companies have implemented metrics, this is a clear differentiator of whether the metrics have been implemented in the true sense. More for fear of monetary loss from penalties than for the promise of monetary gain from rewards, such monetary ties to SLAs have not been implemented at most places. However, this has lately been picking up steam, as it is the ultimate proof of the true value one can get from such metrics. The confidence a team has in its quality efforts, and in the quality of the product it is signing off on, is well represented when it is ready to sign up for such monetized SLAs.

There are some external dependencies, though, that one must keep in mind and build protocols around in order to set up a successful monetized SLA model. These include:
Once you have the core system in place, one that takes into account the points mentioned above, you can gradually move to implementing a monetized SLA model. Start with a few core areas you are comfortable with; once you see how the model works in those specific areas, you can expand its scope to more ambiguous areas. This will help you bring objectivity into such gray areas as well. In my next post I will discuss specific test effort and test quality metrics that drive improved product and project quality.
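The monetized SLA model described above can be sketched in a few lines. This is a hypothetical illustration, not a recommended contract structure: the metric names, targets, and amounts are all assumptions, and real SLAs would also encode the external dependencies and protocols mentioned earlier:

```python
# Hypothetical sketch of a monetized SLA settlement: each SLA ties a
# metric to a target; meeting it earns a reward, missing it incurs a
# penalty. All names, thresholds, and amounts are illustrative.

from dataclasses import dataclass

@dataclass
class SLA:
    metric: str
    target: float          # minimum acceptable value for the metric
    reward: float          # payout if the target is met
    penalty: float         # charge if the target is missed

def settle(slas: list[SLA], results: dict[str, float]) -> float:
    """Net amount earned (positive) or owed (negative) by the team."""
    net = 0.0
    for sla in slas:
        if results.get(sla.metric, 0.0) >= sla.target:
            net += sla.reward
        else:
            net -= sla.penalty
    return net

slas = [
    SLA("pass_rate", target=95.0, reward=1000.0, penalty=2000.0),
    SLA("defect_validity", target=70.0, reward=500.0, penalty=500.0),
]
print(settle(slas, {"pass_rate": 96.2, "defect_validity": 68.0}))  # 500.0
```

Starting with a few core, well-understood metrics, as suggested above, simply means beginning with a short `slas` list and extending it as confidence in the model grows.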