Are You Ready for Service-Level Metrics in the True Spirit?

Rajini Padmanaban
Service-level agreements (SLAs) and associated metrics can be applied to projects at various levels and for a diverse set of disciplines. In this write-up, I am focusing on what it takes to implement such SLAs in the true spirit for quality assurance (QA) efforts.

Metrics have long been used to track projects. In a recent discussion I had with a group of QA experts, I was asked, 'What is different about metrics now? Aren't we tracking the same set of metrics as in the past?' It is a good question, and it forms the basis of my discussion below.

Yes, QA/test management has traditionally tracked metrics such as defect ratios, validity, test productivity, pass%, code coverage, and so on. A seasoned test manager used these metrics, coupled with experience, to gauge a product's quality and readiness to ship. However, the metrics were largely boilerplate and inward-looking: the same standard set was carried forward release after release, and it focused more on executional aspects than on the product's business requirements.
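To make this concrete, here is a minimal sketch in Python, using purely hypothetical figures, of how a few of these traditional, execution-focused metrics might be computed:

    # Hypothetical cycle figures for illustration only
    total_tests = 1200        # test cases executed in the cycle
    passed = 1110             # test cases that passed
    defects_logged = 180      # defects raised by the test team
    defects_valid = 153       # defects accepted as genuine by development
    person_days = 60          # testing effort spent in the cycle

    pass_rate = passed / total_tests * 100                  # pass%
    defect_validity = defects_valid / defects_logged * 100  # validity of logged defects
    test_productivity = total_tests / person_days           # tests executed per person-day

    print(f"Pass rate:       {pass_rate:.1f}%")
    print(f"Defect validity: {defect_validity:.1f}%")
    print(f"Productivity:    {test_productivity:.1f} tests/person-day")

Useful numbers to have, but on their own they say nothing about whether the product meets its business requirements.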

In recent years this has been changing: metrics are now customized for each release, and they are dynamic and actionable based on:

  • Requirements from the business teams (largely driven by end user requirements)
  • Competition in the market
  • Performance aspects and not just functional aspects

The test team now collaborates closely with the rest of the product development team to act on the findings from these metrics and further enhance product quality. For example:

  • If a competing product's page response time is better, causal analysis is done to find and fix the root cause (a simple sketch follows this list).
  • Code coverage results are analyzed further, and the test team works with the developers to remove dead code from the code base.
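As a rough illustration of the first example, the sketch below (page names and timings are hypothetical) compares measured page response times against a competitor benchmark and flags the pages that warrant causal analysis:

    # Hypothetical benchmark and measurements, in milliseconds
    competitor_benchmark_ms = {"home": 850, "search": 1200, "checkout": 1500}
    our_measurements_ms = {"home": 910, "search": 1150, "checkout": 1820}

    for page, ours in our_measurements_ms.items():
        theirs = competitor_benchmark_ms[page]
        if ours > theirs:
            print(f"{page}: {ours} ms vs. competitor {theirs} ms "
                  f"(+{ours - theirs} ms) -> schedule causal analysis")
        else:
            print(f"{page}: {ours} ms vs. competitor {theirs} ms -> on par or better")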

So the entire team is beginning to see the value of these metrics rather than treating them as overhead. Whether you are the QA team of a product company or a QA services vendor, take a closer look at the metrics you use today and whether they meet the criteria outlined above. Once they are designed to meet your product's needs, the next question is: 'Can you define service-level agreements (SLAs) based on these metrics, and are you willing to accept rewards or penalties for meeting or missing them?' Most companies have implemented metrics; this is the clear differentiator of whether the metrics have been implemented in the true sense.

Because the fear of monetary loss from penalties has outweighed the appeal of monetary gain from rewards, such monetary ties to SLAs have not been implemented in most places. Lately, however, the practice has been picking up steam, because it is the ultimate proof of the value these metrics can deliver. A team's confidence in its quality efforts, and in the quality of the product it is signing off on, is best demonstrated when it is ready to sign up for monetized SLAs. There are some external dependencies, though, that one must keep in mind and build protocols around in order to set up a successful monetized SLA model. These include:

  • Timely input on customer requirements, both functional and performance, from the business/marketing/product management teams
  • Timely and tight communication protocols with the overall product team, especially the development teams
  • Adequate representation from the product development team in reviewing and giving feedback on core test artifacts
  • Representation from the test team in important product and project decisions over the course of the life cycle
  • A pre-determined testing budget that empowers the test team to take on the required levels of testing, with the required people, technology, and tools expertise

Once the core system is in place and accounts for the points above, you can gradually move to a monetized SLA model. Start with a few core areas you are comfortable with; once you see how the model works there, expand the scope to more ambiguous areas, bringing objectivity to those gray areas as well. In my next post I will discuss specific test-effort and test-quality metrics that drive improved product and project quality.
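As a starting point for those core areas, here is a minimal sketch, with hypothetical metrics, targets, and amounts, of how agreed SLA thresholds could be checked against actuals and tied to rewards or penalties:

    # Each SLA: (metric, target, higher_is_better, reward if met, penalty if missed)
    slas = [
        ("defect_validity_pct", 85.0, True,  2000, -3000),
        ("pass_rate_pct",       95.0, True,  1500, -2000),
        ("p95_response_ms",     1200, False, 1000, -1500),
    ]

    # Hypothetical actuals reported at the end of the release
    actuals = {"defect_validity_pct": 88.2, "pass_rate_pct": 93.4, "p95_response_ms": 1100}

    balance = 0
    for name, target, higher_is_better, reward, penalty in slas:
        actual = actuals[name]
        met = actual >= target if higher_is_better else actual <= target
        balance += reward if met else penalty
        print(f"{name}: actual {actual} vs. target {target} -> {'met' if met else 'missed'}")

    print(f"Net reward/penalty for this release: {balance}")

The point is not the arithmetic but the commitment: the team agrees upfront which metrics carry a monetary consequence and what the thresholds are.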
