Developing a QA-centric Approach to SLAs

Rajini Padmanaban
In my previous post, I talked about what service-level agreements (SLAs) are, what it takes to implement them in a product development environment (especially an agile one), and the core points to keep in mind when SLAs are tied to rewards and penalties. I had promised to cover specific QA SLAs in my next blog, so here goes.

I look at quality assurance (QA)-centric SLAs from three standpoints: Product Quality Metrics, Cost of Quality Metrics and Test Effort/Productivity Metrics. On the face of it, some metrics may seem ambiguous as to which bucket they belong in; nevertheless, all three sets are equally important for running an objective QA organization that can take ownership of the quality of the product it is testing.

A few things to note in reading this blog:

These SLAs have been written specifically for a QA services engagement, where a product company leverages the services of a QA team; they are a good set of SLAs to implement with that QA services team.

Not all of these SLAs can be implemented as is. Some can be used largely unchanged in most situations, which I've called out as "by default," whereas a few others will need to be tailored for each project depending on the product, its lifecycle, product quality and performance in past releases, etc. In particular, the extent to which the SLAs are enforced (and the associated measurement values) will have to be analyzed and decided on a case-by-case basis.

Product Quality Metrics:

Defect Metrics:

a) No P1 or S1 bugs will be reported by anyone for the first 10-15 days after product release (by default metric).

b) No open performance or security bugs at the time of release (by default metric).

Requirements traceability:

a) Code coverage of x% will be achieved by the time test signs off (the % value here will be customized for each project); a simple programmatic check of these release thresholds is sketched after this list.

No hot fixes will be necessitated by a missed functional bug in the first month after product release (by default metric).

Test will provide a comparative analysis of the product under test against competing products, especially on performance areas such as page load time and response time, and will provide recommendations to the product development team (will be evaluated and customized for each project).
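To make the product quality SLAs above a little more concrete, here is a minimal release-gate sketch in Python. The field names (priority, severity, category) and the coverage threshold are illustrative assumptions, not from any particular bug tracker; real values would come from the project's tooling and the per-project SLA agreement.

def release_gate(open_bugs, code_coverage_pct, coverage_target_pct):
    """Return (ok, reasons) for a product quality SLA check at sign-off."""
    reasons = []

    # SLA: no open P1/S1 bugs at release.
    blocking = [b for b in open_bugs
                if b.get("priority") == "P1" or b.get("severity") == "S1"]
    if blocking:
        reasons.append(f"{len(blocking)} open P1/S1 bug(s)")

    # SLA: no open performance or security bugs at release.
    perf_sec = [b for b in open_bugs
                if b.get("category") in ("performance", "security")]
    if perf_sec:
        reasons.append(f"{len(perf_sec)} open performance/security bug(s)")

    # SLA: code coverage of x% at test sign-off (x is project-specific).
    if code_coverage_pct < coverage_target_pct:
        reasons.append(
            f"coverage {code_coverage_pct:.1f}% below target {coverage_target_pct}%")

    return (not reasons), reasons


# Example run with made-up data: one open P2/S3 functional bug, 82.5% coverage.
bugs = [{"priority": "P2", "severity": "S3", "category": "functional"}]
ok, reasons = release_gate(bugs, code_coverage_pct=82.5, coverage_target_pct=80)
print("Release gate passed" if ok else f"Release gate failed: {reasons}")

In practice, the same check could be wired into the release checklist so that sign-off is blocked automatically whenever one of the agreed thresholds is not met.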

Cost of Quality Metrics:

Defect validity percentage: at least 80 percent of reported bugs will be valid bugs (what constitutes a valid bug will be defined in discussion with the client of each project; this percentage will also be adjusted depending on the complexity of the project and finalized with the client). A simple calculation of this and the other ratios in this section is sketched after this list.

All deadlines agreed upon will be met or beaten (by default metric).

a) Adherence to the originally agreed-upon project timelines, budget and resources; a minimal deviation of ~5% may be built in based on the project ambiguities that are known upfront (by default metric).

Defect injection timelines: bugs will be reported within a week of when they were injected into the product (by default metric).

Test automation maintenance costs will be less than 10% of the automation development costs in subsequent releases (by default metric).
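As a rough illustration of how the cost of quality ratios above might be tracked, here is a small Python sketch. The input numbers are invented; only the 80 percent and 10 percent thresholds mirror the defaults stated in this section.

def defect_validity_pct(valid_bugs, total_reported):
    """Share of reported bugs that were accepted as valid."""
    return 100.0 * valid_bugs / total_reported if total_reported else 0.0

def maintenance_cost_pct(maintenance_cost, development_cost):
    """Automation maintenance cost as a share of the original development cost."""
    return 100.0 * maintenance_cost / development_cost if development_cost else 0.0

# Example figures (hypothetical): 172 of 200 reported bugs were valid,
# and maintaining the automation cost 8,000 against 100,000 to build it.
validity = defect_validity_pct(valid_bugs=172, total_reported=200)              # 86.0%
maintenance = maintenance_cost_pct(maintenance_cost=8000, development_cost=100000)  # 8.0%

print(f"Defect validity: {validity:.1f}% (SLA: >= 80%)")
print(f"Automation maintenance: {maintenance:.1f}% of development cost (SLA: < 10%)")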

Test Quality/Productivity/Management Metrics:

Mandatory use of Test Optimization Plans (by default metric)

Bugs assigned to test will be regressed within 24 business hours (by default metric)

X number of test cases will be designed per day per tester (discuss value of X with client at start of project)

X number of test cases will be executed per day per tester (discuss value of X with client at start of project)

X number of test cases will be automated per day per tester (discuss value of X with client at start of project)

Test plan and test cases will be of high quality, not needing more than 2 review iterations (by default metric)

Reporting / project communication protocols (discuss and agree with client at start of project and then stick to the agreed upon plan)

Team ramp up/ ramp down time (discuss and agree with client at start of project and then stick to the agreed upon plan)

Test automation approach and implementation will be robust enough that no more than 5 percent of defects reported through automation will turn out to be test issues (by default metric); a simple way to track this rate and the detection-ratio metric below is sketched after this list

Bugs found by the test team vs. bugs found by the rest of the product team (discuss the allowed ratio and deviation with the client at the start of the project)

Test infrastructure setup time will be no more than 1-3 days from the time a request comes in; the 1-3 day range is given to accommodate the complex setup needs that some projects may have (by default metric)

All project artifacts will be documented in a clear configuration management system and will be available on request (by default metric)

Client satisfaction surveys will be conducted every 6 months (by default metric)

Client satisfaction rating for the project will be at least 80 percent (by default metric)
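Here is one more hedged sketch, this time for two of the productivity metrics above: the automation false-positive rate and the test team vs. rest-of-team detection ratio. The figures are invented and the function names are not from any particular test management tool.

def automation_false_positive_pct(automation_reported, test_issues):
    """Share of automation-reported defects that turned out to be test issues."""
    return 100.0 * test_issues / automation_reported if automation_reported else 0.0

def detection_ratio(found_by_test, found_by_rest_of_team):
    """Fraction of all reported bugs that were found by the test team."""
    total = found_by_test + found_by_rest_of_team
    return found_by_test / total if total else 0.0

# Example figures (hypothetical): 4 of 120 automation-reported defects were
# test issues; the test team found 310 bugs and the rest of the team found 45.
fp_rate = automation_false_positive_pct(automation_reported=120, test_issues=4)
ratio = detection_ratio(found_by_test=310, found_by_rest_of_team=45)

print(f"Automation false positives: {fp_rate:.1f}% (SLA: <= 5%)")
print(f"Test team found {ratio:.0%} of all reported bugs (target agreed per project)")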

When a services company uses a combination of these metrics alongside its value-added services, and is willing to take on rewards and penalties for its performance against these SLAs, that goes a long way toward differentiating the company in the marketplace amongst the myriad QA service providers that exist.

Sep 14, 2011 7:09 PM Trevor Cutrer says:
How much would it cost for you to compose an SLA for an IT-based public-private partnership with/for the State of California? Boilerplate just won't cut it.
Sep 15, 2011 4:09 AM Shishank Gupta says:
"What can't be measured cannot be improved," goes the saying. Hence, defining a set of metrics/SLAs is definitely one of the most appropriate activities to be done early on in any engagement. I completely agree that the applicability of many of the above-mentioned SLAs may depend on the nature of the engagement. For example, the measure of requirements traceability as a function of code coverage may be relevant in a unit test scenario. However, in the scenario being described, where a product company is leveraging the services of a QA team, the most appropriate measure of requirements traceability may be requirements coverage, measured through tools like a Requirements Traceability Matrix; most test management tools today provide this as a standard feature. Regarding linking SLAs to IT outcomes like defects, on-time delivery of software, etc., what may be interesting is to explore options for tying SLAs to actual business outcomes: for example, the negative business impact of resident defects in production, or the positive impact of higher system availability or better user experiences achieved through successful load and usability testing. If one can define a risk-reward framework around this, it will help bring greater alignment between the actions performed by QA teams and the business outcomes delivered as an end result. This would definitely help the QA organization build a substantially solid business case for the value it delivers to business outcomes. For more information on Infosys' work in the testing space, visit
Sep 23, 2011 6:38 PM Rajini P says, in response to Trevor Cutrer:
Trevor - Thanks for your comment. I'd be happy to talk to you further to answer your question. Can you please send me your email address? I can be reached at
Sep 23, 2011 6:40 PM Rajini P says, in response to Shishank Gupta:
Agreed, Shishank. Thanks for taking the time to read my post and leave a comment.
Feb 11, 2016 1:36 AM Amit Nanda says:
Any ideas on how some of the smarter QA organizations are measuring SLAs for test effectiveness, QA cost reduction, etc., in an agile environment? I am looking for out-of-the-box ideas other than DRE, CoQ, test case productivity, etc.
