I look at quality assurance (QA)-centric SLAs from three standpoints: Product Quality Metrics, Cost of Quality Metrics and Test Effort/Productivity Metrics. On the face of it, some metrics may seem ambiguous as to which bucket they belong in; nevertheless, all three sets are equally important to running an objective QA organization that can take ownership of the quality of the product it tests.
A few things to note when reading this blog:
These SLAs have been written specifically for a QA services engagement, where a product company leverages an external QA team's services; they are a good set of SLAs to implement with that team.
Not all of these SLAs can be implemented as is. Some can be used largely unchanged in most situations, and I've called those out as "by default" metrics, whereas a few others will need to be tailored to each project depending on the product, its lifecycle, its quality and performance in past releases, etc. In particular, the extent to which the SLAs are implemented (and the associated measurement values) will have to be analyzed and decided on a case-by-case basis.
Product Quality Metrics:
No P1 or S1 bugs will be reported by anyone for the first 10-15 days after product release (by default metric).
No open performance or security bugs at the time of release (by default metric).
Code coverage of x% will be achieved when test signs off (the percentage value will be customized for each project).
No hot fixes will be necessitated by a missed functional bug in the first month after product release (by default metric).
Test will provide a comparative analysis of the product under test against competing products, especially in performance areas such as page load time and response time, and will provide recommendations to the product development team (will be evaluated and customized for each project).
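To make the code-coverage sign-off metric concrete, here is a minimal sketch of how such a gate might be evaluated. The function name and the per-module data are hypothetical, and the 80% target is only an example; the actual target x% would come from the project agreement:

```python
# Hypothetical sign-off check for the code-coverage SLA: given per-module
# line counts (e.g. parsed from a coverage tool's report), verify that the
# overall coverage percentage meets the project-specific target.

def coverage_met(covered: dict, total: dict, target_pct: float) -> bool:
    """covered/total map module name -> line counts; compare overall % to target."""
    lines_covered = sum(covered.values())
    lines_total = sum(total.values())
    overall = 100.0 * lines_covered / lines_total
    return overall >= target_pct

covered = {"auth": 450, "checkout": 380}
total = {"auth": 500, "checkout": 500}
print(coverage_met(covered, total, 80.0))  # 830/1000 = 83% -> True
```

Computing the gate over aggregated line counts (rather than averaging per-module percentages) keeps large modules from being masked by small, well-covered ones.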
Cost of Quality Metrics:
Defect validity: at least 80 percent of reported bugs will be valid bugs (the definition of a valid bug will be agreed upon with the client for each project; this percentage will also be adjusted to the complexity of the project and finalized with the client).
All agreed-upon deadlines will be met or beaten (by default metric).
Adherence to the originally agreed-upon project timelines, budget and resources; a minimal deviation of ~5% may be built in based on the project ambiguities known upfront (by default metric).
Defect injection-to-report timelines: bugs will be reported within a week of when they were injected into the product (by default metric).
Test automation maintenance costs will be less than 10% of automation development costs in subsequent releases (by default metric).
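The arithmetic behind two of these Cost of Quality metrics can be sketched as follows. The function names are hypothetical; the 80% and 10% thresholds are the SLA targets stated above:

```python
# Minimal sketch of two Cost of Quality calculations: defect validity
# (target: >= 80% of reported bugs valid) and automation maintenance
# cost (target: < 10% of automation development cost).

def defect_validity_pct(valid_bugs: int, total_bugs: int) -> float:
    """Percentage of reported bugs that turned out to be valid."""
    if total_bugs == 0:
        return 100.0  # nothing reported, nothing invalid
    return 100.0 * valid_bugs / total_bugs

def maintenance_cost_pct(maintenance_cost: float, development_cost: float) -> float:
    """Automation maintenance cost as a percentage of development cost."""
    return 100.0 * maintenance_cost / development_cost

# Example: 92 of 110 reported bugs were valid -> ~83.6%, SLA met
print(defect_validity_pct(92, 110) >= 80.0)          # True
# Example: 8k maintenance on a 100k automation build -> 8%, SLA met
print(maintenance_cost_pct(8_000, 100_000) < 10.0)   # True
```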
Test Quality/Productivity/Management Metrics:
Mandatory use of Test Optimization Plans (by default metric)
Bugs assigned to test will be regressed within 24 business hours (by default metric)
X test cases will be designed per day per tester (discuss the value of X with the client at the start of the project)
X test cases will be executed per day per tester (discuss the value of X with the client at the start of the project)
X test cases will be automated per day per tester (discuss the value of X with the client at the start of the project)
The test plan and test cases will be of high enough quality that they need no more than 2 review iterations (by default metric)
Reporting/project communication protocols (discuss and agree with the client at the start of the project, then stick to the agreed-upon plan)
Team ramp-up/ramp-down time (discuss and agree with the client at the start of the project, then stick to the agreed-upon plan)
The test automation approach and implementation will be robust enough that no more than 5 percent of defects reported through automation will be test issues (by default metric)
Bugs found by the test team vs. bugs found by the rest of the product team (discuss the allowed deviation with the client at the start of the project)
Test infrastructure setup will take no more than 1-3 days from the time a request comes in; the 1-3 day variance accommodates the complex setup needs of some projects (by default metric)
All project artifacts will be documented in a clear configuration management system and will be available on request (by default metric)
Client satisfaction surveys will be conducted every 6 months (by default metric)
Client satisfaction rating for the project will be at least 80 percent (by default metric)
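As an illustration of how the 24-business-hour regression turnaround above might be tracked, here is a minimal sketch that walks the clock forward one hour at a time, counting only business hours. The 9:00-17:00 workday and the function name are assumptions, not part of the SLA as stated:

```python
# Hypothetical deadline calculator for the "regress within 24 business
# hours" SLA, assuming a 9:00-17:00 workday (8 business hours per day)
# and weekends off.
from datetime import datetime, timedelta

WORKDAY_START, WORKDAY_END = 9, 17  # assumed working window (hours)

def regression_deadline(assigned: datetime, sla_hours: int = 24) -> datetime:
    """Advance hour by hour, consuming the SLA only during business hours."""
    t = assigned
    remaining = sla_hours
    while remaining > 0:
        # Count this hour only if it starts on a weekday within the workday.
        if t.weekday() < 5 and WORKDAY_START <= t.hour < WORKDAY_END:
            remaining -= 1
        t += timedelta(hours=1)
    return t

# Bug assigned Friday 9:00 -> 8h Fri + 8h Mon + 8h Tue -> due Tuesday 17:00
print(regression_deadline(datetime(2024, 1, 5, 9)))  # 2024-01-09 17:00:00
```

Hour-granularity is enough for an SLA dashboard; a production tracker would also subtract public holidays from the calendar.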
When a services company uses a combination of these metrics along with its value-added services, and is willing to take on rewards and penalties based on its performance against these SLAs, it goes a long way toward differentiating itself amongst the myriad QA service providers in the marketplace.