When developing software, performance is ultimately what matters to users. But how can developers reliably measure an application’s performance? Every piece of software can differ vastly, as can the computer systems used to run it, which compounds the complexity of choosing proper benchmarks and metrics.
To raise awareness of this problem and to ease the frustrations of software performance measurement, the National Institute of Standards and Technology (NIST) has published an informative document. The paper, “The Ghost in the Machine: Don’t Let It Haunt Your Software Performance Measurement,” is available for download from our site. It details where performance metrology goes wrong and suggests how to set metrics that yield reproducible, reliable results for software creators and users.
According to NIST, many factors can skew performance measurements. The organization’s goal here is to explain a range of methods for testing software, letting testers choose the selection of metrics that yields the results they need:
> The selection of the appropriate metric depends on the goal of the measurement experiment; however, the choices are limited by the services provided by the systems being benchmarked and available tooling (measurement instruments). The measures that are discussed here are not comprehensive. A few commonly used metrics that can be measured with the aid of existing profiling tools are covered. This provides a starting point for the creative metrologist to find or derive the ideal metric to reach his goal.
The sections covered in the NIST document include:
- Performance measures
- Experimental design
- Measurement techniques
- Uncontrollable factors
Most software developers and quality assurance staff will find this document useful throughout the software development cycle and into the testing phases of an application. It offers practical steps and suggestions for testing and explains issues that may arise along the way, which developers and QA alike will find interesting, such as compiler optimization, x86 hardware latencies, and pitfalls involving caching and multithreading.
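To make one of those pitfalls concrete, here is a minimal sketch (not from the NIST paper) of why a single timing measurement can mislead: cold caches, interpreter warm-up, and OS scheduling noise all skew the first runs, so it is safer to collect many samples and report the distribution. The `measure` helper and `workload` function below are hypothetical names chosen for illustration.

```python
import statistics
import time

def measure(fn, runs=30):
    """Time fn over several runs and summarize the samples.

    A single measurement can be skewed by cold caches, warm-up
    effects, or background activity, so we report a distribution
    rather than one number.
    """
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return {
        "min": min(samples),
        "median": statistics.median(samples),
        "stdev": statistics.stdev(samples),
    }

def workload():
    # Hypothetical workload standing in for the code under test.
    sum(range(10_000))

stats = measure(workload)
print(stats)
```

Reporting the minimum or median alongside the spread, rather than a lone stopwatch reading, is one simple way to get the reproducible results the document advocates.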