When implementing new technology, the expectation is often that the faster new innovations can be released, the better. Under rising pressure to innovate at a faster pace, DevOps teams look for additional ways to automate and streamline the development process.
But speed often comes at a price. New capabilities are not always bug-free the first time, which is why teams must step back and make sure the quality of the finished product meets a standard acceptable to their user base.
So how does a developer balance the pressure for speed with the need to ensure quality? Multiple tactics and processes go into ensuring that an application works seamlessly and can handle a spike in usage or an unusual traffic pattern. With the help of the performance testing experts at BlazeMeter, we share common best practices for striking a balance so that speed and quality are not an either/or choice.
Developer Best Practices
The following are common best practices, identified by BlazeMeter, that developers should consider to strike an appropriate balance between speed and quality.
Understand the broader implementation of DevOps.
DevOps matters as both a technology and a culture. It introduces an agile workflow that breaks down the silos between individual teams and spreads the same technology best practices across all of them. With one DevOps practice across the board, it is much easier to ensure that technologies such as performance testing can be applied whenever and wherever needed. Before DevOps and agile, all performance testing needs were routed through a separate team (and thus an organizational bottleneck); with autonomous small teams, that bottleneck goes away.
Incorporate performance testing through the continuous delivery cycle.
Performance testing and DevOps complement each other and make the overall performance testing process much easier. When performance testing is an integrated part of the continuous-delivery workflow, neither speed nor quality is compromised. Automating performance testing in the pipeline is critical, as automated testing ensures consistency in how and when a test is performed. Once the validity of a given test is established, automating it provides a defined benchmark for how the test is run on a regular basis. The steps of the test aren't subject to interpretation by individual testers, and are conducted with the same timings and inputs throughout. Automated testing is especially important in the DevOps pipeline because once a test is established, it runs continuously – it does not need to be set up again and again, making it ideal when running multiple tests at a time.
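One way such a pipeline step can work is as a quality gate: run the performance test, then fail the build automatically if the error rate exceeds a threshold. Below is a minimal sketch assuming JMeter-style CSV results; the sample data, file name, and 1% threshold are all illustrative (in a real pipeline the results file would come from a run such as `jmeter -n -t plan.jmx -l results.jtl`):

```shell
#!/bin/sh
# Fabricate a tiny results file so this sketch is self-contained.
# Real results would be produced by the load testing tool itself.
cat > results.jtl <<'EOF'
timeStamp,elapsed,label,success
1700000000000,120,Home,true
1700000000100,95,Home,true
1700000000300,101,Checkout,true
EOF

# Count failed samples via the "success" column, compute the error rate,
# and break the build if it crosses the (illustrative) 1% threshold.
errors=$(awk -F, 'NR>1 && $4=="false"' results.jtl | wc -l)
total=$(awk -F, 'NR>1' results.jtl | wc -l)
rate=$((errors * 100 / total))
echo "error rate: ${rate}%"
if [ "$rate" -gt 1 ]; then
  echo "FAIL: error rate above 1% threshold"
  exit 1
fi
```

Because the gate runs the same way on every build, the benchmark never drifts with individual testers' habits – exactly the consistency the automation is meant to provide.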
Make performance testing accessible across the organization with the democratization of testing.
Giving all members of the organization the ability to run performance tests requires that the process be simplified and streamlined, and it pays great dividends. Once the transition is underway, anyone can implement a test rather than waiting for a performance testing expert to do so. If a specific part of an app needs urgent attention, it is easy to go in and run a test. Offering self-service access to anyone interested in performance testing is another way to ensure the democratization of the process.
Ensure the democratization of testing tools.
Developers are often only as good as their access to resources and tools. Therefore, it's important that testing be compatible with whatever combination of open source tools development teams already use. Open source performance testing is designed to fit into workflows that are already in place, so it does not disrupt the process and requires less upfront work to implement correctly. For example, BlazeMeter works out of the box with Jenkins and other continuous integration servers, and with nine open source testing tools, including JMeter, Selenium, Gatling, Locust, Tsung and The Grinder.
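As a sketch of what "fitting into an existing workflow" can look like, BlazeMeter's open source Taurus tool (`bzt`) wraps existing JMeter, Selenium, or Gatling assets in a small YAML file that a CI server can run with a single command. The scenario name, file paths, and load figures below are illustrative, and the config assumes a pre-existing JMeter plan:

```shell
#!/bin/sh
# Hypothetical Taurus config wrapping an existing JMeter test plan.
cat > perf.yml <<'EOF'
execution:
- executor: jmeter
  scenario: quick-check
  concurrency: 20      # illustrative: 20 virtual users
  ramp-up: 1m
  hold-for: 5m

scenarios:
  quick-check:
    script: plan.jmx   # existing JMeter plan; path is illustrative
EOF

# In CI, one command would then run the test (requires Taurus installed,
# e.g. via `pip install bzt`); echoed here rather than executed:
echo "bzt perf.yml"
```

Because the YAML only points at assets the team already has, adopting it does not force anyone to rewrite their JMeter or Selenium work.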
Run multiple tests at the same time.
Contrary to popular belief, performance tests do not have to be run one at a time. In fact, it is more efficient to run multiple tests in parallel, because the combined runtime is only as long as the longest single test. Running tests concurrently can save hours, depending on the size and number of tests previously run sequentially. When code is checked in, a build is performed, or a deployment is staged, the feedback needs to be as close to instant as possible. A four- or five-hour test cycle on every build may have made sense in a nightly or weekly build world, but in the world of continuous delivery, "always have a working build" means proving the current build works minutes or even seconds after it runs, not hours later.
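The arithmetic above can be demonstrated with a few mock suites, where `sleep` stands in for a real test run (suite names and durations are illustrative). Run sequentially these would take about six seconds; run concurrently, the wall time is roughly that of the slowest suite:

```shell
#!/bin/sh
# Mock "test suite": sleep for the given duration, then record completion.
run_suite() {
  sleep "$2"
  echo "$1 done" >> results.log
}

: > results.log
start=$(date +%s)
run_suite api 2 &       # launch each suite in the background...
run_suite ui 1 &
run_suite load 3 &
wait                    # ...then block until every suite finishes
elapsed=$(( $(date +%s) - start ))
echo "all suites finished in ~${elapsed}s (sequential would be ~6s)"
```

The same pattern scales to real runners: a CI job can fan tests out across agents and `wait` (or its platform equivalent) on all of them before reporting.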