Removing the Performance Testing Bottleneck

When my company started our journey toward continuous testing, our first hurdle was functional testing. We were focused on real-time test automation: ensuring that when code was checked in, automated tests validated the functionality. As we matured, we realized that while this was a huge accomplishment, it meant nothing in terms of speed because our performance tests were holding up our production deployments. We needed a new way to think about performance testing.

Around this same time, we were evaluating application performance monitoring (APM) tools that would provide performance insights in production. During our proof of concept, we realized these tools would also be a great addition to our non-production testing, where we could use them for faster troubleshooting of performance issues.

What we didn’t initially realize was that these tools are also great at identifying application degradation under any load. If we instrumented our non-production environments with an APM tool, we could establish a performance baseline; then, during our testing cycles, if response times started to degrade, we could automatically flag the issue and stop further deployment so the team could address it immediately.

Using APM tools in non-production environments provides real-time, end-to-end analytics that teams can use to immediately remediate performance issues, helping reduce the issues found later in longer-running performance tests.
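As a rough illustration of this kind of deployment gate, here is a minimal sketch in Python. It assumes a hypothetical APM REST endpoint and metric names; any real APM tool has its own API, so everything below the imports is an illustrative placeholder, not a specific product's interface.

```python
import sys
import requests

# Sketch against a hypothetical APM REST API; substitute your tool's
# real endpoint, authentication, and metric names.
APM_URL = "https://apm.example.com/api/metrics"
BASELINE_P95_MS = 250.0   # p95 response time captured during a known-good run
TOLERANCE = 1.10          # allow 10% drift before flagging degradation

def current_p95_ms(service: str) -> float:
    """Fetch the current 95th-percentile response time for a service."""
    resp = requests.get(APM_URL, params={"service": service, "stat": "p95"}, timeout=10)
    resp.raise_for_status()
    return float(resp.json()["value_ms"])

if __name__ == "__main__":
    p95 = current_p95_ms("checkout")  # "checkout" is a placeholder service name
    if p95 > BASELINE_P95_MS * TOLERANCE:
        print(f"FAIL: p95 {p95:.0f} ms exceeds baseline {BASELINE_P95_MS:.0f} ms")
        sys.exit(1)  # a nonzero exit stops the pipeline before further deployment
    print(f"OK: p95 {p95:.0f} ms is within baseline")
```

Wired into a pipeline stage, the nonzero exit code halts the deployment so the team can investigate before the degradation reaches production.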

While this insight was good, it was not enough; we still needed a way to increase the load, because memory leaks and other issues surfaced only under higher transaction volumes. We also needed a way for feature teams to take accountability for performance. We practice DevOps, and having a separate team for performance testing does not align with those principles. This is when we started our shift to open source performance testing tools.

Open source performance testing tools allowed us to create smaller, component-level tests that teams could run at all phases of development. Along with service virtualization, which helps eliminate constraints from downstream systems or third parties, we could test each component in isolation. Developers could run these tests locally as part of unit testing, and as they checked in code and deployments started, the pipeline could run additional tests to ensure critical transactions performed to their baselines (see the sketches below). Because these tests leveraged coding practices similar to those of our functional tests, the feature teams maintained them, removing the dependency on a separate performance testing team.
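To make this concrete, here is a minimal component-level load test using Locust, one open source performance testing tool (the article doesn't name a specific tool, so this is one possible choice). The host and endpoint are hypothetical placeholders for a component under test:

```python
# Component-level load test sketch using Locust (https://locust.io).
from locust import HttpUser, task, between

class CatalogUser(HttpUser):
    host = "http://localhost:8080"   # hypothetical component, run in isolation
    wait_time = between(1, 3)        # think time between requests

    @task
    def search_catalog(self):
        # A critical transaction; Locust records response times that the
        # pipeline can compare against the performance baseline.
        self.client.get("/api/catalog/search", params={"q": "widget"})
```

A developer could run this locally (for example, `locust -f catalog_test.py --headless -u 50 -r 5 --run-time 2m`) or the pipeline could run it after each deployment. And for the service virtualization piece, a stub as simple as the following sketch can stand in for a downstream dependency so the component is truly tested in isolation (the inventory service here is a hypothetical example, with Flask used only for brevity):

```python
# Minimal service-virtualization stub for a hypothetical downstream
# inventory service, so load tests never hit the real system.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/inventory/<sku>")
def inventory(sku):
    # Canned response with predictable latency and payload.
    return jsonify({"sku": sku, "in_stock": True, "quantity": 100})

if __name__ == "__main__":
    app.run(port=9090)
```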

Making the feature teams accountable for all aspects of a feature, including performance, allowed our performance testers to expand their roles into performance engineers. Now they are engaged throughout the process to help architect more efficient ecosystems and build the next suite of performance-related tooling.

By leveraging application monitoring, service virtualization, and open source performance testing tools, we can release features to production without waiting for long-running performance tests to complete. When we do have to run end-to-end performance tests, they are no longer in the critical path for a deployment. And we now have an environment where performance testing is accounted for through every cycle of development.

Adam Auerbach is presenting the session The Journey to Continuous Testing at STARWEST 2016, October 2–7 in Anaheim, California.
