Continuous Exploratory Testing: Expanding Critical Testing across the Delivery Cycle

Continuous testing is the process of executing automated tests to obtain rapid feedback on the business risks associated with a software release. Where does that leave exploratory testing? It’s not automated, but it’s certainly critical for determining whether a release candidate has an acceptable level of risk.

Test automation is perfect for repeatedly checking whether incremental application changes break your existing functionality. However, where test automation falls short is at helping you determine if new functionality truly meets expectations. Does it address the business needs behind the user story? Does it do so in a way that’s easy to use, resource-efficient, reliable, and consistent with the rest of your application?
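
To make that distinction concrete, here is a minimal sketch, in Python with pytest, of the kind of check automation handles well. The discount logic is a hypothetical stand-in for real application code, not something from this article:

    import pytest


    def apply_discount(total: float, code: str) -> float:
        """Toy stand-in for existing, already-shipped functionality."""
        if code == "SAVE10":
            return round(total * 0.90, 2)
        raise ValueError(f"unknown discount code: {code}")


    def test_known_code_still_discounts():
        # Automation is great at answering "does it still work?" on every change.
        assert apply_discount(100.00, "SAVE10") == 90.00


    def test_unknown_code_is_rejected():
        with pytest.raises(ValueError):
            apply_discount(100.00, "BOGUS")

Checks like these can run on every commit and will flag a regression immediately, but they say nothing about whether the discount flow is easy to use or whether it satisfies the intent behind the user story. That is where exploratory testing comes in.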

Exploratory testing promotes the creative, critical testing required to answer these questions. Obviously, it doesn’t make sense to repeat the same exploratory tests continuously across and beyond a sprint, but exploratory testing can be a continuous part of each delivery cycle. 

Here are a few ways teams embed exploratory testing throughout their process.

Perform ad hoc exploratory testing as each user story is implemented

This is the exploratory testing equivalent of peer code review. When a developer completes a user story, they sit down with a tester. First, the tester starts testing while providing a running commentary on what they are doing and why. Next, the developer takes control, explaining how they would test the software given their knowledge of the implementation details and challenges. The developer gains a user- and business-focused perspective of the functionality, and the tester learns about the inherent technical risks.

Another tactic is to have the developer and a tester test the same feature independently at the same time, then compare notes at the end of the session. This often turns testing into a friendly competition, with each participant trying to uncover the most, or the “best,” issues in the allotted time.

Align exploratory testing sessions with full regression testing

It’s simply not possible to perform exploratory testing or full regression testing on every code commit. That’s what smoke testing is for. Instead, many teams run full regression testing and session-based exploratory testing in parallel a few times per week, whenever they’ve implemented new functionality that an end-user could feasibly exercise.

For optimal results, these sessions should be lightly planned and tightly timeboxed, include diverse perspectives, and take the Six Thinking Hats approach seriously.
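
As a rough sketch of how a team might separate those automated tiers, the snippet below tags tests with pytest markers so a quick smoke suite can run on every commit while the heavier regression suite runs on the less frequent cadence described above. The marker names, checks, and cart logic are illustrative assumptions, not prescriptions from this article:

    import pytest


    def add_to_cart(cart: list, item: str) -> list:
        """Toy stand-in for real application code."""
        return cart + [item]


    @pytest.mark.smoke
    def test_add_single_item():
        # Fast sanity check, suitable for every commit.
        assert add_to_cart([], "book") == ["book"]


    @pytest.mark.regression
    def test_adding_many_items_preserves_order():
        # Broader, slower check reserved for the full regression runs.
        items = [f"item-{i}" for i in range(1000)]
        cart = []
        for item in items:
            cart = add_to_cart(cart, item)
        assert cart == items

With the markers registered in pytest.ini, CI could run something like “pytest -m smoke” on each commit and “pytest -m regression” (or the entire suite) during the scheduled runs that coincide with the exploratory sessions.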

Host blitz exploratory sessions for critical functionality

The best way to uncover user experience issues before your end-users do is to gather feedback from a broad range of perspectives prior to release. One way is to host “blitz” exploratory testing sessions: when you’re wrapping up work on critical new functionality, invite people from a variety of backgrounds and teams to participate in a short, timeboxed session. Incentives can help drive participation, maximize results, and make testing fun.

Using test automation to continuously check the integrity of existing functionality is certainly critical. However, if you’re not also making exploratory testing a continuous part of your process, how will you know if the new functionality meets expectations? 

The goal of continuous testing is to understand whether a release candidate has an acceptable level of risk. Exploratory testing is perfectly suited for helping you answer that critical question.
