Finding a Middle Ground between Exploratory Testing and Total Automation
Testers seem to be having the same argument over and over again.
The automator wants to get rid of human exploration—that is, they want to press a button, get a green bar, and ship to production. In some cases, they might want to commit to version control, have something else automatically press the button, and automatically ship to production. This is akin to having robots cut down a forest and stack the wood: no humans involved.
The explorer, on the other hand, wants a human intervention step. They see tools more like a chainsaw. The chainsaw allows the human to go ten times as fast, but a human is still in charge, driving the process. The explorer doesn’t want robots to do everything automatically; they want to be a cyborg, a Six Million Dollar Man, balancing the human and the machine.
When the explorer says that of course tools are important, the automator gets angry, because that is not what the automator means by automation. The result could be a no-hire, a lost chance to collaborate, or even the end of a friendship.
I believe the two have something to learn from each other.
The implicit assumption of the automator is that the automated tests are all there is. Of course, that is not the case; most web applications automated with Selenium still need someone to check printing, tab order, font size, and plenty of other features that are hard (if not impossible) to automate. The tools influence the thinking, tempting testers to ignore risks the tool does not support. The best testers still make a list of these other risks and invest some time in exploring them.
Writing automation also takes a long time. Plenty of risks are expensive to put into code and unlikely to break again once they work, so testers can get away with checking them once by hand. The automator’s worldview ignores these problems.
Meanwhile, the explorer dismisses the value of tools that run all the time, unattended. At best, the explorer might have Selenium running on Jenkins on every commit, or spin up virtual servers on demand; these tools significantly reduce risk, and they can be used well or poorly. Many explorers dismiss them because they don’t understand them, because they think they’re someone else’s job, or because the programmers maintain them.
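To make the kind of unattended check the explorer tends to dismiss concrete, here is a minimal sketch of a Selenium test that a build server such as Jenkins could run on every commit. The URL, page structure, and element names are hypothetical, and the details would differ for any real application; the point is only that a shallow browser check like this can run continuously with no human at the keyboard.

```python
# Minimal sketch of an unattended browser check a CI server (e.g., Jenkins)
# could run on every commit. The staging URL and element names below are
# hypothetical placeholders, not a real application.
from selenium import webdriver
from selenium.webdriver.common.by import By


def test_login_page_renders():
    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")  # no display needed on a build agent
    driver = webdriver.Chrome(options=options)
    try:
        driver.get("https://staging.example.com/login")  # hypothetical URL
        # Deliberately shallow checks: the page loads and the form is present.
        assert "Login" in driver.title
        assert driver.find_element(By.NAME, "username").is_displayed()
        assert driver.find_element(By.NAME, "password").is_displayed()
    finally:
        driver.quit()
```

A check this shallow says nothing about printing, tab order, or font size; it simply keeps watch while the humans explore.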
The explorer can benefit by expanding their idea of risk management, and the same goes for the automator.
I suggest we start the conversation from what we agree on, explain to the other person where we differ, then figure out if there is a way to blend the ideas, like peanut butter and chocolate. The ideal would be to have the code deployed to production on every commit while continuously exploring production and staging for emergent risks, using the logs, customer feedback, new features, version control, and developer interviews to help inform us of those risks.
It’s a tall order, I know. Still, I think a collaboration is better than either approach on its own.