The Word “Automation” Has Led Us Astray
If you’ve written automation for software quality, there’s a good chance you did it wrong. Don’t feel bad; we’ve all been doing it wrong. It’s not our fault.
We were led astray by the word automation.
Automation is about automatically driving human-accessible tools to accomplish specific tasks.
When people started programmatically driving their software system under test (SUT) for quality purposes, overloading the word automation seemed like the best way to describe the new practice. But the word applies poorly to software quality: “automating” your SUT means making it do stuff, and that’s usually how management measures output. Quality verifications are added as an afterthought and have little to do with the “automation” objective, so they tend to be poorly planned and documented.
The word automation for software testing distracts people from the business value of the activity: measuring quality.
The misunderstanding that automation for software quality is just doing what humans do (i.e., manual testing), but faster and more often, causes business risk. Unless you’re very clear and specific on what is being measured, the quality measure is incomplete, and manual testers must verify it anyway. And it’s important to remember that manual tests are designed by people to be run by people. They do not make ideal automated measurements because they tend to be long, complicated, and burdened with too many or too few verifications.
Automated verifications of services and APIs tend to be more effective, but by that definition, this isn’t “automation” either.
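A focused service-level check illustrates the difference: rather than scripting a long human workflow, it makes one targeted measurement with an unambiguous result. The sketch below is illustrative, not from MetaAutomation itself; the `get_order_status` call stands in for a real HTTP request to a hypothetical order API.

```python
# Minimal sketch of a focused API verification: one check, one verification.
# get_order_status is a stand-in for a real HTTP call to a hypothetical
# endpoint; swap in urllib/requests against your actual SUT.

def get_order_status(order_id):
    # Simulated service response for the sketch.
    return {"orderId": order_id, "status": "shipped", "items": 3}

def verify_order_status(order_id, expected_status):
    """A single, explicit quality measurement with a clear pass/fail."""
    response = get_order_status(order_id)
    actual = response.get("status")
    assert actual == expected_status, (
        f"Order {order_id}: expected status {expected_status!r}, got {actual!r}"
    )
    return actual

print(verify_order_status(1042, "shipped"))  # prints "shipped"
```

Because the check measures exactly one thing, a failure points directly at what broke instead of leaving a tester to untangle a long scripted workflow.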
At the core of the paradigm shift is an important dichotomy:
People are very good at:
- Finding bugs
- Working around issues
- Perceiving and judging quality
But they’re poor at:
- Quickly and reliably repeating steps many times
- Making accurate measurements
- Keeping track of details
Computers driving the SUT are very good at the skills humans are poor at, but they are poor at the skills humans are good at.
Conventional automation for software quality misses these distinctions, and therefore comes with significant opportunity costs.
To create software that matters and to be effective and efficient at measuring quality, your team must move away from conventional misguided “automation” and toward a more value-oriented paradigm. To describe this value, I created MetaAutomation.
MetaAutomation is a pattern language of five patterns. It’s a guide to measuring and regressing software quality quickly, reliably, and scalably, with a focus on business requirements for the SUT. The “Meta” addresses the big-picture reasons for the effort, the nature of what automated actions can do, and the post-measurement action items and efficient, robust communication where “automation” by itself fails.
MetaAutomation shows how to maximize quality measurement, knowledge, communication, and productivity. Imagine self-documenting hierarchical steps automatically tracked and marked with “Pass,” “Fail,” or “Blocked.” Imagine a robust solution to the flaky-test problem.
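To make the “self-documenting hierarchical steps” idea concrete, here is a minimal sketch, not MetaAutomation itself, of step records that track Pass/Fail/Blocked through a hierarchy. The class and step names are illustrative assumptions; the key behavior is that steps after a failure are marked Blocked rather than Fail, preserving diagnostic information.

```python
# Sketch of self-documenting hierarchical steps with Pass/Fail/Blocked.
PASS, FAIL, BLOCKED = "Pass", "Fail", "Blocked"

class Step:
    """One node in a hierarchical, self-documenting check record."""
    def __init__(self, name, action=None):
        self.name = name
        self.action = action      # callable to execute, or None for a grouping step
        self.children = []
        self.outcome = None

    def child(self, name, action=None):
        step = Step(name, action)
        self.children.append(step)
        return step

    def run(self, blocked=False):
        if blocked:
            self.outcome = BLOCKED
        else:
            try:
                if self.action:
                    self.action()
                self.outcome = PASS
            except Exception:
                self.outcome = FAIL
        failed = self.outcome != PASS
        for c in self.children:
            # Steps after a failure are Blocked, not Fail: they never ran.
            c.run(blocked=failed)
            if c.outcome != PASS:
                failed = True
                if self.outcome == PASS:
                    self.outcome = FAIL   # a child failure fails the parent
        return self.outcome

    def report(self, depth=0):
        lines = ["  " * depth + f"{self.name}: {self.outcome}"]
        for c in self.children:
            lines.extend(c.report(depth + 1))
        return lines

# Hypothetical scenario: the second step fails, so the third is Blocked.
root = Step("Place an order")
root.child("Log in", lambda: None)
root.child("Add item to cart", lambda: 1 / 0)
root.child("Check out")
root.run()
print("\n".join(root.report()))
```

Every run produces a readable artifact of what happened and what never got the chance to happen, which is exactly the communication value that pass/fail-only scripts throw away.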
MetaAutomation, or something very much like it, is an important part of the future of software quality.
Matt Griscom will be presenting his session MetaAutomation: Five Patterns for Test Automation at STARWEST 2015, from September 27–October 2 in Anaheim, California.