Test Automation Gets No Respect

The world of software automation for quality is riddled with failures. When the people automating the software under test (SUT) fail to create reliably running tests, or when the effort clearly takes more time than first estimated, management and the rest of the team lose confidence.

In any case, driving the SUT through scenarios is too often seen as a risky, low-value afterthought. After all, the developers can test the product themselves and learn the things they expected the test team to tell them, but which that team somehow can't reliably deliver.

The conventional approach to software automation for quality creates a losing situation for the people doing the work.

If they are told that the highest-value automation is end-to-end automation of the product, including the web page or GUI, they are likely doomed to write a system that creates many false positives: test failures that have nothing to do with product quality. Each false positive creates more work for them, because they must follow up with a debug session just to discover whether there is any actionable information for the rest of the team.

The broader team pays little attention to the results from the checks because they know:

  1. False positives are common, and if there really is a product bug, the authors of the check would discover that in a debug session and tell them.
  2. The checks don’t measure what they’re designed to measure, because they can’t possibly match the perception and smarts of a human testing the SUT directly.

With the correct focus on verifying and regressing the business requirements of the SUT, rather than on automating the SUT to make it do stuff, the false-positive problem and the what-is-the-check-verifying problem go away. I created the pattern language MetaAutomation to describe how to take the optimal approach to solving these problems while creating many other benefits along the way.

  • The focus is on prioritized business requirements, not manual tests or scenarios
  • Checks run faster and scale better with resources
  • Check steps are detailed and self-documenting, with pass, fail, or blocked status recorded in the results
  • Check artifacts are pure data, to enable robust analysis on results and across check runs
  • The quality measurement results are transparent and available for sophisticated queries and flexible presentations across the whole team
  • Performance data is recorded in line with the check step results

With MetaAutomation, the test and QA role can produce speedy, comprehensive, detailed, and transparent quality information to ensure that functional quality always gets better.

Matt Griscom will be presenting his session MetaAutomation: Five Patterns for Test Automation at STARWEST 2015, from September 27–October 2 in Anaheim, California.
