The Test Automation Design Paradox
Testing and automation involve various paradoxes that are worth examining for the insight they give into the challenges and limitations of our profession.
One example is Beizer’s Pesticide Paradox: "Every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffectual." Automated tests are particularly prone to this problem, since they usually follow the same path again and again. It is possible to make them work differently on every execution (for example, by randomizing some of the input data), but it is not easy. Fortunately, software bugs differ from their counterparts in nature in one respect: if bugs in nature are immune to a pesticide, they will multiply and become a pest again; software bugs won’t. Making sure tests work at several levels and from different angles can help keep the number of missed bugs manageable.
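As a sketch of what such randomization can look like, the test below draws fresh input on every run but logs its random seed, so a failing run can still be reproduced. The function under test and all names are purely illustrative, not taken from any particular framework.

```python
import random
import time

def apply_discount(price: float, percent: float) -> float:
    """Toy function under test: price after a percentage discount."""
    return round(price * (1 - percent / 100), 2)

def test_discount_randomized():
    # Re-seed from the clock, but record the seed so a failing run can be
    # replayed with exactly the same data.
    seed = int(time.time())
    random.seed(seed)
    print(f"random seed for this run: {seed}")

    for _ in range(100):
        price = round(random.uniform(0.01, 1000.00), 2)
        percent = random.choice([0, 5, 10, 25, 50])
        discounted = apply_discount(price, percent)
        # The outcome should never exceed the original price or go negative.
        assert 0 <= discounted <= price, (price, percent, seed)

test_discount_randomized()
```

Because the seed appears in the test log, a bug found by one random combination of inputs is not lost; the run can be replayed exactly while still letting every other run walk a new path.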
Another paradox is the need to test the tests. This can be seen as a contemporary version of the old Latin question "quis custodiet ipsos custodes," which translates to "who watches the watchmen?" It is good to distinguish between problems in the test logic, such as a wrong outcome expectation, and problems in the automation. The tests themselves are effectively tested whenever we rely on deliberate redundancy between the tests and the application to find mismatches. The automation, on the other hand, does warrant an additional testing effort. For example, when you use keywords or other well-defined functions, consider creating some practice test runs before you use them in the regular tests.
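A minimal sketch of such a practice run, assuming a hypothetical keyword helper written in Python: the keyword is exercised on its own, including with invalid input, before any regular test depends on it.

```python
import datetime

# Hypothetical keyword implementation; parse_iso_date is an illustrative
# name, not part of any real framework.
def parse_iso_date(text: str) -> datetime.date:
    """Keyword helper: turn a 'YYYY-MM-DD' string into a date object."""
    return datetime.datetime.strptime(text.strip(), "%Y-%m-%d").date()

def test_parse_iso_date():
    """Practice run for the keyword itself, before regular tests use it."""
    assert parse_iso_date("2024-02-29") == datetime.date(2024, 2, 29)
    assert parse_iso_date("  2024-01-01 ") == datetime.date(2024, 1, 1)
    try:
        parse_iso_date("2023-02-29")  # 2023 is not a leap year
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for an invalid date")

test_parse_iso_date()
```

A failure here points at the automation itself, not at the application, which keeps the two kinds of problems cleanly separated.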
Tests need to be architected well for their automation to be successful, but this can be difficult when the tester does not have the necessary engineering perspective. This is what I call the “test automation design paradox.”
The way tests are written has a major impact on their automation. For example, if testers describe tests as long sequences of detailed steps, it is very hard for an automation engineer to create readable and maintainable automation from them. In that sense, automation success is not so much a technical challenge as it is a test design challenge.
So, if testers are not engineers, but an engineering perspective on tests is important for automation success, how do you proceed? I see two ways: teamwork, or a method that allows non-technical testers to create maintainable tests.
As with so many areas in engineering, factoring in teamwork is a no-brainer; in particular, when engineers and testers work together, the results can be good. One way automation engineers, or even regular developers, can help testers write tests that are efficient to automate is by steering them away from repetitive sequences of detailed steps, as the sketch below illustrates.
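In the illustrative sketch below, a repetitive click-and-type sequence is wrapped into a single higher-level action. The FakeDriver and create_account names are invented for the example and stand in for whatever automation library a team actually uses.

```python
class FakeDriver:
    """Stand-in for a real UI driver; it just records the steps requested."""
    def __init__(self):
        self.steps = []

    def click(self, target: str) -> None:
        self.steps.append(("click", target))

    def type_text(self, field: str, text: str) -> None:
        self.steps.append(("type", field, text))

# Higher-level action: the click/type sequence lives in exactly one place,
# so tests read at the business level and a UI change is fixed locally.
def create_account(driver: FakeDriver, name: str, email: str) -> None:
    driver.click("New Account")
    driver.type_text("name", name)
    driver.type_text("email", email)
    driver.click("Save")

driver = FakeDriver()
create_account(driver, "Ada Lovelace", "ada@example.com")
create_account(driver, "Alan Turing", "alan@example.com")
print(f"{len(driver.steps)} low-level steps behind 2 readable action calls")
```

When the "New Account" flow changes, only create_account needs updating, instead of every test that repeated those four steps.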
For the method, I advocate Action Based Testing. It is based on "test modules" that contain tests written as keyword-driven "actions." Using this method involves two main steps:

1. Identify the test modules, creating a framework into which the tests will fit. Each test module should have a clear and differentiated scope.

2. For each test module, develop the tests within the scope of that module, using actions and checks; a set of best practices has been developed to guide this process.
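To make the idea concrete, here is a minimal sketch of how keyword-driven rows in a test module might be dispatched to action implementations. This is a toy interpreter for illustration only, not the actual format or engine of any Action Based Testing tool.

```python
# Each row of a test module is an action name followed by its arguments.
def enter(field: str, value: str) -> None:
    print(f"enter {value!r} into {field}")

def check(field: str, expected: str) -> None:
    print(f"check that {field} shows {expected!r}")

ACTIONS = {"enter": enter, "check": check}

# A test module with one narrow, differentiated scope: discount calculation.
discount_module = [
    ["enter", "price", "100.00"],
    ["enter", "discount", "25%"],
    ["check", "total", "75.00"],
]

for action, *args in discount_module:
    ACTIONS[action](*args)
```

The point of the design is the division of labor: testers write the rows at the level of the module's scope, while engineers implement and maintain the actions behind them.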
Following this "frame first, then develop" method gives testers a good structure to work in and can lead to manageable and maintainable automated tests without the need to make engineers out of testers. Instead, it brings about cooperation in teams, helping them achieve great automation results together.