Analyzing the Value of a Test Tool Approach
Over the past ten years, the questions I get from test managers have subtly shifted from “How can we automate our process?” to “How do we assess the value of our test tool approach?” Hidden within that second question is a fear that the effort lacks the return senior management was hoping for.
There is a way to analyze the value of a test tool approach that does not require writing code—only the ability to read it a little.
Go into the requirements system and pull up the last five complex, scary pieces of work. Take those requirement IDs to wherever the work is happening and ask to see the checks that cover those requirements. This will be easier if your team is doing Cucumber and has English-like tests for each story; just ask to see those tests in version control and the step definitions.
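For instance, a story-level check in a Cucumber suite might pair a short Gherkin scenario with step definitions like the following. This is a minimal sketch assuming a cucumber-jvm project; the feature, step wording, and class names are all hypothetical.

```gherkin
Feature: Account transfer
  Scenario: Transfer between checking and savings
    Given a checking account with a balance of 100.00
    When I transfer 25.00 to my savings account
    Then my checking balance should be 75.00
```

```java
// Hypothetical step definitions for the scenario above (cucumber-jvm).
import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import static org.junit.jupiter.api.Assertions.assertEquals;

public class TransferSteps {
    private double checking;
    private double savings;

    @Given("a checking account with a balance of {double}")
    public void aCheckingAccountWithBalance(double balance) {
        checking = balance;
    }

    @When("I transfer {double} to my savings account")
    public void iTransferToSavings(double amount) {
        // A real suite would drive the application here; this sketch just models it.
        checking -= amount;
        savings += amount;
    }

    @Then("my checking balance should be {double}")
    public void checkingBalanceShouldBe(double expected) {
        assertEquals(expected, checking, 0.001);
    }
}
```

When you read your team’s real versions, notice how far the step code sits from the plain-English text, and how much plumbing connects the two.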
Now look at the code. How complex is it? How much work did it take to implement those step definitions? If the tests are in version control, ask to go through the history of changes to those steps; frequent, substantial revisions are a sign the code is hard to maintain.
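If the team uses Git, a quick way to gauge that churn is to see how often the step files change. The directory path below is hypothetical; Cucumber projects lay this out differently depending on the language.

```bash
# Commits touching the step definitions, newest first, with change sizes
git log --oneline --stat -- features/step_definitions/

# Rough churn per file: how many commits have touched each step file
git log --name-only --pretty=format: -- features/step_definitions/ | grep . | sort | uniq -c | sort -rn
```

A step file that changes with nearly every story is a maintenance cost worth weighing against whatever the checks actually catch.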
Next, evaluate whether these automated checks find any bugs. Just look at the past fifty defects and ask if they were found by automated tooling or by a human. Of the ones found by tooling, try to figure out if they mattered—and if they would have been found easily by manual methods. (You don’t need a tool to find out a login is broken, but it might help you find the defect faster.)
Sometimes the tools don’t help with finding defects later; instead, they help facilitate a conversation about what to build. That’s a fair point, but if so, why create the automation? The team could simply have the conversation, document the test ideas in some way, and stop. Stopping there can have advantages, because tools tend to force a particular structure and way of thinking onto the work. Dropping the tool, even as a thought exercise, allows us to reimagine the work.
For example, a single given-when-then scenario might take ten lines of text, when the same idea can be captured in a single row of a spreadsheet, where the features, inputs, and expected results are columns. Even the spreadsheet tends to constrain our thinking, as tab order, enter-should-submit, usability, loss of connectivity, and multiple-user-edit do not fit easily into rows and columns.
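To make that concrete, here is a hypothetical illustration, with an invented rule and values: the same discount idea as a given-when-then scenario and as a single spreadsheet row.

```gherkin
Scenario: Loyalty discount at checkout
  Given a loyalty member
  And a cart with a total of 120.00
  When the member checks out
  Then a 10 percent discount is applied
  And the order total is 108.00
```

```
Feature  | Inputs                        | Expected result
Discount | loyalty member, cart = 120.00 | 10% off, total = 108.00
```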
Some teams capture the ideas on a whiteboard or in a Google document. You might decide it makes more sense to hit the API rather than the GUI, especially if that is where the defects tend to come from.
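As a sketch of what an API-level check might look like, here is a minimal example using Java’s built-in HTTP client; the endpoint, port, and payload are all assumptions for illustration.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class LoginApiCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Hypothetical login endpoint; a GUI check would fill in the form instead.
        HttpRequest request = HttpRequest.newBuilder(URI.create("http://localhost:8080/api/login"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"user\":\"alice\",\"password\":\"secret\"}"))
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() != 200) {
            throw new AssertionError("Expected 200 from /api/login, got " + response.statusCode());
        }
        System.out.println("Login endpoint responded as expected.");
    }
}
```

The same check through the GUI would need a browser driver, locators, and waits; whether that extra cost buys you anything depends on where your defects actually live.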
My goal isn’t to recommend a tool or an approach, but instead to equip you to figure out whether what your team is doing is working, what you might be able to safely drop, and what might be worth picking up.
If you aren’t doing tooling at all, that’s OK too. Examining what a tool might do for you, what kinds of bugs it could find, and where in your process to inject it is a good place to start.