What to Do When Bugs Are Found—Based on When They Are Found
Action Based Testing (ABT) is built on the premise that good test design drives automation success. It uses a modular keyword-driven approach: tests are organized in "test modules" and built up from sequences of "actions", each consisting of an action name (keyword) and zero or more arguments. In our TestArchitect tool we define these in a spreadsheet-like format that is easy to work with. A test module can contain multiple test cases, all of which need to fit the scope of that particular module. The test cases can form a narrative in which each test case sets up the preconditions for the next one.
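To make the structure concrete, here is a minimal sketch in Python of how such a test module could be represented and executed. This is an illustration only: the action names, the tiny dispatcher, and the module layout are assumptions, not the TestArchitect engine, where the rows live in a spreadsheet-like editor rather than in code.

```python
# Minimal keyword-driven sketch (hypothetical; not the TestArchitect engine).
# Each row of a test module is an action name plus zero or more arguments.

def enter(field, value):
    print(f"enter {value!r} into {field!r}")

def click(control):
    print(f"click {control!r}")

def check_message(expected):
    # In a real framework this would compare against the application under test.
    print(f"check that the message equals {expected!r}")

# A registry maps action names (keywords) to their implementations.
ACTIONS = {"enter": enter, "click": click, "check message": check_message}

# A test module: a named scope containing test cases built from action rows.
login_module = {
    "name": "Login",
    "test cases": [
        ("TC 01 - valid login", [
            ("enter", ["user name", "jdoe"]),
            ("enter", ["password", "secret"]),
            ("click", ["log in"]),
            ("check message", ["Welcome, jdoe"]),
        ]),
    ],
}

def run_module(module):
    print(f"Test module: {module['name']}")
    for case_name, rows in module["test cases"]:
        print(f"  {case_name}")
        for action, args in rows:
            ACTIONS[action](*args)  # dispatch the keyword to its implementation

run_module(login_module)
```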
The development and automation of test modules fits well into sprints. Typically a sprint starts with higher-level test modules at a similar level of abstraction as the user stories and acceptance criteria. Once the team starts building the detailed UI, the lower-level "interaction test" modules can be created as well, as illustrated below.
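To illustrate the two levels, here are some hypothetical action rows in the same sketch notation as above: the higher-level business action stays close to a user story, while the lower-level interaction rows spell out the detailed UI steps. The action and control names are assumptions for illustration.

```python
# Hypothetical action rows at two levels of abstraction.
# A higher-level, business-level action, similar in scope to a user story:
business_row = ("transfer funds", ["checking", "savings", "250.00"])

# Lower-level "interaction test" rows exercising the detailed UI
# that the business-level action hides:
interaction_rows = [
    ("select", ["from account", "checking"]),
    ("select", ["to account", "savings"]),
    ("enter", ["amount", "250.00"]),
    ("click", ["transfer"]),
]
```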
When executing test modules, an interesting question to ask is "What needs to happen with issues that are found?" I like to make a distinction between issues found during a sprint and issues found after the team has declared the functionality under test "done".
For issues found while a sprint is still ongoing, consider skipping a heavyweight bug tracking process. Share any failing test module, as is, with the rest of the team, whether it contains one failure or many. If the scope of the test module is well defined, the focus can be on that scope, such as "the login process has issues". A developer may still be working on that code, can take a look at what the test module is revealing, and will likely be pleased that the module arrived without delay. If you follow an acceptance test driven development process, something like this may already be routine. Do not enter these bugs in a bug tracking system or ALM; avoid the overhead and delays of reproducing, bug crawling, prioritizing, assigning, and so on.
For significant issues that are found after a sprint has closed, I prefer to follow a more formalized process, which is also visible outside the team. The further down the road a problem is found, the more important tracking bugs becomes.
For problems that come up after a sprint, I ask three questions:
- Is it a bug? A percentage of reported problems are caused by defects, but others stem from different factors, such as a misunderstanding by users. Such lack of clarity may also need attention, but not necessarily from developers.
- What is the root cause? Make sure the real problem is addressed, not just a quick fix of a symptom.
- Why didn't we find it? This is not to assign blame to testers, but bugs going unnoticed can point to weaknesses in the tests.
Make sure to address the three questions in the given order. Too often I see people jump straight to "Why did the testers not find this?", which can lead to discord because the team hasn't yet determined whether the problem is actually a bug in the software. For our own product, we have defined fields for the three questions in the ALM system that we use, to encourage teams to answer them and to keep the answers readily available to learn from.
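As a minimal sketch of what such a record could look like, assuming a simple custom-field setup rather than our actual ALM schema, the three questions might be captured as explicit fields; the names and example values below are hypothetical.

```python
# Hypothetical sketch of the three analysis fields on a defect record;
# field names and values are illustrative, not an actual ALM schema.
from dataclasses import dataclass

@dataclass
class DefectAnalysis:
    is_it_a_bug: bool    # question 1: is the reported problem a defect?
    root_cause: str      # question 2: the real problem, not just the symptom
    why_not_found: str   # question 3: which gap in the tests let it slip?

report = DefectAnalysis(
    is_it_a_bug=True,
    root_cause="session token expires during a long-running import",
    why_not_found="existing tests only covered short sessions",
)
print(report)
```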
Once question two has been answered, for question three I typically like to know whether the defect stems from a mistake a developer made when implementing otherwise well-defined functionality (I call these "coding bugs") or whether it showed up due to an unexpected situation (I call these "jungle bugs"). Unfortunately, the latter are more common.
It is in catching jungle bugs early that I think testers can really shine. Coding problems are often already caught in unit tests, but only when a system enters the real world (the "jungle") does its true resilience show.