Questions to Ask during Test Selection for Automated Tests
It is impossible to test anything completely. No matter what I do, there is always one more test idea to try or one more question to ask.
We use test design techniques such as domain testing and risk-based testing to make this problem more approachable. Each technique helps us to better answer the questions “What do I need to test?” and “What tests should I perform?”
This same problem exists in test automation, except that choosing poorly creates slower builds and unreliable information about product quality. Here are some guidelines I like to use for test selection when I create automated tests.
My current day-to-day work is with a company that does 100 percent pairing. Any code change has at least two developers working on it, and usually two developers plus a test specialist. When a group is done with a code change, it is ready to go to production; there are no handoffs. We rely heavily on test automation to make this flow possible.
We work at several layers: unit tests against controllers and views, service tests against an API, and BDD-style tests against a user interface. For each line of code, we have to ask some important questions: What do I need to test here, and why? What is the most appropriate place in the testing stack to test this change? Is this test powerful, performant, and useful? And, of course, how many tests are enough?
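To make the layering concrete, here is a minimal sketch of the fastest layer: a unit test against a view helper. The helper and its display rule are hypothetical, invented purely for illustration, and pytest is assumed as the runner; the service and UI layers answer the same kinds of questions at a higher cost per test.

```python
# Hypothetical view helper; the function name and formatting rule
# are illustrative only, not from a real codebase.
def format_balance(cents: int) -> str:
    """Render an account balance in dollars for display."""
    return f"${cents / 100:,.2f}"


# Unit tests: run in milliseconds and pin down one display rule each.
def test_format_balance_renders_dollars_and_cents():
    assert format_balance(123456) == "$1,234.56"


def test_format_balance_handles_zero():
    assert format_balance(0) == "$0.00"
```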
Let’s say we are adding a button to download a PDF of a financial form. Right off the bat, a couple of questions come to mind: Does the button display when it should? Is the button hidden or disabled when it should not be usable? Below the user interface, I want to know whether a file can be downloaded, whether the content is correct, and whether the name is correct.
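Here is a hedged sketch of how those below-the-UI questions might turn into one service test. It assumes a Flask-style API; the route, the filename convention, and the inline stand-in implementation are all assumptions made so the example is self-contained.

```python
# A sketch of the service layer, assuming Flask; every name here
# (route, filename pattern, handler) is hypothetical.
import flask

app = flask.Flask(__name__)


@app.get("/api/forms/<form_id>/pdf")
def download_form_pdf(form_id):
    # Stand-in implementation so the sketch runs on its own.
    response = flask.make_response(b"%PDF-1.7 example content")
    response.headers["Content-Type"] = "application/pdf"
    response.headers["Content-Disposition"] = f"attachment; filename=form-{form_id}.pdf"
    return response


def test_pdf_download_below_the_ui():
    client = app.test_client()
    response = client.get("/api/forms/1040/pdf")
    # Can a file be downloaded?
    assert response.status_code == 200
    assert response.headers["Content-Type"] == "application/pdf"
    # Is the name correct?
    assert "filename=form-1040.pdf" in response.headers["Content-Disposition"]
    # Is the content correct? (Here, just that it looks like a PDF.)
    assert response.data.startswith(b"%PDF")
```

The button-display questions would live one layer up, in view tests or a quick BDD-style check against the user interface.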
These tests are necessarily simple. In general, I want test automation to be very fast so that we get feedback after a change as quickly as possible. Each test is specifically designed to answer the question “Are we done yet?”
You are probably thinking that sounds like shallow testing. Yes, it is, and that’s okay. These tests are designed to help us know when we are done, to be a safety net for refactoring, and to alert us in seconds that something has gone wrong with a change. If you have more complicated and nuanced testing to do—and I hope you do—it should happen through exploration at various points in the development flow.
Test selection is just as important during automation as it is during exploration—maybe even more important. A bad test performed in exploration can be done in seconds and we can move on to something more important, but a bad automated test lives on in the build and in our source code repository.
If you’re wondering what to automate, ask whether the test helps drive development forward.