Scalability of Tests—A Matrix
Software testing has been around just about as long as software itself. Automation of testing has been around almost as long, and classic batch-oriented mainframe system automation was in fact quite easy to do.
We all know that timely testing can find bugs before they bite, which saves time and money. Repeating tests when systems change or have to work in different environments or configurations gets boring, in particular because the tests won't find many new bugs. However, it is a useful exercise to make sure the system still works as expected, and automating such repetition is an obvious way to go.
There are roughly three kinds of tests that teams use to verify functionality:
- unit tests that typically test individual functions or methods of classes, components, or services
- functional tests using prepared test cases that address the system, usually from a black box user perspective
- exploratory tests, using an interactive learning approach to explore a system, and thus find issues
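To make the first category concrete, here is a minimal unit-test sketch in Python (pytest style). The apply_discount function is a hypothetical stand-in for the code under test, not something from the article:

```python
import pytest

# Hypothetical function under test, used only for illustration.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# A unit test exercises this single function in isolation.
def test_apply_discount_normal_case():
    assert apply_discount(200.0, 25) == 150.0

def test_apply_discount_rejects_invalid_percent():
    with pytest.raises(ValueError):
        apply_discount(200.0, 150)
```

Because such tests touch only one small piece of code and need no UI or test environment, they run fast and scale well, which is exactly the point the matrix makes.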
This matrix summarizes how the different test kinds tend to relate:
What you can see is that unit tests are the king of scalability. Exploratory testing is on the other end of the spectrum. It takes an outside-in perspective, working with many parts of the system at once. Its primary purpose is not automation or repeatability, even though one could gradually derive more automation from exploratory tests, for example by recording them.
Functional testing is what many automation tools and strategies traditionally focus on. It typically addresses many parts of an application as they come together in the UI and the interaction. And just like with exploratory testing, it does so from a user perspective. However, scalability can be an issue because of the dependency on a UI and the sheer size functional test sets often reach over time.
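As an illustration of what such a UI-driven, black-box test can look like, here is a small sketch using Selenium WebDriver. The URL and element IDs are invented for the example:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Hypothetical URL and element IDs, chosen only for illustration.
driver = webdriver.Chrome()
try:
    driver.get("https://example.test/login")
    driver.find_element(By.ID, "username").send_keys("demo")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "login-button").click()
    # Black-box check from the user's perspective: did the login succeed?
    assert "Welcome" in driver.page_source
finally:
    driver.quit()
```

Every such test depends on the UI staying stable and on a full environment being available, which is where the scalability pain tends to come from once the test set grows.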
I have been studying the scalability challenges of large testing projects for many years, and I have written about it many times. It is also the topic of my tutorial at STARWEST. What I've found is that the scale of functional testing can be made more manageable by:
- test design, in particular a modular approach, as I have described in the Action Based Testing method
- a number of good practices, such as "don't use fixed waiting times to solve timing issues" (see the sketch after this list)
- and, maybe most importantly, good cooperation between all involved, with agile projects being a great environment for this
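As a sketch of that waiting-time practice, the following Python/Selenium fragment replaces a fixed pause with an explicit wait on the condition the test actually needs. The page URL and element ID are hypothetical:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    driver.get("https://example.test/orders")  # hypothetical page

    # Brittle: time.sleep(10) is either too short (flaky) or too long (slow).
    # Better: poll for the condition the test depends on, with an upper bound
    # on how long to keep trying.
    WebDriverWait(driver, timeout=30).until(
        EC.visibility_of_element_located((By.ID, "order-confirmation"))
    )
finally:
    driver.quit()
```

The test then waits exactly as long as needed and no longer, which keeps runs both stable and fast as the test set grows.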
Another strategy is to better organize the applications under test themselves into self-contained components, tiers, and services. Much of the testing can then focus on those parts directly, thus becoming more maintainable, much as unit tests tend to be.
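A minimal sketch of such component- or service-level testing, assuming a hypothetical REST endpoint and using the requests library to bypass the UI entirely:

```python
import requests

BASE_URL = "https://api.example.test"  # hypothetical service endpoint

def test_create_order_returns_confirmation():
    # Exercise the order service directly through its API, below the UI.
    response = requests.post(
        f"{BASE_URL}/orders",
        json={"item": "widget", "quantity": 2},
        timeout=10,
    )
    assert response.status_code == 201
    assert "order_id" in response.json()
```

Tests written against a well-isolated service like this are less affected by UI changes and can run earlier and more often than end-to-end functional tests.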
Such a strategy of splendid isolation in system design is taken furthest with a microservices architecture, which supports highly specialized small processes. These might even run in their own containers, which are isolated user spaces within an operating system, functioning as a lightweight form of virtualization. However, as pointed out in an article by Robert Annett last year, the complexity will still exist. And as we all know: with great complexity comes great testing.
Hans is leading two tutorials at STARWEST 2015.