What We Talk about When We Talk about Test Automation
The term test automation was introduced relatively recently in the history of software development. If I had to guess who coined and popularized this term, tool vendors would be at the top of the list. The term means everything and nothing at all.
My observation has been that when testers talk about test automation, they usually mean using an API like WebDriver to mimic what a user might do in a web browser and then making a few assertions. When developers talk about automation, they are probably talking about unit testing or something at the service layer. And operations people are most likely thinking of Bash scripting, monitoring, and the guts that control continuous integration tooling.
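To make the first of those concrete, here is a minimal sketch of that tester-style automation using Selenium's Python bindings: drive the browser, mimic a user, make a few assertions. The URL and element IDs are made up for illustration, not taken from any real application.

```python
# A minimal sketch of tester-style "test automation": drive the browser
# with WebDriver, mimic a user, then make a few assertions.
# The URL and element IDs here are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")

    # Mimic what a user would do: type credentials and submit the form.
    driver.find_element(By.ID, "username").send_keys("tester")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()

    # Then make a few assertions about what the user should see.
    assert "Dashboard" in driver.title
    assert driver.find_element(By.ID, "welcome-message").is_displayed()
finally:
    driver.quit()
```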
This is pretty normal. Everyone on a technical team is working from a personal bias, but it does make communication challenging.
Does any of this matter?
Not all that much. The problem of the term test automation can easily be solved. When someone says they are working on test automation, simply ask to hear more about that work. Are they working on services, the user interface, continuous integration, or Bash scripts? I worry much more about the practice of test automation.
Testers asking about test automation often seem to be referring to browser automation. That assumption completely ignores the fact that there is a whole world of software that isn’t based on a three-tier (database, server, client) architecture. But more importantly, it ignores context and leads to people making the same mistakes over and over again.
Many UI automation implementations I see have trouble knowing when a page is ready, or even locating elements, which leads to tests failing when there are no software bugs. We politely say the tests are flaky, but the reality is that they are poorly designed.
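One common source of that flakiness is relying on a fixed sleep, or on an immediate lookup, instead of waiting for the page to reach an observable state. A small sketch of the explicit-wait approach Selenium provides, reusing the driver from the example above and again with a hypothetical element ID:

```python
# A fixed sleep is the classic "poorly designed" fix: slow and unreliable.
#
#   time.sleep(5)
#   driver.find_element(By.ID, "results").click()
#
# An explicit wait ties the test to a condition instead of a guess.
# The element ID is hypothetical.
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, timeout=10)
results = wait.until(EC.visibility_of_element_located((By.ID, "results")))
results.click()
```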
In other cases, the software being tested changes so rapidly that a significant part of the automation effort is forced to deal with maintenance. This is a mismatch between context and expectations. The test team, and probably the development leadership, wants something from the tool that it just can’t offer. Software has to be relatively stable, or tests have to be simple and probably shallow, for UI automation to work. Software in flux makes tests break.
These problems don’t happen because UI automation, or even test automation in the general sense, is bad or difficult to make work. They happen because most testers have a poor understanding of the practice right now. We take people with little to no technical background, stick them in a role where they need to understand software development to be productive, and expect it to just work.
All the different types of test automation are wonderfully useful, and there is much more out there than the user interface. My feeling is that spending time developing these practices and letting that information trickle through the test world will be more fruitful than debating terminology. Developing the practices and making them common knowledge will help us clarify and refine the language.