A Case for UI Automation in Software Testing

When testers talk about automation projects, especially in the user interface, they tend to describe cautionary tales and the many ways these projects can fail to meet expectations. However, there are contexts where automating the user interface works well and, more importantly, helps the development team.

For the past year and a half, I have been working on a project helping a customer develop and run high-volume automation in the user interface. We have a couple hundred tests that run every night, get checked and analyzed every morning, and see active development every day. Things are working out great.

The best word to describe the product I work on every day is established. This software has been around for a long time and is built on technology most people would consider old: SQL stored procedures, CSS, and some JavaScript. The release cycle is anywhere from three to six months, and there are a few customer branches. The user interface changes regularly to accommodate new features and workflows, but that rarely changes the way existing functionality works. This isn’t the sort of company that worries about keeping up with the latest JavaScript or Ruby library.

The software has high failure demand, or work introduced because something went wrong. Lingering old architecture sometimes means that new changes will introduce problems in ways we can’t anticipate.

The automation project I work on covers a select set of functionality. When the question of a new test comes up, I have a conversation with a couple of developers to talk about how the customers use the product and what specifically needs to be covered. The result is a set of tests that are generally small: each runs in a couple of minutes, with only a few running end-to-end scenarios. We also regularly "trim the tree" by removing tests that are no longer useful and updating others to cover new development.
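
The article doesn't name the automation tool, so as a rough illustration of what "small" means here, this is a minimal sketch assuming Python with Selenium and pytest; the URL, element locators, and workflow are all hypothetical.

```python
# A minimal sketch of one "small" UI test: one customer workflow,
# one visible outcome, done in a couple of minutes.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

BASE_URL = "https://test-env.example.com"  # hypothetical test environment


@pytest.fixture
def browser():
    driver = webdriver.Chrome()
    yield driver
    driver.quit()


def test_create_work_order_shows_confirmation(browser):
    """Covers one workflow the customers actually use, nothing more."""
    browser.get(f"{BASE_URL}/work-orders/new")
    browser.find_element(By.ID, "description").send_keys("Replace filter")
    browser.find_element(By.ID, "save").click()

    # Assert on a single customer-visible result, then stop; keeping the
    # scope this narrow is what keeps the run time short.
    confirmation = WebDriverWait(browser, 10).until(
        EC.visibility_of_element_located((By.CSS_SELECTOR, ".confirmation"))
    )
    assert "Work order created" in confirmation.text
```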

The tests run every night, currently against three different environments, and the system sends an email listing the names of the tests that failed. After I check the email, I begin an investigation based on the logs. Some tests still fail for reasons other than a bug or a software change that breaks the test, but these failures are minimal and don't hide the value of the project. More importantly, the tests find bugs that the customer would care about.
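
To make the reporting step concrete, here is a sketch of the kind of summary email described above, using Python's standard library; the SMTP host, addresses, and environment names are hypothetical, and the real system may be built quite differently.

```python
# A rough sketch of the nightly reporting step, assuming the runner has
# already collected a list of failed test names per environment.
import smtplib
from email.message import EmailMessage


def send_failure_summary(failures_by_env: dict[str, list[str]]) -> None:
    lines = []
    for env, failed_tests in failures_by_env.items():
        lines.append(f"{env}: {len(failed_tests)} failed")
        lines.extend(f"  - {name}" for name in failed_tests)

    msg = EmailMessage()
    msg["Subject"] = "Nightly UI automation results"
    msg["From"] = "automation@example.com"   # hypothetical addresses
    msg["To"] = "team@example.com"
    msg.set_content("\n".join(lines) or "All tests passed.")

    # The morning routine starts from this list of names and then moves
    # to the detailed logs for anything that failed.
    with smtplib.SMTP("mail.example.com") as smtp:
        smtp.send_message(msg)
```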

The tests run on a schedule rather than as part of continuous integration right now. This isn't ideal, but it is what works best with the automation tool we are using.
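
A scheduled setup like this can be as simple as a driver script started each night by cron or Windows Task Scheduler instead of a CI trigger. The sketch below assumes pytest with the pytest-base-url plugin and hypothetical environment URLs; it is an illustration of the shape of the thing, not the actual setup.

```python
# A sketch of a nightly driver, meant to be started by a scheduler
# (cron, Task Scheduler, etc.) rather than a CI pipeline.
import subprocess

# Hypothetical environments; the article mentions three but doesn't name them.
ENVIRONMENTS = {
    "env-a": "https://env-a.example.com",
    "env-b": "https://env-b.example.com",
    "env-c": "https://env-c.example.com",
}


def run_nightly() -> dict[str, int]:
    results = {}
    for name, url in ENVIRONMENTS.items():
        # Each environment gets its own run and its own report file so the
        # morning investigation can go straight to the failures.
        completed = subprocess.run(
            ["pytest", "tests/", "--base-url", url,
             f"--junitxml=reports/{name}.xml"],
            capture_output=True,
        )
        results[name] = completed.returncode
    return results


if __name__ == "__main__":
    run_nightly()
```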

In the wrong context, automating a user interface is a frustrating letdown. But in the right context, this strategy provides a safety net so developers can make changes and worry a little less that there are hidden problems.
