Where to Begin with Test Automation
My introduction to test automation was a manager handing us a UI automation tool. The company I was working for at the time had hired an outsourcing firm to build a Ruby API around that tool, which we were supposed to use to drive our product in Internet Explorer.
That project was a failure. While we automated test case after test case, the nightly test suite runs were failing at a rate of 60 percent or higher. No one trusted the test results, and soon, developers didn’t even bother to open the test report emails.
We had made a bad choice about where to start automating.
The user interface for many products is constantly in flux. The last company I worked for full-time was building a platform for advertisers. Each week, the development team added new pages to the UI that affected how other pages worked. A little less often, we took feedback from users to improve the usability and flow. And, every once in a while, the team would play “chase the trend” with JavaScript libraries and jump from Bootstrap to Angular and then to React.
Our manager wanted some UI automation. I built a few small tests that ran consistently for about a week before needing to be refactored to cope with the changing user interface. Luckily, our manager got the point, and we didn’t build much past a small smoke test.
The UI wasn’t a good place for us, but we did have a REST API.
We ran another experiment, where I worked closely with a back-end developer. While she was writing code, I was stubbing automated tests in a JavaScript library. Each time I got to an assertion, I’d ask questions about data types, data formats, characters that should be allowed, and even user experience. These questions would guide the developer to make better choices about how to implement an API change, and that would help me write more relevant tests. By the time the feature was ready to check in, we had some test automation and the change had been exploratory tested.
Responsibilities changed once the tests were live and running in our continuous integration system. Our development team was invested in the API tests, which ran several times a day and provided important information. When a test failed, the developer whose change broke the code or the test would fix it.
Compare that to the normal UI automation flow. When a test breaks, someone on the test team is assigned to discover whether the failure was caused by a bug in the software, a bug in the test code, or a legitimate UI change. Next, they document the bug and wait for it to be prioritized and fixed. The test failure is completely detached from the development of new code.
If you are wondering where to start automating, the answer is usually as close to the code as you can possibly get. The farther you get from the code, the more you expose yourself to issues from change. How much of your UI automation project could be done more effectively—and faster—at the API layer or lower?