Use Crowdsourcing as a Shortcut on the UI Test Automation Journey


If you run a web or mobile application with a human-facing UI, you will want to conduct end-to-end tests through the UI. In the sepia-toned days of big semiannual releases, a dedicated manual QA team could perform all end-to-end testing, but we don't have that time in the brave new world of agile.

Considering humans are too slow, let's automate all the things!

UI test automation often starts with using Selenium IDE in record/replay mode. The replay can be faster than human testing speed, and the tests can run 24/7, but the test cases are notoriously hard to maintain. The smallest UI change forces the tester to rerecord the test sequence from scratch. New players in the space of record-and-playback tools promise increased reliability via fuzzy matching of UI elements, but it's early days.

The next step is coding the test cases in a bona fide programming language with Selenium WebDriver bindings. A custom-coded layer of indirection known as the Page Object model controls the mapping from user actions to UI implementation, insulating the test cases from minor changes in the app. However, your test engineers now need solid software design and coding skills.
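The Page Object idea can be sketched in a few lines of Python. The `LoginPage` class and its locators below are hypothetical, and a stub stands in for a real Selenium WebDriver so the sketch is self-contained:

```python
class StubDriver:
    """Stands in for a Selenium WebDriver; records interactions."""
    def __init__(self):
        self.log = []

    def find_element(self, by, value):
        return StubElement(self, f"{by}={value}")


class StubElement:
    def __init__(self, driver, locator):
        self.driver, self.locator = driver, locator

    def send_keys(self, text):
        self.driver.log.append(("type", self.locator, text))

    def click(self):
        self.driver.log.append(("click", self.locator))


class LoginPage:
    """Maps user actions to UI locators in one place.

    When the UI changes, only these locators change --
    the test cases that call log_in() stay untouched.
    """
    USERNAME = ("id", "username")
    PASSWORD = ("id", "password")
    SUBMIT = ("css selector", "button[type=submit]")

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()


# The test case reads at the level of user intent, not page structure:
driver = StubDriver()
LoginPage(driver).log_in("daria", "s3cret")
```

With real Selenium you would pass a `selenium.webdriver` instance instead of `StubDriver`; the point is that the page's HTML details live in exactly one class.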

Automated UI tests are famously flaky, as they have a hard time telling when it's okay to interact with the app. In record/replay tests you'll find "sleep" instructions sprinkled between the test steps; longer waits can make the test less flaky, but, well, longer.

The WebDriver-based tests do better: The test code can wait for specific elements to load before interacting with them, or listen for "ready" events from the app. But your front-end engineers' cooperation is required to add the right flags or events to the UI. And your tests get complicated and much less readable than manual test specifications used to be.
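The explicit-wait idea behind Selenium's `WebDriverWait` can be sketched with a stdlib-only polling helper (the helper and its defaults are illustrative, not Selenium's actual API):

```python
import time


def wait_until(condition, timeout=10.0, poll=0.5):
    """Poll `condition` until it returns a truthy value or the timeout
    expires -- the same idea as Selenium's WebDriverWait.until()."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll)
    raise TimeoutError(f"condition not met within {timeout:.1f}s")


# A fixed sleep always burns the full interval:
#     time.sleep(10); element.click()
# A polling wait returns as soon as the app is ready:
app_ready = iter([False, False, True])
wait_until(lambda: next(app_ready), timeout=5, poll=0.01)
```

A fixed sleep must be sized for the slowest run ever observed; the polling wait pays only for the time the app actually needs, which is why explicit waits make tests both faster and less flaky.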

More levels of indirection to the rescue! Let's use a behavior-driven development framework like Cucumber to write test steps in English (standardized to Gherkin format) on top of Selenium test code.
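A toy sketch of the indirection a BDD framework provides: plain-English steps are matched by patterns and dispatched to step functions, much as Cucumber (or its Python cousin, behave) does. The feature text and step patterns here are made up for illustration:

```python
import re

# Registry of (pattern, implementation) pairs, populated by @step.
STEPS = []


def step(pattern):
    """Register a step implementation under a Gherkin-style pattern."""
    def register(func):
        STEPS.append((re.compile(pattern), func))
        return func
    return register


def run_step(line, context):
    """Find the step whose pattern matches `line` and invoke it."""
    for pattern, func in STEPS:
        match = pattern.fullmatch(line)
        if match:
            return func(context, *match.groups())
    raise LookupError(f"no step matches: {line}")


@step(r'Given the user "(\w+)" is on the login page')
def given_login_page(context, user):
    context["user"] = user


@step(r'When they log in with password "(\w+)"')
def when_log_in(context, password):
    context["logged_in"] = context["user"] is not None


@step(r"Then they see their dashboard")
def then_dashboard(context):
    assert context["logged_in"]


# The feature file stays readable English; only the step
# definitions above know about the underlying test code.
feature = [
    'Given the user "daria" is on the login page',
    'When they log in with password "s3cret"',
    "Then they see their dashboard",
]
context = {}
for line in feature:
    run_step(line, context)
```

In a real Cucumber setup the step implementations would drive Selenium WebDriver calls, so you get readable specifications on top of the same brittle UI plumbing, which is exactly the trade-off the next paragraph describes.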

All this complexity stems from tying the tests to the application via the UI page source, effectively turning the UI into an API. We might wish for a solution that ignores the alphabet soup of HTML/CSS/XPath and simply accesses the UI like a human does, but machine learning is nowhere near that level yet.

How about going back to humans?

My company automates end-to-end tests with crowdsourcing as the execution engine. Our quality engineers define scripted test cases in plain English. When our CI system kicks off a test build, the tests run in real time through a crowdsourcing provider that prescreens the crowd for testing skills. The platform uses machine learning to arrive at trustworthy pass/fail decisions.

We have found our tests to be at least as reliable as good Selenium tests, faster to write, easier to maintain as the app evolves, and quicker to debug the failures. Crowdsourcing is a shortcut to end-to-end testing that may work for you, too.

Daria Mehra is presenting the session What to Do—Develop Your Own Automation or Use Crowdsourced Testing? at STARWEST 2017, October 1–6 in Anaheim, California.
