Accessibility Testing: Should You Use a Manual or Automated Approach?

Accessibility is not a matter of chance; it is a matter of choice. Developing an accessible website takes effort and intention, and a thorough evaluation of a product's accessibility is essential.

A product’s quality goals around accessibility can be met through two approaches: manual evaluation and automated evaluation. Manual evaluation is typically carried out by an accessibility testing expert who knows the required accessibility standards and the barriers users face. This approach helps provide end-to-end testing coverage of a website.

In addition to such test experts, bringing in real users to work with the core team can streamline the test effort. Assistive tools and technologies such as screen readers and magnifiers help them interact with the application in a realistic setup, uncovering challenges in real time. This paired team can also make a concerted effort to ensure accessibility mandates are met and that regular audits are conducted to keep the product compliant with the required guidelines and standards.

The downside of this approach continues to be the lack of trained testers. Accessibility is still not seen as an exciting space to build one’s career in, compared to other areas such as functional, performance, or automation testing.

The other downside is the time it takes to validate an application manually, especially given that real visually impaired users may take a long time to explore the application in its entirety. In a development environment where time is often crunched, this can be a deterrent to achieving the required level of test coverage.

Automated tools help address these challenges, significantly reducing the time and effort of the evaluation process. Such tools are available in abundance, both open source and paid, and offer quick product evaluation while supporting continuous monitoring of an application’s accessibility commitments.
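One common pattern is to wire an automated scanner into the test suite so every build gets a quick accessibility pass. The sketch below assumes a Playwright test setup with the @axe-core/playwright package and a placeholder URL; the exact tool and configuration will vary by project.

```ts
// Minimal sketch: run an axe-core scan against a page inside a Playwright test.
// Assumes the @playwright/test and @axe-core/playwright packages are installed.
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('home page has no detectable accessibility violations', async ({ page }) => {
  await page.goto('https://example.com/'); // placeholder URL

  // Scan the rendered page against WCAG 2.0/2.1 A and AA rules.
  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa'])
    .analyze();

  // Fail the build if the scanner reports any violations.
  expect(results.violations).toEqual([]);
});
```

Running a check like this on every commit provides the continuous monitoring described above, though the results still need a human to interpret.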

However, no single tool can fully cover all accessibility requirements, because tools run on predefined rules rather than analytical thinking. While they largely aid the testing process, they are still not foolproof: they can churn out false positives that need additional manual evaluation. And in terms of coverage, their intelligence provides only limited visibility. For example, a tool can raise a flag if a graphic image has no alternative text, but it will not raise a flag about incorrect alternative text (say, where a tiger has been tagged as an elephant).
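To make that limitation concrete, here is a minimal sketch of the sort of rule-based check such tools perform, written in TypeScript as an illustration rather than any specific tool's implementation. It can detect that an image lacks an alt attribute entirely, but it has no way to judge whether existing alt text actually describes the image.

```ts
// Minimal sketch of a rule-based alt-text check, for illustration only.
// It finds <img> elements with no alt attribute, but it cannot tell whether
// existing alt text is accurate (e.g., a tiger captioned as an elephant).
function findImagesWithoutAlt(root: ParentNode): HTMLImageElement[] {
  return Array.from(root.querySelectorAll<HTMLImageElement>('img:not([alt])'));
}

// Example usage in a browser console or test harness:
// findImagesWithoutAlt(document).forEach((img) =>
//   console.warn('Missing alternative text:', img.src)
// );
```

Judging whether the text is *correct* still requires a person looking at the image, which is exactly the gap manual evaluation fills.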

Both manual and automated approaches have their strengths and weaknesses. Automated tools are valuable aids in supplementing the test process, especially where time is an issue, but they can’t replace manual testing entirely. Manual evaluation continues to dominate the world of accessibility testing in terms of coverage, accuracy, and a realistic touch.
