Testing the Ethics of AI

Artificial intelligence is becoming more and more widespread in the software world. The potential is mind-boggling: AI is now where digitization was a decade ago, touching and impacting every domain. The technology that started in niche areas, chiefly entertainment, has extended by leaps and bounds.

But before you think this is an ideal scenario with endless possibilities for reach and benefit, consider that AI is a double-edged sword. When AI is being used in situations involving sensitive personal data, such as health care, banking and finance, and real estate, security is of the utmost importance—and so are ethical implications.

How can testers play a role in mitigating risks?

Testers understand that automation is only as smart as we design it to be. The same holds true for AI and related practices such as machine learning, image recognition, and natural language processing. Plenty of sci-fi films showcase what can go wrong when undue power is given to robots, and AI is often seen as a core piece in humanizing technology. It's up to testers to make sure AI is used responsibly, especially considering we're already at a point where people are discussing whether AI will become powerful enough to be worshipped.

Such close monitoring is necessary not just because of the potential AI holds, but also to keep adverse effects in check. Just as ethical hacking has gained popularity, ethical AI solutions are on the rise. At a basic level, these solutions focus on ensuring the technology is not bent toward immoral ends by societal, environmental, or political influences, and on limiting the power given to the AI algorithms we engineer.
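To make this concrete, one such limiting check could be a fairness assertion that runs alongside a model's functional tests. The sketch below uses demographic parity, a standard fairness measure; the approval framing, the sample predictions, and the 0.1 threshold are illustrative assumptions, not anything prescribed here.

```python
# A minimal fairness check, assuming a binary classifier whose positive
# predictions (1 = approve) should not skew heavily between two groups.
# The data and the 0.1 threshold are illustrative assumptions.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between group 0 and group 1."""
    rate_0 = y_pred[group == 0].mean()
    rate_1 = y_pred[group == 1].mean()
    return abs(rate_0 - rate_1)

def test_model_demographic_parity():
    # Hypothetical predictions for eight applicants, four per group.
    y_pred = np.array([1, 0, 1, 1, 1, 0, 1, 1])
    group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
    assert demographic_parity_difference(y_pred, group) <= 0.1
```

A check like this can gate a release the same way a functional regression would, which is one practical way to keep an algorithm's power within agreed bounds.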

Testers have a critical role here in vetting solutions, starting from the ideation stages. Areas that have traditionally been reserved for human intervention due to high cognitive involvement, such as accessibility engineering, are now within the technology's reach. A lot of learning, and unlearning, is needed for testers to bring new perspectives on how solutions will impact stakeholders and users.

Above all, investing in quality data feeds that cover positive, negative, null, and boundary values will make all the difference in the test effort and its outcomes. Testers will need to employ live monitoring, connect current solutions with new ones, and consider the implications third-party integrations will have for an app. Ethical negative tests may also need to be taken up in the right doses to maintain checks and balances on the adverse potential of AI systems.
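As a sketch of what such a data feed might look like in a test suite, the parametrized test below exercises positive, negative, null, and boundary values; the validate_age function and its 0-to-120 range are hypothetical stand-ins for a real validation rule.

```python
# A sketch of a quality data feed in pytest: positive, negative, null,
# and boundary values for a hypothetical validate_age rule (0 to 120).
import pytest

def validate_age(age):
    """Accept only integer ages within a plausible human range."""
    return isinstance(age, int) and 0 <= age <= 120

@pytest.mark.parametrize("age, expected", [
    (35, True),     # positive: a typical valid value
    (-1, False),    # negative: below the valid range
    (None, False),  # null: missing data must be rejected, not guessed at
    (0, True),      # boundary: lowest valid value
    (120, True),    # boundary: highest valid value
    (121, False),   # boundary: just past the highest valid value
])
def test_age_data_feed(age, expected):
    assert validate_age(age) == expected
```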

Interestingly, we can also consider leveraging AI itself in the test effort. Testers should automate tests as much as possible, making them smart and lean and enabling automation in areas that haven't been possible until now, leaving more time for strategic manual testing.
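As one hedged example of AI assisting the test effort, the sketch below trains scikit-learn's IsolationForest on response times from healthy test runs and flags new runs that drift from the norm; the timing data and contamination setting are made up for illustration.

```python
# A sketch of AI assisting the test effort: an IsolationForest learns
# normal response times from healthy runs and flags outliers in new runs.
# The timings are invented; IsolationForest is scikit-learn's estimator.
import numpy as np
from sklearn.ensemble import IsolationForest

# Response times (ms) observed across known-good test runs.
baseline = np.array([[102], [98], [105], [99], [101], [97], [103], [100]])
detector = IsolationForest(contamination=0.1, random_state=0).fit(baseline)

# predict() returns -1 for anomalies and 1 for normal observations.
new_runs = np.array([[101], [350], [99]])
for ms, flag in zip(new_runs.ravel(), detector.predict(new_runs)):
    print(f"{ms} ms -> {'ANOMALY' if flag == -1 else 'ok'}")
```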

As AI continues to be used in more and more cases, everyone in the software industry has a collective responsibility to build responsible and ethical AI systems. Testers have a huge role in this process, so start thinking about the implications.
