Testing Uncertainty: Strategies for Testing a Chatbot

Uncertainty has always been a key challenge for testers, whether it stems from ambiguous requirements or from unstable test environments. But testing a chatbot adds a whole new level of uncertainty to a tester’s life.

There are plenty of platforms and tools available for chatbot development, but what we lack is a standardized chatbot testing strategy. Testing a chatbot differs greatly from “traditional” testing of an app or web portal because of the apparent randomness of a conversation with a bot.

When testing numerous clients’ chatbots as well as our own, my team found it impossible to anticipate and cover every situation that can arise during a conversation with a chatbot.

As we introduced learning components to our chatbot, including AI, machine learning, and intent training, the chatbot evolved and changed its behavior compared to previous test runs. This increased the need for regression tests and complicated them at the same time.
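
One practical way to keep regression tests manageable as the model evolves is to re-run a saved set of utterances after every retraining and flag any drift in the recognized intents. Below is a minimal sketch of that idea; the `get_bot_intent` call and the `golden_intents.json` baseline file are hypothetical placeholders for whatever your chatbot platform actually exposes.

```python
import json


def get_bot_intent(utterance: str) -> str:
    """Placeholder for your chatbot platform's intent-classification call."""
    raise NotImplementedError("wire this up to your bot's API")


def test_intents_against_baseline():
    """Re-run saved utterances after each model update and flag intent drift."""
    with open("golden_intents.json") as f:
        baseline = json.load(f)  # e.g. {"where is my order?": "track_order", ...}

    drifted = []
    for utterance, expected in baseline.items():
        actual = get_bot_intent(utterance)
        if actual != expected:
            drifted.append((utterance, expected, actual))

    # Fail the regression run if any known utterance no longer maps to its expected intent.
    assert not drifted, f"Intent drift detected: {drifted}"
```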

There is no limit on user input: anyone can type anything to a chatbot, so functionality, security, performance, and exception handling all need to be robust. Chatting with a bot ourselves, we learned the importance of real-time feedback for collecting data about unexpected behavior and invalid responses.
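
Robustness against arbitrary input can be checked with a simple fuzz-style test that sends empty, oversized, and hostile messages and asserts the bot always returns a graceful fallback rather than an error. This is a sketch only; `send_to_bot` is a hypothetical wrapper for your bot’s real endpoint or SDK.

```python
import random
import string


def send_to_bot(message: str) -> dict:
    """Placeholder for whatever HTTP or SDK call your chatbot exposes."""
    raise NotImplementedError("wire this up to your bot's API")


MALFORMED_INPUTS = [
    "",                                   # empty message
    " " * 500,                            # whitespace only
    "🙂" * 100,                           # emoji flood
    "<script>alert('x')</script>",        # markup injection attempt
    "'; DROP TABLE users; --",            # SQL-style injection attempt
    "".join(random.choices(string.printable, k=2000)),  # long random noise
]


def test_bot_handles_bad_input_gracefully():
    """The bot should always return a well-formed fallback reply, never an error dump."""
    for payload in MALFORMED_INPUTS:
        reply = send_to_bot(payload)
        assert reply and reply.get("text"), f"No graceful reply for {payload!r}"
        assert "traceback" not in reply["text"].lower()
```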

Some of the key parameters for testing a chatbot are the conversational flow and the natural language processing model, as well as intelligence, onboarding, personality, security, navigation, error management, speed, and accuracy of the given answers.

To better understand how to work with chatbots, testers have to apply critical thinking to deal with the uncertainty in their test objects.

Here are some techniques testers can use when dealing with a mature chatbot:

  • Advanced automation framework: An automation framework is crucial for testing end-to-end conversational flow, natural language understanding, and self-improvement (a test sketch follows this list)
  • Domain-specific testing: Focusing tests on the specific products and services the chatbot supports, and on their main business- and consumer-level benefits, reduces the scope of testing
  • Real-time monitoring and KPIs: Key performance indicators (KPIs) for chatbot performance differ from those of traditional apps and should measure aspects like goal completion rate, self-service rate, AI and machine learning rate, and fallback rate
  • Advanced security framework: The security framework should include things like end-to-end encryption, two-factor authentication, user authentication, intent authorization, channel authentication, compliance validations, authentication timeout, and self-destructing messages
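
To make the automation and KPI points above concrete, here is a minimal sketch of a scripted conversational flow test and a fallback-rate calculation. The `ChatSession` client, the intent names, and the transcript format are all illustrative assumptions, not the API of any particular platform.

```python
class ChatSession:
    """Placeholder for the client your chatbot platform provides."""

    def send(self, text: str) -> dict:
        raise NotImplementedError("wire this up to your bot's API")


def test_order_tracking_flow():
    """Walk a scripted happy path and assert the bot stays on the expected intents."""
    session = ChatSession()
    assert session.send("Hi")["intent"] == "greeting"
    assert session.send("Where is my order?")["intent"] == "track_order"
    assert session.send("It's order 12345")["intent"] == "order_status"


def fallback_rate(transcript: list) -> float:
    """KPI: share of bot turns that fell back to an 'I didn't understand' response."""
    bot_turns = [turn for turn in transcript if turn["speaker"] == "bot"]
    fallbacks = [turn for turn in bot_turns if turn["intent"] == "fallback"]
    return len(fallbacks) / len(bot_turns) if bot_turns else 0.0
```

Running flow tests like this against each release, and tracking the fallback rate over real conversations, gives an early signal when retraining has degraded the bot’s understanding.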

Although testing new technology and applications is exciting, seeing strategies and tools that have worked time and again fail when applied to chatbots can be frustrating, even for well-seasoned testers. Taking these new and updated strategies into account will help testers test a chatbot more successfully.

Rajni Singh is presenting the session Testing Uncertainty—and a Chatbot Named Ginger at STARCANADA 2019, October 20–25 in Toronto, Ontario.
