Testing Next-Generation Digital Interfaces
Many organizations recognize the correlation between achieving business goals and adopting a digital strategy. Leading that transformation are new user interfaces that streamline and enrich the user experience. But these interfaces also challenge the development team, which must extend test coverage within a tight sprint schedule.
New Digital Interfaces Are Revolutionizing User Relations
When the Royal Bank of Scotland first introduced Touch ID in its mobile app, the motivation was likely streamlined flows and an increased sense of security. The favorable response from end-users was just a welcome bonus.
This new authentication flow further streamlined transactions such as in-app purchases, driving accelerated revenue streams.
Another example of next-generation technology is the introduction of conversational interfaces in mobile apps, web apps, and dedicated devices such as the Amazon Echo with its Alexa assistant. The proliferation of such interfaces is staggering: In a HubSpot survey, 74 percent of consumers said they had used voice search in the past month, and daily use went up 27 percent over six months in 2016. In a PwC survey, 27 percent of respondents said they weren’t sure if their last customer service interaction was with a human or a chatbot.
Digital Interfaces Require a Digital Test Solution
Since the fall of 2016, Bank of America has had a team of more than a hundred people dedicated to building Erica, a personal voice assistant inside its mobile app. “We realized that to do what we wanted, there would have to be a huge investment of time, energy, and resources to make it happen,” said digital banking executive Henry Agusti.
Indeed, such an achievement would involve multiple layers of technology and vendors, including cloud-based artificial intelligence and a natural language processing (NLP) solution.
High quality at the end of the sprint requires planning and implementing good coverage and repetitive testing. For these new interfaces, we need a lab that accommodates such testing and automation.
Taking the voice chatbot as an example, a lab should offer a complete audio-based environment to simulate the interaction, with tests in an ideal environment and a noisy one.
These tests should determine:
- The basic flow allows iterative entry and validation
- The entry accommodates text strings (to be converted to audio) or direct audio (to simulate the noisy environment, for example), and the validation accommodates fast native/visual checks or recording the output and converting it into text
- To support users of different skill levels, scripts can be built as a dictionary of entries and validations, possibly in a CSV file
- The flow can be executed as frequently as needed across different devices, enabling fast feedback to the developers
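The dictionary-of-entries approach above can be sketched as a small data-driven loop. This is a minimal illustration, not a real device lab: `send_utterance` is a hypothetical stand-in for the step that converts text to audio, plays it to the chatbot, and transcribes the response, and the CSV column names are assumptions.

```python
import csv
import io

def send_utterance(text):
    """Hypothetical stand-in for the real pipeline: text -> audio -> chatbot -> text.

    In a real lab this would drive text-to-speech, inject the audio (optionally
    mixed with background noise), and transcribe the device's spoken reply.
    """
    canned = {
        "what is my balance": "Your balance is $100.",
        "transfer five dollars": "Transferred $5.",
    }
    return canned.get(text.lower().strip(), "Sorry, I didn't catch that.")

def run_dictionary_tests(csv_text):
    """Execute each (entry, expected) pair from a CSV script and return pass/fail results."""
    results = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        actual = send_utterance(row["entry"])
        results.append((row["entry"], actual == row["expected"]))
    return results

# A script a non-programmer could maintain in a spreadsheet and export as CSV.
SCRIPT = """entry,expected
What is my balance,Your balance is $100.
Transfer five dollars,Transferred $5.
"""

if __name__ == "__main__":
    for entry, passed in run_dictionary_tests(SCRIPT):
        print(f"{'PASS' if passed else 'FAIL'}: {entry}")
```

Because the test data lives in a flat file rather than in code, less technical team members can add entries and expected validations without touching the harness.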
Continuing the voice chatbot example, we want to ensure reliable and frequent test executions so developers get feedback earlier. Tests should be prioritized into smoke tests or nightly regression tests. This is the agile testing methodology: new test cases are developed at the start of the sprint, validated, and added to the right cadence.
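One lightweight way to implement that prioritization is to tag each test case with the cadences it belongs to and assemble a suite per cadence. This is a sketch under assumptions; the test names and tags are invented for illustration.

```python
# Each test case declares which cadence(s) it belongs to: "smoke" runs on
# every build for fast feedback, "nightly" runs in the fuller regression pass.
TESTS = [
    {"name": "touch_id_login",            "tags": {"smoke"}},
    {"name": "chatbot_basic_flow",        "tags": {"smoke", "nightly"}},
    {"name": "chatbot_noisy_environment", "tags": {"nightly"}},
]

def suite_for(cadence):
    """Return the names of the tests scheduled for the given cadence."""
    return [t["name"] for t in TESTS if cadence in t["tags"]]

if __name__ == "__main__":
    print("smoke:  ", suite_for("smoke"))
    print("nightly:", suite_for("nightly"))
```

A new test case written during the sprint simply gets the appropriate tag once it is validated, and it joins that cadence on the next run.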
Brands are accelerating their digital innovation to meet business goals, and when these new interfaces are introduced, users respond favorably. The task of developing such interfaces that meet end-user expectations is not easy, but with the right approach and solution, it is certainly doable.
Amir Rozenberg is presenting the session Testing Digital Interfaces: Chatbots, Home Assistants, Touch ID, Facial Recognition, and More at the STAREAST 2018 conference, April 29–May 4 in Orlando, FL.