Evaluating the IVA Experience, Part I: Human-Driven Evaluation

“What we observe is not nature itself, but nature exposed to our method of questioning.”

– Werner Heisenberg

We’ve said it before: Intelligent Virtual Assistants (IVAs) should be purpose-built to address your business goals. The most successful IVAs are powered by intelligent platforms that seamlessly integrate with complex systems of record and communicate with end users through a conversational interface – but if they’re truly intelligent, they should also be capable of adapting to change and learning over time as business goals and end-user preferences shift.

Even the most sophisticated IVAs, however, are neither autonomous nor autodidactic. They require direction. Like the humans who build them, A.I.-powered IVAs thrive under the conscientious tutelage of capable teachers. At Next IT, we employ a creative mix of human-directed and machine-driven evaluative processes to ensure that our IVAs constantly adapt to the dynamic conditions created by today’s users.

Here’s why it’s important to do both…

Think about any analysis of your organization’s performance. You’re more likely to arrive at actionable insights when you balance customer feedback with input from business stakeholders. Of course, business stakeholders cannot objectively evaluate the customer’s experience, and customers cannot be expected to base their evaluations on business goals. Without both perspectives, your insights are incomplete.

Stakeholders can, however, ask the following question to guide their evaluation process: “Given what I know about my business and my customers, did the IVA provide as much value as possible?” This question is effective because it distinguishes between experience issues the IVA can address and those outside of its control.

Consider a scenario in which an IVA user is frustrated because no tickets are available for their desired travel itinerary. Analysis of the IVA/customer conversation would likely show that the user is noticeably upset and dissatisfied. If our question is “Did we satisfy our end user?” then the answer, based on the conversation alone, is no.

If we evaluate this interaction by asking the question: “Given what I know about my business and my customers, did the IVA provide as much value as possible?” we’ll find that the IVA provided the best experience possible given the scenario, even if the end user did not achieve the initial desired outcome. In this case, the negative feedback provided by the end user is not a good representation of the performance of the IVA, as it doesn’t account for the fact that the tickets were sold out.

Furthermore, by conducting a multi-faceted analysis, you can uncover opportunities to satisfy customers even when you can’t give them exactly what they initially wanted. Imagine that, in the above scenario, the IVA not only informs the user that the tickets are sold out but also helps them find an alternative itinerary. We uncover these insights constantly when working with customers to deploy their IVAs.
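To make the difference between the two evaluation questions concrete, here is a minimal sketch in Python. The field names and the evaluate_interaction function are purely illustrative assumptions, not Next IT tooling; the point is simply how the same conversation can score differently under each question.

```python
from dataclasses import dataclass

@dataclass
class Conversation:
    user_goal_met: bool        # did the user get what they originally asked for?
    user_sentiment: str        # e.g. "negative", "neutral", "positive"
    failure_cause: str         # "iva_error" or "external_constraint" (e.g. sold out)
    alternative_offered: bool  # did the IVA propose a next-best option?

def evaluate_interaction(conv: Conversation) -> dict:
    """Score one conversation from two perspectives."""
    # Question 1: "Did we satisfy our end user?" -- raw outcome and sentiment.
    user_satisfied = conv.user_goal_met and conv.user_sentiment != "negative"

    # Question 2: "Given what I know about my business and my customers,
    # did the IVA provide as much value as possible?" -- set aside factors
    # the IVA cannot control, such as sold-out inventory.
    if conv.failure_cause == "external_constraint":
        iva_delivered_value = conv.user_goal_met or conv.alternative_offered
    else:
        iva_delivered_value = conv.user_goal_met

    return {"user_satisfied": user_satisfied,
            "iva_delivered_value": iva_delivered_value}

# The sold-out-tickets scenario: the user is unhappy, yet the IVA did
# everything it could, including offering an alternative itinerary.
print(evaluate_interaction(Conversation(
    user_goal_met=False, user_sentiment="negative",
    failure_cause="external_constraint", alternative_offered=True)))
# -> {'user_satisfied': False, 'iva_delivered_value': True}
```

Keeping the two signals separate is what lets you record the user’s negative sentiment without penalizing the IVA for inventory it cannot control.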

When paired with machine analysis, human-driven evaluations tell us how well an IVA is performing given your business goals, your users’ needs, and fluid real-world conditions. The result is a 360-degree review of the IVA’s job performance that generates continuous opportunities to increase its business value.