The Alme Platform vs. Watson: Why understanding is better than discovery

To the casual observer, the homes of the Three Little Pigs would seem to have a lot in common: each featured structural elements like walls, doors and windows, and each seemed to do a great job of keeping a pig comfortable. Their fundamental differences were less obvious, though, until the wolf put each structure to the test. With virtual assistant technologies, it is difficult to point to a list of software features and distinguish one solution from another. However, there are fundamental differences between them, and you can believe that the millions of questions asked by end users will put each agent technology through its paces.

One virtual assistant technology, digital discovery, follows the model of search: it enables a user to query a vast dataset. A foundational goal of these systems is breadth of coverage (recall): ensuring that if you are looking for information, you can find all the relevant documents. The advantage of such statistical approaches is that large volumes of knowledge can be made available with very little manual effort. These systems, though, were not designed to address the specific need of any one user in a given situation.
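
To make that recall-versus-precision trade-off concrete, consider a minimal sketch of keyword retrieval over a handful of invented documents. The documents, the query and the relevance judgments are all made up for illustration; this is not a depiction of any vendor's system.

    # Keyword retrieval over a toy document set (all content invented).
    documents = {
        "doc1": "change your flight reservation online",
        "doc2": "flight change fees and policies",
        "doc3": "baggage allowance for your next flight",
        "doc4": "how to check in for your flight",
    }

    # Documents a human would judge relevant to the query (assumed labels).
    relevant = {"doc1", "doc2"}

    def keyword_search(query, docs):
        """Return every document that shares at least one term with the query."""
        terms = set(query.lower().split())
        return {doc_id for doc_id, text in docs.items() if terms & set(text.split())}

    retrieved = keyword_search("change flight", documents)

    recall = len(retrieved & relevant) / len(relevant)      # how much relevant material was found
    precision = len(retrieved & relevant) / len(retrieved)  # how much of what was found is relevant

    print(sorted(retrieved))   # ['doc1', 'doc2', 'doc3', 'doc4']
    print(recall, precision)   # 1.0 0.5 -- broad coverage, but half is noise

High recall with low precision is exactly the behavior described above: everything relevant surfaces, but so does material that has nothing to do with this particular user's situation.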

Historically, search engines have not been good at bridging the gap between the language, or terms, in which questions are expressed and the language contained in the documents being searched. This is the primary limitation Watson has overcome: by mapping relationships between data points, it can go beyond shared keywords and connect related concepts.
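
As a toy illustration of what concept-level matching adds over raw keyword overlap, consider the sketch below. The question, the document and the tiny hand-made concept map are invented; they say nothing about how Watson actually represents relationships between concepts.

    # Raw keyword overlap versus a hand-made concept map (all content invented).
    question = "how do I cancel my reservation"
    document = "refund policy for voided bookings"

    # Surface terms mapped to shared concept identifiers by a human editor.
    concepts = {
        "cancel": "CANCEL", "void": "CANCEL", "voided": "CANCEL",
        "reservation": "BOOKING", "booking": "BOOKING", "bookings": "BOOKING",
        "refund": "REFUND",
    }

    def keywords(text):
        return set(text.lower().split())

    def concept_ids(text):
        return {concepts[w] for w in keywords(text) if w in concepts}

    # The two texts share no literal terms, so keyword overlap finds nothing.
    print(keywords(question) & keywords(document))                 # set()

    # Concept-level overlap links "cancel"/"voided" and "reservation"/"bookings".
    print(sorted(concept_ids(question) & concept_ids(document)))   # ['BOOKING', 'CANCEL']

That kind of mapping is what lets a system return an answer even when the asker and the author chose different words.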

But this is not digital understanding. In a way, IBM cheated when they entered Watson in a Jeopardy! match, because the game provides a “fact” that is explicit and distinct. This removes the primary challenge of digital understanding – namely, dealing with ambiguity. Taking a search-like index and doing data lookups made the match an exercise in correlation rather than understanding.

IBM has not overcome the challenge of digital inference, and this is where Next IT has changed the natural-language processing game. Rather than attempting to build a statistical inference engine to resolve ambiguity, we leveraged humans and created a system into which human inference could be codified.

This gets back to the fundamental difference between the two systems. Next IT began with the design goal of emulating humans: a conversational approach that enables digital understanding within a focused domain. IBM has solved an entirely different problem, which positions them to be very effective for applications such as domain-specific search or voice search.

This is not to take anything away from IBM – they have made great strides in natural-language search and in the ability to mesh the language of our questions with the language found in digital content, giving Watson the precision to return a single answer.

But simply placing an avatar in front of what is effectively natural-language search will not achieve the resonance with end users that results from human emulation.

When it comes to truly emulating humans, we must leverage human inference – something that cannot yet be encapsulated in an algorithm. Acting on the belief that all we need is “more data” to replicate human understanding is not going to make technology come across as less clumsy or unnatural. An interaction can only feel human-like if the system behind it has encapsulated and leveraged actual human reasoning in establishing its answers.

Rather than basing our solution primarily on the automated digestion of data, Next IT made a conscious decision to develop tools and an approach that focus on accurately codifying human inference. Our tools take advantage of real human insight when evaluating intent, and they use that insight to generate the correlating patterns that enable them to respond appropriately. So much of human-to-human communication is inferred from our relationships, our culture, our experiences or our situation that arriving at the true intent of a message depends heavily on that inference.
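
As a rough sketch of what codifying human inference into explicit patterns can look like, consider the example below. The intents, patterns and responses are invented for illustration and are not Next IT's actual tooling or content.

    import re

    # Hand-authored intent patterns (intents, patterns and responses invented).
    # Each pattern encodes a human judgment about what a user really means.
    INTENT_PATTERNS = [
        # A traveler asking to "bring my dog" is almost always asking about
        # the pet policy for a flight, not about dogs in general.
        (r"\b(bring|take)\b.*\b(dog|cat|pet)\b", "pet_policy"),
        # "My flight got pushed" is, by human inference, a schedule-change issue.
        (r"\b(delayed|pushed|moved|rescheduled)\b.*\bflight\b", "schedule_change"),
        (r"\bflight\b.*\b(delayed|pushed|moved|rescheduled)\b", "schedule_change"),
    ]

    RESPONSES = {
        "pet_policy": "Here is how to travel with a pet in the cabin...",
        "schedule_change": "Here is what to do when your flight time changes...",
        None: "Could you tell me a bit more about what you need?",
    }

    def classify(utterance):
        """Return the first intent whose hand-written pattern matches."""
        text = utterance.lower()
        for pattern, intent in INTENT_PATTERNS:
            if re.search(pattern, text):
                return intent
        return None

    for utterance in ["Can I bring my dog on board?",
                      "My flight got pushed to tomorrow"]:
        print(classify(utterance), "->", RESPONSES[classify(utterance)])

The point is not the patterns themselves but where the knowledge lives: a person decided what “pushed” means in the context of a flight, and the tooling carries that judgment into the system.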

It is these tools that allow us to tailor our understanding with a very high degree of granularity. Our natural-language modelers can dig into very specific instances and leverage real human reasoning to tune our systems, rather than depending on abstracted engineering techniques that attempt to tune an algorithm generically across an entire domain.

It is this fundamental difference in understanding that allows virtual assistants like Alaska Airlines’ Jenn and the U.S. Army’s SGT STAR to stand apart in the world today.