An Understanding of Next IT IVAs

“So, what do you do?” It’s a question that most Next IT employees have a love-hate relationship with. For some of us, a simple answer can suffice. I usually say, “I’m a trainer at a software company,” but that’s often followed up by, “Oh, what kind of software?” Then the conversation gets a little more interesting. Ideally, I am able to pause, take a breath, and say something like, “At Next IT, we create Human Emulation Technology that our clients use to host Intelligent Virtual Assistants that provide great customer service through normal conversation and interactions.” The response is usually, “WOW!” But if, instead, my explanation produces a blank look, I usually say something like, “You know how you can click on a ‘live chat’ link on a lot of websites? We make software like that, but instead of talking to a real person, you’re talking to artificially intelligent software, and it’s way better.” I hate answering that way, because it really doesn’t do justice to all that we create at Next IT.

For those of us who teach (or have taught) our IVAs to understand end users' behavior and natural language inputs, the answer to "what do you do?" is more interesting still. We have high expectations of the IVAs that we send out the door to interact with the world: they not only have to look good and sound good, they have to figure out who you are and what you're saying. This is no easy task! But it is at the heart of our natural language team's work and drives them every day: based on the (sometimes minimal) details we know of the user's profile and behavior and the (often convoluted) words they use to convey their intent, how do we get that user the single, exact answer they need, immediately, every time?

It’s pretty exciting to be able to work on the “guts” of such a complex piece of technology. I tell the Natural Language Model Technicians I train that we should think of it as if we’re raising a human child: first we have to teach her to understand words, then ideas, then complex combinations of ideas. It’s not going to be very effective to teach her rules like, “every time someone uses the words ‘flight’ and ‘book’ and ‘need,’ say ‘Here’s the page you’re looking for,’ and send them to the form to purchase a flight.” First of all, it’s obviously impossible to catalog every combination of words a user might enter and force a response. That type of question mapping is a no-no, and avoiding it is one of the major differentiators between Next IT and its competitors. Second, it’s quite easy to think of a situation in which that rule could go terribly wrong. (How about, “I left my book on my flight yesterday and I need to get it back ASAP.” Oops!) Instead, let’s teach the IVA our proprietary markup language that allows her to understand the true, underlying intent that her users present her with. After all, most humans know what you mean when you say, “Book a flight from GEG to PDX, leaving at 9 PM on 4/28/12 and returning at 8 AM on 4/30/12.” And most humans understand that you’re seeking to complete the exact same action when you say something like, “help!!! I need to fly from Spokane to Portland tomorrow night and get back on Mon. morning in time to make it back to Spokane before 10!” We ensure that either way, our IVA understands that you intend to book a flight on certain dates and times and assists you through the process with ease and a human touch. That’s no easy task.
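To make the contrast concrete, here is a toy sketch in Python. It is not Next IT's technology, markup language, or code; the intent labels and patterns are purely hypothetical. It simply shows why a bag-of-words rule keyed on "book," "flight," and "need" misfires on the lost-book example, while even a crude intent-style check that looks at how the words relate to one another keeps the two requests apart.

```python
import re
from typing import Optional

def keyword_rule(text: str) -> Optional[str]:
    """Naive question mapping: fire whenever 'book', 'flight', and 'need' all appear."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    if {"book", "flight", "need"} <= words:
        return "book_flight"
    return None

def intent_rule(text: str) -> Optional[str]:
    """Toy intent patterns: care about how the words relate, not just that they appear."""
    t = text.lower()
    if re.search(r"\bleft\b.{0,40}\bon (my |the )?flight\b", t):
        return "lost_item"    # something was left behind on a flight
    if re.search(r"\bbook(ing)?\s+(a\s+|another\s+)?flights?\b", t) or re.search(r"\bneed to fly\b", t):
        return "book_flight"  # the user wants to purchase a flight
    return None

examples = [
    "I need to book a flight from GEG to PDX",
    "help!!! I need to fly from Spokane to Portland tomorrow night",
    "I left my book on my flight yesterday and I need to get it back ASAP",
]
for e in examples:
    print(f"keyword: {keyword_rule(e)!s:12} intent: {intent_rule(e)!s:12} <- {e}")
```

Hand-written patterns like these are only a stand-in, of course; the point is that an intent-based approach keys on what the user is actually trying to do, which is what lets an IVA route both flight-booking phrasings to the same action and send the lost-book request somewhere else entirely.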

So, what do we do? We join the grand tradition of building another persona outside of ourselves that will continue to work and interact even after we’re gone – a legacy of service that we can be proud of. There’s not always a short answer, but it’s crucial that we are able to articulate it: it’s the reason that we come to work each and every day.