Virtual Agent Personality: The Other Responsive Design

In 1950, Alan Turing described what he termed an “imitation game,” in which an interrogator talks via a text interface to two entities, one a human and one a computer. Through conversation alone, the interrogator must determine which is the human and which is the computer. Turing predicted that “in about 50 years’ time it will be possible to program computers [...] to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning.”1

While the year 2000 has come and gone and we’re still (mostly!) confident in determining to whom we’re speaking when conversing online, the questions we ask about what makes an interaction or response sound human or electronic remain fascinating. In 1950, “computer” was, in fact, a job fulfilled by a person. Thus, to be “like a computer” was a compliment to the generations of men and women who staffed phones and field tents, doing the math for everything from the Manhattan Project’s research team to the survey that calculated the height of Mount Everest2. Now, however, the tables have turned: “it is the human ‘computer’ that is relegated to the illegitimacy of the figurative [...] in the twenty-first century, it’s the human math whiz who is ‘like a [digital] computer’.”3 In the same 1950 article, Turing wrote that “digital computers can be constructed, and indeed have been constructed [...] and that they can in fact mimic the actions of a human computer very closely.” How the world has changed since then!

We have an expectation, in 2015, that computers are, in many ways, perfect systems. They should give an immediate (or near-immediate) and programmatically correct response, while a human’s response may be fallible, slow and, for better or worse, open to being swayed by argument or emotion. For math equations, this kind of blunt logic is reassuring (your calculator won’t give you the wrong answer because it’s having a bad hair day), but for more personal online interactions, some semblance of the individual touch becomes prized.

It isn’t feasible for human interlocutors to stand behind such massive systems as Siri, or Next IT’s own Jenn, SGT STAR and Ask Charter, but the expectation that these electronic interactions can still be personable, charming and human-like is a bar that we, as natural language programmers, continue to raise.

Designing the conversations and responses for a natural language modeling system, such as the ones we build at Next IT, requires that we take exquisite care to embody both the instant responsiveness of a computer and the empathy of a human dialogue partner. We must be cognizant that while our Intelligent Virtual Assistants (IVAs, Next IT’s proprietary term for our “bots”) respond to thousands of users’ questions per day (or per hour!), they are doing so without direct supervision. We must be confident in our designs and their ability to assist with both pure fact-finding and the more intangible aspects of a conversation. The last thing we want to be is Clippy, who, while technically helpful (and we could all use a reminder on how to use semicolons correctly sometimes!), was also frustrating due to his tone-deafness about how and when you actually wanted or needed his assistance.

Eugene Demchenko and Vladimir Veselov, the creators of the chatterbot Eugene Goostman, which passed a Turing Test in 2014, write that “when making a bot, you don’t write a program, you write a novel,” and that the ideal process for composing responses is for the team to collaborate on the bot’s persona.4 This intensive, hands-on approach to writing the responses that appear in our IVAs is fundamental to each team of content writers and natural language model developers here at Next IT.

The “personality” of our IVAs is apparent in the human-interest responses that many of them give: asking Emily, on the ImCovered.com site, “Want to go on a date?” results in her gently rebuffing my advances with “Sorry, I’m off the market. I’m happily married to my job,” while Jenn, on AlaskaAir.com, informs me that she’s an Aquarius if asked about astrology. All of these interactions are created with our clients’ assistance, and each one is customized based on the IVA’s intended purpose and domain.
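To make that layering concrete, here is a minimal sketch in Python of how persona-specific small talk might sit on top of a shared set of intents. Every name in it (SMALL_TALK_INTENTS, PERSONA_ANSWERS, respond and so on) is hypothetical and invented for illustration; this is not Next IT’s actual implementation, whose language modeling is far more sophisticated than substring matching.

```python
# A hypothetical sketch, not Next IT's platform: persona-specific answers
# layered over a shared set of small-talk intents.

SMALL_TALK_INTENTS = {
    "ask_on_date": ["want to go on a date", "will you go out with me"],
    "ask_astrology": ["astrology", "astrological sign", "zodiac"],
}

# Each IVA overrides only the small-talk answers its client has approved.
PERSONA_ANSWERS = {
    "emily": {
        "ask_on_date": "Sorry, I'm off the market. I'm happily married to my job.",
    },
    "jenn": {
        "ask_astrology": "I'm an Aquarius.",
    },
}

# Neutral defaults keep an IVA from going silent on unconfigured intents.
DEFAULT_ANSWERS = {
    "ask_on_date": "I'm flattered, but I'm here to answer your questions.",
    "ask_astrology": "I don't follow astrology, but I'm happy to help otherwise.",
}


def classify(utterance: str) -> str | None:
    """Toy intent matcher: substring match against known phrasings."""
    text = utterance.lower()
    for intent, phrasings in SMALL_TALK_INTENTS.items():
        if any(p in text for p in phrasings):
            return intent
    return None


def respond(persona: str, utterance: str) -> str:
    intent = classify(utterance)
    if intent is None:
        return "Let me look into that for you."  # hand off to the task model
    # Prefer the client-approved persona answer, then the neutral default.
    return PERSONA_ANSWERS.get(persona, {}).get(intent, DEFAULT_ANSWERS[intent])


print(respond("emily", "Want to go on a date?"))
# -> Sorry, I'm off the market. I'm happily married to my job.
```

The point of the structure is the layering: the small-talk intents are shared across IVAs, each client signs off on its persona’s answers, and anything unconfigured falls back to a neutral default rather than silence.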

Even more importantly, each virtual assistant’s persona permeates its knowledge base. This is where the real “novel-writing” comes into play: creating and refining the tone of an IVA’s interactions is an ongoing process that can vastly change how it and the user interact. Is this avatar going to be strait-laced and formal? Is it going to use contractions and conversational slang? Is it going to proactively ask for more information and take control of an interaction (in what Next IT terms a “mixed-initiative dialogue”) when necessary? Each of these determinations can result in a very different tone for similar conversation paths, as the sketch below illustrates.
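One way to imagine those determinations is as a persona configuration that selects among response variants the content team has already written. The sketch below is an assumption about structure, not a description of Next IT’s platform, and names like PersonaConfig and render are invented for illustration.

```python
# A hypothetical sketch of tone decisions expressed as persona configuration.
# Names like PersonaConfig are invented here, not Next IT terminology.

from dataclasses import dataclass


@dataclass
class PersonaConfig:
    formal: bool = False     # strait-laced wording vs. contractions and slang
    proactive: bool = False  # may take control and ask follow-up questions


# The same conversation path, authored in two tones by the content team.
PASSWORD_RESET_VARIANTS = {
    "formal": "I would be glad to assist you with resetting your password.",
    "casual": "Sure, I can help you reset your password!",
}
FOLLOW_UP = "Which account are you having trouble signing in to?"


def render(config: PersonaConfig) -> str:
    reply = PASSWORD_RESET_VARIANTS["formal" if config.formal else "casual"]
    if config.proactive:
        # Mixed-initiative dialogue: rather than waiting for the user to
        # volunteer details, the IVA asks for the information it needs.
        reply = f"{reply} {FOLLOW_UP}"
    return reply


print(render(PersonaConfig(formal=True)))
print(render(PersonaConfig(proactive=True)))
```

Flipping a single flag yields a noticeably different tone for the same conversation path, which is exactly why these determinations deserve so much ongoing care.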

This type of subtle human emulation is not as flashy as some of the technical or data-mining magic that Next IT’s developers are able to do. However, designing the interactions themselves, making the IVA a true partner in the conversation, can make a dramatic difference in how users interact with our technology. Our goal is not merely to imitate human interaction, but to emulate it in a way that leverages the personal touch of the teams that build our IVAs and the vast power of 2015’s digital technology in order to best help users.

While Next IT does not intend to deceive consumers into thinking that our IVAs are real people, we pride ourselves on being able to craft informative and helpful electronic conversations with a distinctly human touch.

Sources

1 Turing, A.M. (1950). “Computing Machinery and Intelligence.” Mind, 59, 433-460.

2 “Radhanath Sikdar.” Wikipedia. Accessed January 18, 2015. http://en.wikipedia.org/wiki/Radhanath_Sikdar.

3 Christian, B. (2011). The Most Human Human: What Talking with Computers Teaches Us About What It Means to Be Alive (p. 11). New York: Doubleday.

4 Epstein, R. (2009). Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer (p. 458). Dordrecht: Springer.