Frameworks for Building Trust in Conversational AI: Brené Brown and B.R.A.V.I.N.G.

At Next IT, we think a lot about trust. It's the core of any successful communication, whether within a company, in our relationships, or simply in our daily interactions with others in the world. Trust is the cornerstone of understanding.

Likewise, trust is a necessary consideration when we build our digital assistants and AI systems. Users must be able to trust their interactions with intelligent virtual assistants, so how our systems build that trust is a critical question, and one that is unique to every situation and deployment.

Fortunately, trust-building is a well-studied topic. Recently I've been thinking about Brené Brown's (very human) framework for establishing trust, and how it might be applied to our work. Dr. Brown is known for her research and theories on trust and vulnerability, and is the author of bestsellers like Daring Greatly.

In her most recent book, Rising Strong, Dr. Brown proposes a structure for thinking about building trust, which she fits into the mnemonic “B.R.A.V.I.N.G.” For Brown, trust is formed through the essential elements of Boundaries, Reliability, Accountability, Vault, Integrity, Non-Judgement, and Generosity.

When we apply Dr. Brown's framework to conversational AI, we can extract some valuable insights into how to create intelligent systems that we can trust, especially as we become more reliant upon them.

Let’s take a look at each category and see how they can apply to our development of AI and rules for establishing better trust with users.

Boundaries

For Brown, understanding boundaries is the first step in establishing trust between people. We need to know what the limits are, both for ourselves and for the person with whom we’re interacting.

In the same way, boundaries are essential to the operation of AI. This applies not only to how the software should interact with users, but also to the work it does. We can't expect AI to understand everything, but it can be very effective when it has focused domain expertise. Given the proper boundaries of knowledge (e.g., understanding financial systems, travel restrictions, etc.), it becomes more effective and expert within those boundaries, and can provide help that users can trust.
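As a minimal sketch of what a knowledge boundary can look like in practice, consider an assistant that only answers within its defined domains and declines everything else. The domain names and the simple keyword matcher here are illustrative assumptions, not a real Next IT API.

```python
# Hypothetical sketch: an assistant that stays inside defined knowledge
# boundaries. Domains and their vocabularies are illustrative only.
SUPPORTED_DOMAINS = {
    "travel": {"flight", "booking", "baggage", "itinerary"},
    "billing": {"invoice", "refund", "payment", "charge"},
}

def route_query(query: str) -> str:
    words = set(query.lower().split())
    for domain, vocabulary in SUPPORTED_DOMAINS.items():
        if words & vocabulary:
            return f"handled by {domain} expert"
    # Out-of-boundary requests are declined explicitly rather than guessed at.
    return ("I can help with travel and billing questions; "
            "for anything else, let me connect you with an agent.")
```

Declining out-of-scope questions openly, rather than guessing, is itself a trust signal: the assistant is honest about where its expertise ends.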

Reliability

Reliability, for Dr. Brown, emphasizes actually accomplishing what we say we are going to do: following through on actions, and even on availability, within the boundaries set. That assurance of follow-through is essential to any relationship.

For conversational AI, reliability is fundamental. Can users rely on the AI to do tasks correctly, to provide accurate information, and to be available to help when needed? Once reliability is established, then interaction with a digital assistant can actually become more trusted by users than conversing with a human. This is one reason we have seen continued rising satisfaction with self-service in customer care — our digital assistants and interactions are becoming reliable enough that human-to-human interactions now feel slow and complex by comparison.

Accountability

Accountability is not only about recognizing your mistakes, but also about correcting them. It is crucial to trust between people, especially when trust has been broken, and it is equally important in our interactions with intelligent software.

AI systems will inevitably fail at times, and when they do, they need to fail elegantly. Even more importantly, they need to be able to offer other paths to resolution. For businesses, this means that these systems must have human oversight and intervention baked into every level of the software.
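One way to picture "failing elegantly" is an assistant that stops guessing after repeated low-confidence turns and offers a human instead. The confidence threshold, failure counter, and handoff signal below are assumptions for illustration, not a prescribed design.

```python
# Illustrative sketch: after repeated low-confidence turns, escalate to a
# human rather than continuing to guess. Thresholds are assumed values.
ESCALATION_THRESHOLD = 2  # consecutive misses before human handoff

def respond(confidence: float, failure_count: int) -> tuple[str, int]:
    if confidence >= 0.7:
        return "answer", 0                        # confident: answer, reset count
    failure_count += 1
    if failure_count >= ESCALATION_THRESHOLD:
        return "handoff_to_human", failure_count  # offer another path to resolution
    return "ask_for_rephrase", failure_count      # admit the miss, try once more
```

The key design choice is that the human path is built into the conversation loop itself, not bolted on afterward, which is what "baked into every level" implies.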

Vault

In Rising Strong, Brown uses Vault to encompass the necessary sense of discretion that trust requires. Confidences need to be kept between trusted partners.

Vault translates quite easily and obviously to conversational AI development. Users need to be assured that their information is safe when using these technologies. This can apply to the basics of privacy and security, but also to sensitive information like medical records. Importantly, AI-based technologies like intelligent virtual assistants have, over the years, earned a greater degree of human trust with sensitive information and questions. In this case, it is the very absence of a human that makes the AI more trustworthy.
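A small, concrete piece of the Vault idea is scrubbing obvious sensitive identifiers from a transcript before it is ever stored or logged. The patterns below are deliberately simplified illustrations, not production-grade PII detection.

```python
import re

# Sketch of "Vault" in practice: redact sensitive identifiers before a
# conversation transcript is logged. Patterns are simplified examples.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def redact(utterance: str) -> str:
    for label, pattern in PATTERNS.items():
        utterance = pattern.sub(f"[{label.upper()} REDACTED]", utterance)
    return utterance
```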

Integrity

Integrity relates closely to Accountability, but Brown takes the notion one step further. To her, Integrity is about choosing to do what is right over what might be easy, and the same is certainly true when we think about developing conversational AI.

Businesses may be driven by bottom-line efficiency, but the integrity of your system is essential, and not just in the security sense. A smart AI solution with a poor user experience is the classic failure of integrity: when the small things are wrong, we are not likely to trust the assistant with the big things.

Non-Judgement

In our interactions with trusted friends, we need to feel that we are able to ask for what we need, and not worry about how the other person will react or judge us.

While we might think non-judgement would be inherent in software, we've seen many instances of software carrying biases of its own. Companies need to be cognizant of the potential bias they are authoring into their code. As AI becomes more personalized, the contextual assumptions it makes will need to be increasingly accurate. Which brings us to Dr. Brown's final essential element of trust…

Generosity

Generosity ties to non-judgmental interpretation, but it is also a matter of understanding intentions. It means giving the other person the benefit of the doubt, not jumping to conclusions, and operating with empathy and openness.

For AI, this notion can unfold within the very personality that we define for the assistant and how it interacts. It means building a generous context of intention, asking clarifying questions to ensure accuracy, and giving the user the benefit of the doubt. Intelligent systems should seek to augment human capabilities rather than replace them, and they require generosity in their interactions to truly serve us well.
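"Asking clarifying questions to ensure accuracy" can be sketched very simply: when two candidate intents score close together, a generous assistant asks rather than guesses. The margin value and intent names below are hypothetical.

```python
# Hedged sketch: ask a clarifying question when intent scores are too close
# to call, instead of silently picking one. Margin is an assumed value.
AMBIGUITY_MARGIN = 0.1

def choose_intent(scores: dict[str, float]) -> str:
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    (top, top_score), (runner_up, runner_score) = ranked[0], ranked[1]
    if top_score - runner_score < AMBIGUITY_MARGIN:
        # Too close to call: give the user the benefit of the doubt and ask.
        return f"Did you mean {top} or {runner_up}?"
    return top
```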

Dr. Brown’s B.R.A.V.I.N.G. construct for trust provides some interesting ways for us to begin thinking about how we can better build digital trust. As we rely more and more on intelligent systems, we need to be able to trust our interactions with them.

Likewise, as companies begin deploying AI across their business for customer service, human capital management, and internal knowledge management, they need to ensure that those systems are built with establishing and maintaining the end user's trust in mind. Without trust in our digital assistants, they can never reach their full potential.