While the benefits of chatbots are extensive, my recent experiments with them have highlighted that making a good one isn’t as easy as it might seem.
The proliferation of AI-powered chatbots across digital products sends a strong signal: customers want to engage in conversations with technology to have their problems solved.
When designed well, chatbots offer an engaging and more natural way for users to interact with technology. They tend to be available 24/7 and have access to complete information about a company's products and services.
But even though new cohorts of chatbots boast ultrashort response times and seem near omniscient, we’re often left unimpressed with how the conversations go.
First, even with the latest available technology, it’s hard to build a conversational chatbot if you don’t have a rudimentary understanding of what makes a conversation effective.
Spoiler alert: it’s not the quantity and breadth of AI-generated answers. And it certainly is not a quirky sense of humour, or a sleek avatar slapped on top.
Secondly, regardless of the technology being used under the hood to drive the conversation, people’s behaviours and needs have not fundamentally changed.
Therefore, a chatbot design process must be information-led and based on the principles of a human-centred approach to design.
Thirdly, as you develop your chatbot, your north star should be building users’ trust in it, which will then reflect on your brand. Once again, that trust is not a product of technological investment, but of intentional decisions made by designers.
Rules of conversation, human-centred design, building trust... You see where I’m going with this. These are not challenges handled by technology. This needs to be sorted by humans.
Let’s discuss how you can address each of the challenges below.
When we talk to bots, we usually expect them to communicate with us in a way that resembles a conversation with humans. The better a chatbot can do that, the more likely users are to take it seriously, and perhaps even enjoy interacting with it.
Conversation designers have quite a few tools under their belts that allow them to create that human-like impression in chatbots.
Some of these tools are concepts and frameworks drawn from pragmatics, a branch of linguistics: in particular, the cooperative principle and the maxims of conversation introduced by the philosopher of language Paul Grice back in 1975.
These explain how all effective communication follows certain norms, and chatbots are no exception to the rule.
If you want your chatbot to hold successful conversations, it needs to be crafted to follow these principles: the maxim of quantity (be as informative as required, and no more), the maxim of quality (don’t say what you believe to be false or lack evidence for), the maxim of relation (be relevant), and the maxim of manner (be clear, brief, and orderly).
Therein lies the challenge with popular AI chatbot tools like ChatGPT. Unless you specifically prompt them to be succinct and stay on topic, they will typically overwhelm users with lengthy prose: a diversity of views, a generous historical background, and a string of qualifying statements that nobody asked for.
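To make this concrete, here is one way such an instruction could encode the four maxims in a system prompt. The wording below is a hypothetical sketch, not a benchmarked prompt, and it assumes a chat-style API that accepts a list of role-tagged messages:

```python
# A hypothetical system prompt that operationalises Grice's four maxims.
# The exact wording is an illustrative assumption, not a tested prompt.
SYSTEM_PROMPT = (
    "You are a customer-support assistant.\n"
    "- Quantity: answer in at most three sentences; add detail only on request.\n"
    "- Quality: if you are not sure of a fact, say so instead of guessing.\n"
    "- Relation: answer only the question asked; do not volunteer background.\n"
    "- Manner: use plain language; avoid hedging and jargon.\n"
)

def build_messages(user_query: str) -> list[dict]:
    """Assemble the message list expected by most chat-completion APIs."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_query},
    ]
```

The point is not the specific phrasing but that the constraints are stated up front, for every turn, rather than left to the model's defaults.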
Even with so-called data store agents, answer reliability tends to be low, and the risk of the bot going off track and generating blatantly misleading answers is high. The latest example is the now-infamous New York City chatbot.
In practical terms, ensuring a chatbot’s compliance with the four maxims means development teams working hand in hand with conversation designers, and the bot’s outputs being monitored on an ongoing basis.
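Ongoing monitoring can start with simple automated checks over logged conversations. The heuristics below are illustrative assumptions (a real team would tune thresholds and use far better relevance measures); they flag answers likely to violate the maxims of quantity or relation:

```python
def flag_response(question: str, answer: str, max_words: int = 80) -> list[str]:
    """Return maxim-violation flags for one logged bot answer.

    Thresholds and heuristics are illustrative placeholders, not a
    production-grade quality check.
    """
    flags = []
    # Quantity: answers far longer than needed overwhelm users.
    if len(answer.split()) > max_words:
        flags.append("quantity: answer exceeds word budget")
    # Relation: crude relevance check via word overlap with the question.
    q_words = {w.lower().strip(".,?!") for w in question.split()}
    a_words = {w.lower().strip(".,?!") for w in answer.split()}
    if q_words and not q_words & a_words:
        flags.append("relation: answer shares no vocabulary with the question")
    return flags
```

Even crude checks like these surface regressions early; flagged conversations can then be reviewed by the conversation designer.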
If you’re planning to build a chatbot but have no budget for design support, at least keep those maxims in mind before you deem your bot ready for deployment.
There are a few questions that companies should ask themselves before they attempt to build a chatbot:
Having considered the questions above, do we still think we need one? If you’re sure, several other considerations must be addressed to make sure that your chatbot is successful:
These are all questions that can be answered easily when chatbot development follows tried and tested UX design processes.
Not only are they built around extensive research into user needs but they also come with methods to ensure the accessibility, usability, and effectiveness of the final product.
It’s a safe bet that following UX best practices, rather than gut feelings or assumptions, will lead to the creation of a superior conversational solution – or any digital or physical solution for that matter.
In the world of business (and healthy long-term relationships) trust is everything.
Building trust in your product or service should be top of your list of priorities. You can achieve that by delivering reliable and carefully crafted solutions that consistently provide value to users.
In relation to chatbots, their greatest value lies in providing correct, up-to-date, and relevant information. This is particularly true in the domains where both stakes and people’s emotions are high, such as healthcare, finance, and legal services.
Trust is, of course, a complex and fickle phenomenon. When your customers have a history of positive experience with your brand, they’re likely to view your chatbot as trustworthy. That means they will be more inclined to engage with it and rely on its answers.
Now, if your chatbot can’t quite deliver that, the trust users have in it will promptly evaporate and, by extension, their overall perception of your brand will also suffer.
Let me reiterate the vital message: to build trust, chatbots must reliably provide accurate information.
Another important component of trust is transparency. Make sure your chatbot makes it clear from the start that users are interacting with an AI rather than a human agent. This helps manage expectations and prevents any feelings of deception. Your chatbot should also be clear about its capabilities, limitations, and the information it might require from users.
If a chatbot requests access to specific user data, it should explain fair and square why this information is necessary and how it will be used, stored, and protected. There’s nothing worse than an AI engineered to be sneaky under the guise of impartial politeness.
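A transparent data request can be as simple as a templated disclosure that states what is collected, why, and for how long it is kept. The function name, fields, and wording below are hypothetical placeholders, not vetted legal or UX copy:

```python
def data_request_notice(field: str, purpose: str, retention: str) -> str:
    """Build a plain-language disclosure shown before asking for user data.

    All wording is an illustrative assumption; real copy needs review by
    legal and UX specialists.
    """
    return (
        f"To help with this, I need your {field}. "
        f"It will be used only to {purpose} and deleted after {retention}. "
        "You can decline, and a human agent will assist you instead."
    )
```

Note the explicit opt-out: offering a human fallback is itself a trust signal.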
We all make mistakes. It’s how you handle them that matters. That goes for chatbots, too.
Just like in human-to-human conversations, users expect chatbots to gracefully handle mistakes or misunderstandings.
When your user provides an unclear or ambiguous query, your chatbot should request clarifications rather than offer an irrelevant or incorrect response – just for the sake of saying something.
In fact, one of the hallmarks of the most advanced tools is their ability to guide users back on track after an error.
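A minimal sketch of that behaviour, assuming a toy keyword-based intent matcher (real systems would use an NLU model; all intent names and replies here are hypothetical): when a query matches no intent, or more than one, the bot asks for clarification instead of guessing.

```python
# Toy intent table mapping intents to canned replies. All entries hypothetical.
INTENTS = {
    "track order": "Your order status page is linked in your confirmation email.",
    "cancel order": "You can cancel within 24 hours from your account page.",
}

def respond(query: str) -> str:
    """Answer a query, or ask for clarification when the intent is unclear."""
    q = query.lower()
    matches = [intent for intent in INTENTS if all(w in q for w in intent.split())]
    if len(matches) == 1:
        return INTENTS[matches[0]]
    if len(matches) > 1:
        # Ambiguous query: disambiguate rather than pick arbitrarily.
        return "Do you want to " + " or ".join(matches) + "?"
    # Graceful recovery: request clarification rather than guess.
    return "I'm not sure I understood. Could you tell me more about what you need?"
```

The design choice worth noting is the explicit "no confident match" branch: the fallback path is designed, not left to chance.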
How do you implement all these recommendations in practice? There’s no easy route.
Ticking all the boxes above will require careful planning, considerable effort at the design stage, thorough testing, and careful monitoring when the tool goes live.
Sounds like a lot of work? Absolutely! After all, we all know that having authentic and engaging conversations can be challenging even for human beings.