Chatbots are one of the latest trends in marketing and can often speed up engagement. However, they have their issues, and the technology is still developing. When should you not use a chatbot? What are the pitfalls? Here are some things to consider:
- A chatbot that keeps pestering users will drive them away. A discreet icon in the corner works well; a pop-up is likely to annoy site visitors. If you do go with a pop-up, take steps to keep the chat from opening every time the user loads a new page.
- Do not try to disguise your chatbot as a human. Chatbots are not an opportunity to try a Turing test (that your bot will surely fail) - they are tools for more convenient engagement.
- Always have an exit option. It's possible to get stuck in a chatbot loop. This is particularly important for in-app chat boxes. Web ones can often be easily closed.
- Provide a proper human escalation process so that users are not left struggling to get attention. We have all dealt with phone trees where it takes five or ten minutes to reach a real person. One good option with current chatbot technology is to code the bot to recognize specific phrases such as "I need to talk to a human."
- Consider tone. Chatbots can sometimes say things which are not just incorrect but insensitive, such as referring to dead relatives or being perky when the caller or user is, say, making an insurance claim after a traumatic event.
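The escalation trigger described above can be sketched as a simple phrase matcher. This is a minimal illustration, not a prescribed implementation; the phrase list and function name are assumptions for the example.

```python
# Minimal sketch of escalation-phrase detection for a chatbot.
# The phrases below are illustrative; a real bot would use a broader,
# regularly reviewed list.
ESCALATION_PHRASES = [
    "talk to a human",
    "speak to a person",
    "real person",
    "human agent",
]

def should_escalate(message: str) -> bool:
    """Return True if the user's message matches a known escalation phrase."""
    text = message.lower()
    return any(phrase in text for phrase in ESCALATION_PHRASES)

print(should_escalate("I need to talk to a human"))   # True -> hand off to an agent
print(should_escalate("What are your store hours?"))  # False -> bot keeps handling it
```

Even this crude check avoids the phone-tree trap: the moment a user asks for a person, the bot hands off instead of looping.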
The biggest issue is that chatbot technology is still highly limited. The vast majority of chatbots are fairly simple decision trees that work by identifying keywords. This means that they can return incorrect responses when people use words or phrases the bot isn’t able to decipher. The bots don't understand context, and they often do not record past responses, resulting in the bot "forgetting" what you said five minutes previously.
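A keyword-driven bot of the kind just described can be sketched in a few lines. The keywords, responses, and fallback message here are hypothetical examples; they simply show the mechanism and its limits.

```python
# Minimal sketch of a keyword-based chatbot (the common decision-tree style).
# Keywords and responses are illustrative assumptions.
RESPONSES = {
    "hours": "We are open 9am-5pm, Monday through Friday.",
    "billing": "I can help with billing. What is your account number?",
    "store": "Please share your ZIP code and I'll find the nearest store.",
}

FALLBACK = "Sorry, I didn't understand that. Type 'agent' to reach a human."

def reply(message: str) -> str:
    """Return the response for the first keyword found; otherwise fall back."""
    text = message.lower()
    for keyword, response in RESPONSES.items():
        if keyword in text:
            return response
    # No keyword matched: the bot has no real understanding to fall back on.
    return FALLBACK

print(reply("What are your hours?"))
print(reply("My cat is stuck in a tree"))  # unrecognized wording -> fallback
```

Note that `reply` is stateless: it keeps no record of earlier messages, which is exactly why such bots "forget" what you said five minutes ago.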
The second big issue is that smaller and newer companies may not have the data to code a chatbot that can cover more than very basic issues. For new companies, it might be best to hold off on building a chatbot and instead record conversations with live chat agents.
Self-learning chatbots do overcome this limitation, but they are still relatively rare and, of course, expensive to develop properly. For example, IBM Watson, perhaps the most famous deep learning system, charges a quarter of a cent for every API call, which is far beyond the reach of most smaller companies. Self-learning bots also still suffer from the "Tay problem": Microsoft launched a bot named Tay on Twitter to learn about human language, and within 24 hours trolls had trained it to praise Adolf Hitler.
For most companies today, decision-tree chatbots are the most feasible option. When deciding whether to use one, consider whether you have the right use cases and the data to support them. Chatbots work well when customers are making repetitive and fairly limited inquiries. For example, a chatbot may be able to direct you to the nearest store based on your location or complete your outstanding bill payment. Chatbots are also great for quickly gathering information that saves time once the contact is escalated to a human. Another good use is to hold a user's attention while the agent gets ready to respond, which makes response times feel shorter.
The most important thing is to keep chatbots in their place, provide an easy way to escalate to a human agent, and always make sure that it is easy to get out of the chatbot's interface. As artificial intelligence improves and comes down in price, chatbots may well be able to completely replace humans, but we are not at that point yet and may not be for some years to come. Read more about how AI-powered chatbots can be a faster way to engage. For more information on deploying chatbots or leveraging automation to enhance your customer service, contact infoedge today.