Short-circuiting a chatbot: Yelling “representative”

Customer care/service/support is a natural application domain for chatbots, where they can be used to replace or augment existing Interactive Voice Response (IVR) systems. Naturally, we must ask ourselves what we can learn from decades of experience with IVR systems, even as IVR technology has matured and embraced advances in natural language understanding and speech recognition.

Given the nascent state of conversational dialog management technology, chatbots are likely to face a problem common to most IVR systems: users yelling “representative” or “associate” or “agent” in an attempt to bypass the bot. At what point should the user be allowed to do this? We are talking about a situation where the user has not engaged with the bot enough to even give it a chance to solve their problem. The user just wants to talk to a live human. Why?

Because, in most cases, current IVR-type systems force the user to follow some pre-determined scripted path. Users know that, and they don’t even want to venture down that path. They want to “short-circuit” the bot and go straight to a live person. As IVR systems have improved, the navigation has become less rigid. Still, a user is likely to feel that they are better off talking to a human. Maybe they feel that their problem is unique and unlikely to be solved by an automated system. Perhaps they have had bad experiences with IVR systems in the past and just don’t want to bother trying the newer ones, even when the warm and friendly IVR system says “I am a new system that can converse in natural language. So, please tell me in a few words what you are trying to do.” “Uh, hmm, representative!, associate!, agent!!”.

Current IVR systems are designed to at least gather enough information from the user to route the call to the correct live agent. So the dialog is limited to determining “intent” for call-routing purposes. But even this can be bypassed if the user is sufficiently determined. Very few systems currently refuse to let the user continue unless they provide this information. “I see that you want to talk to an agent. But, to get you to the right representative, I need some basic information.” “Representative!, associate!, agent!!”. “Sorry, I didn’t understand that. Please tell me what you are calling about. You can say, for example, customer service, billing, tech support…” “Representative!, associate!, agent!!” At this point, most IVR systems will say “Alright, please hold for an agent” and transfer the user. Some systems will keep trying to get at least some information from the user. A few will just hang up on the user, or say “Please hang up and try again”.
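To make that escalation behavior concrete, here is a minimal sketch in Python of the kind of policy described above. The original post contains no code, so all names, keywords, and thresholds (AGENT_KEYWORDS, MAX_AGENT_DEMANDS, handle_turn, and so on) are hypothetical and not drawn from any real IVR product; the idea is simply that the system tolerates a few “agent” demands while still trying to capture a routing intent, then gives up and transfers.

```python
from typing import Optional

# Illustrative keywords and thresholds; not taken from any real IVR product.
AGENT_KEYWORDS = {"representative", "associate", "agent"}
MAX_AGENT_DEMANDS = 3  # how persistent the system is before giving up

def detect_intent(utterance: str) -> Optional[str]:
    """Crude stand-in for a real intent classifier used for call routing."""
    for intent in ("billing", "tech support", "customer service"):
        if intent in utterance.lower():
            return intent
    return None

def handle_turn(utterance: str, state: dict) -> str:
    """One dialog turn: either capture a routing intent or escalate to a human."""
    words = set(utterance.lower().replace("!", " ").replace(",", " ").split())
    if words & AGENT_KEYWORDS:
        state["agent_demands"] = state.get("agent_demands", 0) + 1
        if state["agent_demands"] >= MAX_AGENT_DEMANDS:
            state["action"] = "transfer"
            return "Alright, please hold for an agent."
        return ("I see that you want to talk to an agent. But, to get you to the "
                "right representative, I need some basic information. You can say, "
                "for example, customer service, billing, or tech support.")
    intent = detect_intent(utterance)
    if intent is not None:
        state["action"] = f"route:{intent}"
        return f"Okay, connecting you to {intent}."
    return "Sorry, I didn't understand that. Please tell me what you are calling about."
```

Feeding this sketch the turns “Representative!”, “agent!!”, “billing” would result in two polite pushbacks followed by routing to billing; a third straight demand for an agent would trigger the transfer instead.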

How do we solve this problem? First, users need to gain confidence that the chatbot can assist them much as a live agent would. For that to happen, they have to give the bot a chance in the first place. Bots often learn through feedback, so with sufficient exposure to users they will learn to do better. But how do we keep users from short-circuiting the bots without giving them a fair shake? This jump-starting process requires business/economic and policy incentives. If the number of live human agents is reduced (a common objective of call center automation), users will find themselves in long queues waiting on hold for an agent; that is a strong incentive to return to the chatbot and give it a chance. Another option is to charge for live human assistance, passing some of the higher cost of human agents on to the users. A third option is to limit the hours of operation of the support agents and emphasize that the chatbots are available 24/7.

In employee-to-employee (E2E) applications within an enterprise, users seeking support can be subjected to policy requirements that mandate using the chatbot before they are allowed to contact a live human. Examples of such applications are support for field technicians and technical/administrative support for employees. Experienced field technicians who call in seeking support from live agents particularly abhor rigid, rules-based IVR-type systems. They often know exactly what they want, and hence seek the flexibility to perform the desired action (often involving a back-end enterprise system) without having to go through a detailed menu featuring a checklist of routine items. Hence, they try to bypass rigid IVR-like systems that follow a pre-determined script driven by rules-based enterprise systems. A chatbot with a sophisticated conversational AI-based dialog system should allow such experienced users the flexibility to express what they need in a simple manner, so the dialog can be kept short and pleasant. A policy mandate (from supervisors and management) to try the chatbot first enables the requisite confidence-building to happen.

In customer-facing applications, where the chatbot is exposed to customers external to the enterprise, the jump-starting process is trickier. The reputation of the business is at stake; unhappy customers can be very costly to any business, especially in this age of social media. There may also be legal/regulatory restrictions. One solution is to trial these chatbots with a carefully selected set of customers who can then be counted on to virally spread the positive experience. Today, businesses of all stripes seem to be rushing to deploy customer-facing chatbots, seemingly without careful consideration of how this new support channel ties in with existing customer care channels, including IVR. Many of today’s bots are simple transactional systems with very limited dialog capability. As these applications mature, however, businesses will find that fragile dialog systems break the trust that customers place in chatbots. This, in turn, compromises the ability of businesses to let chatbots learn through the powerful feedback mechanisms that machine learning and artificial intelligence technologies enable.

One solution to the short-circuiting problem is to allow the chatbot itself to decide when to gracefully hand off the conversation to a live agent. Chatbots have an edge over existing IVR systems in that they feature NLP and AI algorithms that can assess the user’s sentiment in addition to objective metrics on the performance of the chat. The chatbot can make a case-by-case judgment on how persistent it should be before bringing a live agent on the line. Leveraging information about the user, based on the history of previous calls as well as other profile information, is valuable in this context. Being “stateful”, in the sense of keeping track of recent transactions, is one way to gain the user’s trust. “I see that you called in earlier this morning about a problem with your service. Are you still experiencing the same issue?” is a reassuring way to begin a conversation with a customer who has been calling repeatedly about a problem.
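As an illustration of that kind of case-by-case judgment, the sketch below combines a sentiment score, the number of failed turns, explicit demands for a human, and whether the caller has a recent open issue to decide when to hand off. It is a hypothetical heuristic under assumed names and thresholds (DialogSignals, should_hand_off, the -0.5 sentiment cutoff), not the implementation of any particular platform.

```python
from dataclasses import dataclass

@dataclass
class DialogSignals:
    sentiment: float         # -1.0 (very negative) .. 1.0 (very positive), from an NLP model
    failed_turns: int        # turns where the bot could not act on the user's request
    agent_demands: int       # explicit requests for a human
    recent_open_issue: bool  # from call history / profile lookup

def should_hand_off(s: DialogSignals) -> bool:
    """Heuristic hand-off policy; all thresholds are illustrative assumptions."""
    if s.agent_demands >= 2:
        return True                      # user is clearly determined to reach a human
    if s.sentiment < -0.5 and s.failed_turns >= 1:
        return True                      # frustrated user and the bot is not helping
    if s.recent_open_issue and s.failed_turns >= 2:
        return True                      # repeat caller; do not make them re-explain yet again
    return s.failed_turns >= 3           # general fallback after repeated failures

def greeting(recent_open_issue: bool) -> str:
    """Stateful opening line, in the spirit of the example above."""
    if recent_open_issue:
        return ("I see that you called in earlier this morning about a problem with "
                "your service. Are you still experiencing the same issue?")
    return "Hi, how can I help you today?"
```

The design point is that the hand-off decision is made from observable signals rather than a fixed number of menu retries, which is exactly the flexibility existing IVR trees lack.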

A careful review of human factors and human-computer interfaces, as they apply to existing IVRs, is necessary before full-scale deployment of dialog chatbots that feature truly conversational AI. Otherwise, we are likely to repeat the mistakes made with IVRs. Many conversational flows for chatbots in enterprise use cases involve complex process flows that resemble a multi-stage IVR tree. Even if the chatbot is capable of taking the dialog to completion, the user is unlikely to stick around till the bitter end. Why would they? It is easier to short-circuit the bot by yelling “Representative!, associate!, agent!!”.
