To start with, this isn't really about PayPal; they just gave me a great example of how to employ AI to the least of its abilities. Due to a problem with PayPal, I had to seek support that the chatbot was less qualified to handle than a pig in a turkey bacon factory.
With or without AI, customer service chatbots are ubiquitous, and they are typically deployed to avoid human interaction with customers, at the expense of any quality of service.
Costco is a huge exception, and I’ll get to that soon enough.
When I finally managed to navigate the system and reach a real human, I let the agent know that the error I was trying to have corrected wasn't what pissed me off; the chatbot that decimated customer service was. The reply is the basis for this blog.
The reply was, "The chatbot is still learning, and it will get better." WRONG!!! That answer misses the true nature of training an AI chatbot, or of improving a conventional customer-disservice chatbot. Putting up a Berlin Wall between customers and customer service is purely a design decision.
Enter Costco. My experience with Costco's customer service chatbot was refreshing, to say the least. In fact, most companies employing customer service chatbots, AI or otherwise, should look to Costco to see how it's done. By that, I do not mean how to train an AI chatbot, but rather how to educate those who ineptly deploy chatbots. There will always be questions the chatbot can't handle. But rather than running customers around in circles, ensuring the worst possible support experience, Costco uses the three-strikes-and-you're-in method: if the chatbot can't help you after three tries, you are immediately transferred to a human. Costco didn't need artificial intelligence to figure out how to do it right; they used organic, authentic intelligence to design a quality customer service chatbot. In doing so, they also created the ability to improve the training of an AI-based chatbot far more effectively.
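Costco hasn't published its implementation, of course, but the escalation rule itself is simple enough to sketch. Here's a minimal version in Python; the four callables are hypothetical stand-ins for whatever a real support platform provides, because the point is the rule, not the plumbing:

```python
MAX_ATTEMPTS = 3  # three strikes and you're in

def run_support_session(get_query, chatbot_answer, is_resolved, transfer_to_human):
    """Hand the customer to a human after MAX_ATTEMPTS failed chatbot answers.

    All four arguments are hypothetical placeholders:
      get_query()            -> the customer's next question
      chatbot_answer(q)      -> the chatbot's reply to q
      is_resolved(q, a)      -> did the customer confirm the answer helped?
      transfer_to_human(log) -> escalate, carrying the failed transcript along
    """
    failed_attempts = []
    for attempt in range(1, MAX_ATTEMPTS + 1):
        query = get_query()
        answer = chatbot_answer(query)
        if is_resolved(query, answer):
            return answer  # the chatbot handled it
        failed_attempts.append((query, answer))  # a strike; keep it for analysis
    # Three strikes: stop running the customer in circles.
    return transfer_to_human(failed_attempts)
```

Keeping the failed attempts around is the key design detail: those transcripts are exactly the raw material the analysis below needs.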
When, after three tries, the chatbot proves useless, it's time to actually analyze the problem in order to obtain quality data for training the AI chatbot. You fall back on the foundation of knowledge and wisdom; you ask, "What was the customer trying to accomplish?", "Why couldn't the customer find the answer?", and "How do I fix this?"
What? What was the customer trying to accomplish?
Why? Why couldn’t the customer resolve the issue using the chatbot? There can be a variety of reasons for this.
- Terminology. Did you use terms that only an industry insider would understand? Did you use ambiguous terminology? Was the terminology flat-out wrong?
- Intuitiveness and complexity. Aside from the ambiguity that inappropriate terminology can create, how intuitive is the interface? The correct answer may be provided, but an unintuitive interface may effectively conceal it. Typically, this is a problem when a chatbot directs the customer to a webpage that should contain the answer.
- Bugs. Maybe all of the information was available, but for some reason, such as a simple logic error, the desired answer was not provided.
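To make those three questions actionable, it helps to capture each escalated session as structured data the human agent can annotate. The sketch below mirrors the categories above; the `FailureReason` and `EscalationRecord` names are hypothetical, one possible schema rather than any vendor's actual one:

```python
from dataclasses import dataclass
from enum import Enum, auto

class FailureReason(Enum):
    """Why the chatbot failed, per the categories above."""
    TERMINOLOGY = auto()    # insider jargon, ambiguous, or flat-out wrong terms
    INTUITIVENESS = auto()  # answer existed, but the interface concealed it
    BUG = auto()            # information was available; a logic error withheld it

@dataclass
class EscalationRecord:
    """One three-strikes session, annotated by the human agent afterward."""
    customer_goal: str                       # What was the customer trying to accomplish?
    failed_attempts: list[tuple[str, str]]   # (query, chatbot answer) pairs from the session
    reason: FailureReason                    # Why couldn't the chatbot resolve it?
    fix: str                                 # How do I fix this?

# Example: the agent files the record after resolving the issue.
record = EscalationRecord(
    customer_goal="Reverse a duplicate charge",
    failed_attempts=[("I was charged twice", "See our fees page")],
    reason=FailureReason.TERMINOLOGY,
    fix="Map 'charged twice' to the duplicate-transaction workflow",
)
```

A pile of records like this is the "quality data" in question: each one ties a real customer goal to a diagnosed failure and a concrete fix, which is exactly what retraining, or just redesigning, the chatbot requires.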
