Chatbots, AI, and PayPal

To start with, this isn’t really about PayPal; they just gave me a great example to share about how to employ AI to the least of its abilities. Due to a problem with PayPal, I had to get support that the chatbot was less qualified to handle than a pig in a turkey bacon factory.
 
With or without AI, customer service chatbots are ubiquitous, and they are typically deployed to avoid human interaction with customers at the expense of any semblance of quality customer service.
 
Costco is a huge exception, and I’ll get to that soon enough. 
 
When I was finally able to navigate the system and reach a real human, I let the agent know that the error I was trying to have corrected wasn’t what pissed me off; the chatbot that decimated customer service was. The reply is the basis for this blog.
 
The reply was “The chatbot is still learning and it will get better.” WRONG!!! They aren’t looking at the true nature of training an AI chatbot, or improving a conventional customer disservice chatbot. The decision to put up a Berlin Wall between customers and customer service is purely a design decision.
 
Enter Costco. My experience with Costco’s customer service chatbot was refreshing, to say the least. In fact, most companies employing customer service chatbots, AI or otherwise, should look to Costco to see how it’s done. By that, I do not mean how to train an AI chatbot, but rather how to educate those who ineptly deploy chatbots. There will always be questions that the chatbot can’t handle. But rather than running customers around in circles, ensuring the worst possible support experience, Costco uses the three-strikes-and-you’re-in method. That means that if the chatbot can’t help you after three tries, you are immediately transferred to a human. Costco didn’t need artificial intelligence to figure out how to do it right; they used organic, authentic intelligence to design a quality customer service chatbot. In doing so, they created the ability to more effectively improve the training of an AI-based chatbot.
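The three-strikes-and-you’re-in rule is simple enough to sketch in a few lines of code. This is purely an illustrative sketch, not Costco’s actual implementation; the question stream, the helpfulness signal, and the function names are all my own hypothetical placeholders.

```python
MAX_ATTEMPTS = 3  # three strikes and you're in


def handle_customer(question_stream):
    """Escalate to a human after MAX_ATTEMPTS consecutive failed bot answers.

    `question_stream` yields (question, was_helpful) pairs; in a real
    system, `was_helpful` would come from the customer's feedback.
    Returns "human" if the conversation escalated, else "bot".
    """
    strikes = 0
    for question, was_helpful in question_stream:
        if was_helpful:
            strikes = 0          # a helpful answer resets the count
        else:
            strikes += 1
            if strikes >= MAX_ATTEMPTS:
                return "human"   # immediately transfer to a person
    return "bot"
```

The whole design decision fits in one counter and one threshold, which is rather the point: no artificial intelligence required.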
 
When, after three tries, the chatbot proves useless, it’s time to actually analyze the problem in order to obtain quality data for use in training the AI chatbot. You use the foundation of knowledge and wisdom; you ask “What was the customer trying to accomplish?” “Why couldn’t the customer find the answer?” and “How do I fix this?”
 
What? What was the customer trying to accomplish?
 
Why? Why couldn’t the customer resolve the issue using the chatbot? There can be a variety of reasons for this.
 
  • Terminology. Did you use terms that only an industry insider would understand? Did you use ambiguous terminology?  Was the terminology flat out wrong? 
 
  • Intuitiveness and complexity. Aside from the ambiguity inappropriate terminology can create, how intuitive is the interface? The correct answer may be provided, but an unintuitive interface may have effectively concealed it. Typically, this is a problem when a chatbot directs the customer to a webpage that should contain the answer.
 
  • Bugs? Maybe all of the information was available, but for some reason, such as a simple logic error, the desired answer was not provided.
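Those three failure categories also suggest what to log when a conversation escalates, since they are the raw material for the quality training data discussed below. Here is a minimal, hypothetical sketch of such a log record; the class name, fields, and example values are my own inventions for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical failure categories drawn from the list above.
CATEGORIES = ("terminology", "interface", "bug")


@dataclass
class EscalationRecord:
    """One failed chatbot conversation, tagged for later analysis."""
    customer_goal: str       # What was the customer trying to accomplish?
    failure_category: str    # Why couldn't the chatbot resolve it?
    transcript: list = field(default_factory=list)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def __post_init__(self):
        # Reject tags outside the agreed taxonomy so the data stays clean.
        if self.failure_category not in CATEGORIES:
            raise ValueError(f"unknown category: {self.failure_category}")


record = EscalationRecord(
    customer_goal="dispute a duplicate charge",
    failure_category="terminology",
    transcript=["Where is my chargeback?", "I don't understand the question."],
)
```

Forcing every escalated conversation into one of a few agreed categories is exactly the kind of discipline that keeps garbage out of the training set.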
 
To create an AI chatbot you need to first know how to create a quality old-school chatbot. If you can’t do that, then you’ll end up with a chatbot that understands bacon varietals, but not a damned thing your customer asked for.
 
Once you know how to create a chatbot, which includes a conscious effort to minimize customer frustration, then you’re ready to start training your AI chatbot, and you’ll need some quality data.
 
Therein lies the beauty of Costco’s system. If Costco is training an AI chatbot, it’s invisible to the customer because the system is designed in a manner that allows for quality training data collection while drastically minimizing any inconvenience.
 
After using Costco’s customer service chatbot, I don’t mind getting the chatbot first. If it helps, great. If not, then I know it won’t be an exercise in infuriating futility.
 
When it comes to training an AI system, the adage “garbage in, garbage out” is an immutable law of a silicon universe. This applies to both the quantity of data and the quality of data. If you feed a well-designed AI system garbage data, it will teach itself bad habits.  
 
So, is Costco’s three-strikes-and-you’re-in system using AI? Who cares? If they are not, then they figured out customer service by using organic, authentic intelligence to enhance the customer experience. If they are, they insulate the customer from the ignorant (poorly trained) chatbot.
 
The fundamentals of AI are constants. You need the right data, and enough data to train the system. The trick is in understanding what data you need, collecting it, and developing a deployment strategy that protects against the harm that a poorly designed and sadistic chatbot inflicts.
 
Frankly, I suspect Costco is more interested in using AI for applications that aren’t as simple as three-strikes-and-you’re-in.
 
If you’re going to inflict chatbots on customers then remember the three Qs.
 
Quality data acquisition = know what data you need to collect and do it.
Quality data analysis = understand what it means.
Quality deployment and training = train and deploy in the least destructive manner possible.
 
Woah! Did you think I was talking about AI when I shared the three Qs? Until you can apply the three Qs to a standard chatbot, you’re not ready to begin to think about employing an AI chatbot. 
 
To PayPal and all of the others using equally bad quality chatbots, learn the three Qs, practice them, and then feel free to deploy a customer service centric chatbot. 
 
Perhaps I’m a bit of a dinosaur. I wrote this blog without the use of ChatGPT or any other AI system.
 
Randy Abrams
Senior Security Analyst Emeritus
SecureIQLab