The Gist
- AI needs to meet a tougher standard. Research shows customers are less likely to forgive AI for making mistakes, especially in high-stress situations.
- Your AI investment should match your tolerance for failure. If AI is taking the place of employees, strive for similar (or better) outcomes.
- Contextual data enhances CX. Your data infrastructure matters — a lot. When AI has the context it needs to serve your customers, everybody wins.
When a Nor'easter led to thousands of cancelled flights earlier this year, customer service agents at one airline kept directing people to kiosks to rebook their travel. This should have saved the airline money and empowered customers, but the AI powering the kiosks couldn't help them and only created an even bigger mess.
Customers are willing to tolerate AI (if they even know it's AI), especially for routine issues. But when they're stressed out and dealing with high-stakes tasks — like trying to get home instead of being stuck in Terminal B for 10 hours — they're far less forgiving.
Table of Contents
- What Happens When AI Doesn't Deliver
- Humans vs. AI: The Empathy Gap
- How Customers React to AI Customer Service
- Your Investment in AI Should Be Determined By Your Tolerance
- Higher AI Standards Are Rooted in Contextual Data and Monitoring
- More Ways to Reduce the Risk
What Happens When AI Doesn't Deliver
When you replace the front door of your organization with an AI-powered kiosk or chatbot, that decision has a direct impact on your risk exposure, your brand reputation and your bottom line. "AI-powered customer service fails at four times the rate of other tasks," according to a Qualtrics report.
Sometimes a malfunction is simply annoying, but other times it has more serious repercussions. McDonald's shut down an AI test that let a customer order bacon on their ice cream. Air Canada had to honor a refund policy invented by their chatbot.
Consider Klarna, which laid off 1,200 employees and pushed AI-driven customer service. The company saw some initial efficiency gains but quickly found that people aren't always replaceable — at least not yet. "Sure, the AI could handle questions. But it couldn't handle nuance, refunds or loyalty." So Klarna went back to what worked and started hiring people again.
Humans vs. AI: The Empathy Gap
People aren't machines: research shows that fatigue, stress and other factors degrade our performance. AI, by contrast, doesn't get tired, stressed or bored, and it never has a bad day.
But customer service isn't just about productivity. It requires compassion, connection and empathy — yet only 42% of consumers surveyed expected an AI chatbot to understand their emotions. People understand that feelings are still inherently, fundamentally human.
You can improve customer experience by training AI to be more empathetic, but some customers may see this as inauthentic, which can lower trust. Simulated empathy carries a risk.
How Customers React to AI Customer Service
Editor’s note: Research shows customers hold AI to a different — often harsher — standard than human agents. Here’s what CX leaders need to understand.
| Customer Behavior Insight | What the Research Shows | Implication for CX Leaders |
|---|---|---|
| Three strikes and trust collapses | A University of Michigan study found that people stop trusting robot co-workers after three mistakes, and that no apologies, denials or explanations could repair the lost trust. | AI error tolerance is extremely low. Early failures can permanently damage customer trust. |
| Algorithmic aversion persists | Even when AI is proven to be more accurate, people are often more likely to accept recommendations from humans, especially for more subjective requests. | Accuracy alone won’t win adoption. Perception and context matter just as much. |
| Customers want thinking, not feeling | Research shows that efficiency comes at the expense of perceived customer orientation in scenarios requiring emotional intelligence rather than problem-solving. | AI performs best in logic-driven interactions — not emotional recovery moments. |
| AI lacks social capital | A survey of hospital employees showed people are willing to accept more mistakes from humans than from AI. Researchers theorized that AI isn't as likable as humans, reducing forgiveness. | AI starts with a trust deficit. It must prove itself faster — and fail less often. |
| Human-like AI raises expectations | Customers respond more negatively to AI errors when the AI is more human-like. | Anthropomorphism increases scrutiny. The more human AI seems, the less forgiving users become. |
| Customers behave less ethically | Studies show people may be more likely to cheat when they're working with AI, due to lower perceived moral cost. | AI interactions may introduce new fraud, abuse, or compliance risks. |
If a human agent makes a mistake, your customers might complain — but they're also (hopefully) likely to stay with your brand. AI doesn't always get the same window for forgiveness.
Your Investment in AI Should Be Determined By Your Tolerance
Is comparing AI to human agents the right standard, a double standard, or simply fear dressed up as governance? That's obviously your call — but regardless, you need to consider your tolerance level for failure when a human does a job versus when AI does it.
For example, if you're thinking that replacing one full-time employee with a kiosk will save money and keep your customers just as happy, then your tolerance for AI failure should be set at the same level you had for that employee. If you're setting the bar lower for AI, you need to reconsider your priorities — especially if your primary goal is improving the customer experience.
Higher AI Standards Are Rooted in Contextual Data and Monitoring
Companies keep adopting AI because of its potential benefits: Capgemini reports that AI could reduce the workload of 70% of customer service agents. But we expect more than productivity gains from AI.
The explainability demands placed on AI are unprecedented. Organizations typically don't require the same level of decision auditability from human agents as they do from AI agents. But here's the kicker: when people want explainability from AI, they're not asking a governance question — they're highlighting a trust deficit. And that's a data issue.
If you're investing in AI, you also need to invest in the underlying data quality, contextual data infrastructure and real-time monitoring needed to make it all work as promised.
Customer service often requires current, situational, customer-specific context. Consider the customer at the airport kiosk. That AI platform needs real-time inventory data, airline policy context and the customer's loyalty status to rebook them.
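To make the kiosk scenario concrete, here is a minimal sketch of that context assembly. All names and data here are hypothetical, not any airline's or vendor's actual API; the point is the design choice: if any required context source is stale or missing, the AI should hand off to a human rather than guess.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RebookingContext:
    """Bundle of real-time facts an AI rebooking agent needs."""
    open_seats: dict      # flight number -> seats available right now
    waiver_active: bool   # is the weather-waiver policy in effect?
    loyalty_tier: str     # customer's status, which affects priority

def build_context(customer_id: str, inventory: dict, policies: dict,
                  loyalty: dict) -> Optional[RebookingContext]:
    """Assemble context from three live sources.

    Returns None when any source is missing, so the kiosk can escalate
    to a human agent instead of answering from incomplete data.
    """
    tier = loyalty.get(customer_id)
    if tier is None or not inventory or "weather_waiver" not in policies:
        return None  # incomplete context -> route to a human
    return RebookingContext(
        open_seats={f: s for f, s in inventory.items() if s > 0},
        waiver_active=policies["weather_waiver"],
        loyalty_tier=tier,
    )

# Toy example: one cancelled flight, live inventory and policy feeds.
ctx = build_context(
    customer_id="C123",
    inventory={"UA101": 0, "UA205": 4},
    policies={"weather_waiver": True},
    loyalty={"C123": "gold"},
)
```

The refusal path is the part that matters: a confident rebooking built on stale inventory is exactly the Terminal B failure described above.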
More Ways to Reduce the Risk
What else can you do to implement AI while keeping your customer service scores up?
- Start with low-emotion situations. In high-emotion scenarios, AI's "human-like language triggers negative reactions," according to research. The same customers who are fine ordering burritos through an AI chatbot want to talk to a real person when they need a suit for their daughter's wedding.
- Focus on monitoring and visibility. If your chatbot hallucinates a policy or starts discriminating against customers, how will you know? Real-time visibility through endpoint monitoring lets you identify and resolve issues quickly, rather than waiting for mistakes to end up on social media or in court.
- Consider the cost of rehiring humans. In one survey, more than half of the companies that replaced people with AI admit it was the wrong decision. A hybrid human-AI model preserves human institutional knowledge and empathy while still benefiting from AI's efficiency.
- Improve the data that fuels the AI. Working on your data infrastructure isn't always glamorous (although it is to some of us), but it's a critical step before you spend millions on a new AI platform — and helps you avoid AI-washing.
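The monitoring point above can be sketched in a few lines. This is a hypothetical guardrail, not a specific product: before a chatbot reply goes out, check any policy claim it makes against the actual policy store, and log mismatches for review instead of waiting for them to surface on social media or in court.

```python
import re

# Ground truth pulled from the (hypothetical) policy store.
APPROVED_POLICIES = {"refund_window_days": 30}

def audit_reply(reply: str, log: list) -> bool:
    """Return False and log an alert if the reply cites a refund
    window that contradicts the approved policy."""
    match = re.search(r"(\d+)[- ]day refund", reply)
    if match and int(match.group(1)) != APPROVED_POLICIES["refund_window_days"]:
        log.append(f"policy mismatch: bot claimed {match.group(1)}-day refund")
        return False  # block or flag the reply before it reaches the customer
    return True

alerts = []
audit_reply("You qualify for our 90-day refund policy.", alerts)   # hallucinated window
audit_reply("Our standard 30-day refund window applies.", alerts)  # matches policy
```

A real deployment would check far more than one policy field, but the principle scales: validate the AI's claims against systems of record in real time, the same way you'd supervise a new human agent.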
You wouldn't leave a human agent completely unsupervised, without all the tools and support they need to serve customers — so why would you do the same for AI?