Editorial

When Contact Center AI Starts Working Against Agents

By Mark Speare
AI promised relief. Instead, many agents are juggling dashboards, prompts and metrics that quietly increase stress and erode judgment.

The Gist

  • Empower human judgment: Real-time guidance should be optional. Agents need the authority to override AI prompts without fear of penalty; professional judgment is an asset, not an overrideable setting.
  • Metrics that matter: Traditional KPIs such as average handle time can work at odds with harder goals like showing empathy and achieving high-quality resolutions. Focus instead on metrics that genuinely reflect meaningful outcomes.
  • Reduce vigilance labor: When agents monitor AI dashboards while engaging with customers, cognitive load skyrockets. Automation should remove low-value tasks, not add invisible oversight.
  • AI as partner, not full substitute: Effective AI integration is not about fewer tools, but smarter priorities: summarizing context, handling repetitive work, and freeing attention for human connection.

When AI first entered the contact center conversation, the promise was almost utopian. AI would reduce cognitive strain, eliminate repetitive tasks and give agents the mental space to do what humans do best: listen, empathize and resolve problems that do not fit neatly into scripts. It sounded like relief was finally on the way.

That promise, however, is colliding with reality.

According to Omdia's 2025 Digital CX Survey, 75% of North American contact center leaders now believe their AI investments may be increasing agent stress rather than reducing it. Agents themselves confirm the trend: 87% report high stress levels, and more than half describe symptoms consistent with chronic burnout, including sleep disruption and emotional exhaustion. 

AI is remarkably good at pattern recognition and probabilistic prediction, yet it cannot build trust or read hesitation, frustration or relief the way a human can.


When 'Efficiency' Becomes a Drag

The core issue is not artificial intelligence itself. The real issue is the careless way most organizations deploy it.

From what I've seen, some peer companies have taken what might be called a "layering" approach: adding AI tools on top of legacy workflows without redesigning the underlying processes. It may look forward-thinking, but in practice it often creates friction at multiple stages of the customer interaction.

Consider a familiar scenario. A customer spends 10 minutes interacting with a chatbot that cannot resolve their issue. They are asked to rephrase the same question multiple times before the issue is finally escalated to a human agent. By the time the call reaches a person, the customer is already frustrated. The agent, meanwhile, is left reconstructing the context from fragmented transcripts while conducting a live conversation.

Related Article: The CX Reckoning of 2025: Why Agent Experience Decided What Worked

The Hidden Cost of Real-Time Guidance

These systems are often framed as supportive tools: live prompts, tone analysis, suggested responses. In theory, they act as a safety net. In practice, they introduce what psychologists describe as vigilance labor. The agent is no longer focused solely on the customer. They are also monitoring the machine.

An experienced agent now listens, responds, watches sentiment indicators, tracks compliance prompts and adjusts phrasing to satisfy algorithmic expectations — all at once. The cognitive split is subtle but relentless.

The problem deepens when the same system feeding guidance also feeds performance dashboards tied to pay, promotion or disciplinary action. At that point, prompts stop being optional. They become implicit commands. Ignoring them feels risky, even when professional judgment suggests otherwise.

Effective AI integration does not mean less technology. It means different priorities. When it comes to real-time guidance, agents must retain a clear right to ignore or disable prompts without consequence. Professional judgment should be treated as an asset, not an overrideable setting.

AI Design Choice vs. Agent Reality

How well-intentioned AI deployments can quietly increase stress, distort priorities and undermine the human side of customer experience.

AI Design Choice | What Happens to Agents | What Leaders Should Rethink
Real-time guidance and live prompts | Agents split attention between customers and dashboards, increasing cognitive load and vigilance labor. | Make guidance optional, ignorable and consequence-free to preserve professional judgment.
Layering AI onto legacy workflows | Context reconstruction, repeated questions and frustrated customers land on already taxed agents. | Redesign workflows first, then apply AI to remove friction rather than compound it.
Efficiency-first KPIs (AHT, scripts, throughput) | Agents are told to show empathy but evaluated on speed and algorithmic tone scores. | Retire metrics that reward speed over resolution and focus on outcomes that reflect understanding.
AI-driven performance monitoring | Prompts feel mandatory when tied to pay, promotion or discipline, even when judgment disagrees. | Separate coaching tools from enforcement systems to avoid turning “assistance” into pressure.
AI positioned as a human replacement | Emotional labor increases as agents manage both customer frustration and machine expectations. | Use AI as a partner: summarize context, handle repetition and free humans for connection.

Contact Center Metrics That Undermine the Mission

Performance measurement is another fault line.

The reality is that many contact centers still employ legacy metrics such as average handle time, rigid script adherence and throughput volume. These measures were designed for efficiency, not for understanding. AI, meanwhile, is often sold as a way to increase empathy, improve resolution quality and personalize interactions.

Agents usually recognize this kind of inconsistency straight away. They are told to slow down and connect with customers, then quickly reminded to speed up when calls run longer than expected. They are asked to show empathy, yet evaluated against automated tone scores that leave little room for nuance. Understanding customer experience holistically requires moving beyond these narrow measures.


Some metrics, then, deserve a quiet retirement. Organizations must be prepared to move beyond measures that reward speed at the expense of resolution. Tracking contact center trends can help leaders identify which KPIs actually correlate with meaningful outcomes. Otherwise, technology does not clarify priorities; it merely makes the confusion more "efficient."


About the Author
Mark Speare

Mark Speare is a fintech professional with over 8 years of experience in B2B and B2C SaaS, customer success and trading technology. As Chief Client Officer at B2BROKER, a global fintech solutions provider for financial institutions, he focuses on enhancing the trading experience through client-centric solutions, scaling business processes, and building strong client relationships to maximise ROI.

Main image: Roman Milert | Adobe Stock