Editorial

Before You Scale AI in Customer Experience, Fix These 5 Things

By Ricardo Saltz Gulko
Data, governance, workflows, talent, measurement — miss one, and your AI becomes a liability.

The Gist

  • AI failure is a readiness problem. Most CX AI initiatives underperform not because of weak technology, but because organizations lack the data, governance, workflows, and talent needed to support it.
  • The gap between urgency and capability is widening. Nearly all companies feel pressure to deploy AI, but only a small minority have the structural foundations to generate real value.
  • Winners build before they scale. The companies seeing results redesign workflows, enforce governance, and train teams first—turning AI into a compounding advantage rather than a risk.

There is a pattern repeating itself across enterprise boardrooms right now. AI budgets are approved. Pilots are launched. Vendors are selected. And yet, for the great majority of companies deploying AI in customer experience, the results are not arriving — or worse, they are actively damaging the customer relationships they were meant to improve.

McKinsey's 2025 global survey found that only 39% of organizations report any measurable EBIT impact from AI, and most say it accounts for less than 5% of total earnings. BCG's parallel research confirms the same divide: only 5% of companies globally achieve AI value at scale, while 60% report no material returns. Gartner projects 30% of generative AI projects will be abandoned after proof-of-concept, citing unclear business value, weak data, and escalating costs.

This is not a technology failure. As I have noted before, 74% of AI programs fail, and the cause is organizational: it is a readiness failure. The question in 2026 is no longer whether to deploy AI in CX — it is whether your organization has built the conditions that make it trustworthy and sustainable before you scale. This article gives you the diagnostic to find out.


Part 1: The Readiness Gap Is Wider Than Most Boards Realize

The Urgency-Readiness Paradox

Cisco's 2024 AI Readiness Index, drawing on 7,985 senior leaders across 30 markets, captures the tension precisely. 98% of organizations say urgency to deploy AI has increased, yet only 13% are fully ready to capture AI's potential — down from 14% the year before. Nearly 85% believe they have fewer than 18 months to demonstrate results — a self-imposed pressure driving deployment decisions ahead of readiness ones.

What the 13% Do Differently

Cisco's "Pacesetters" are structurally different, not just technically ahead. 99% have a well-defined AI strategy versus 58% overall. 76% have fully centralized data versus 19%. 84% have end-to-end governance controls with continuous monitoring versus 24%. They are not ahead because they moved faster. They are ahead because they built the right foundations first.

The CX Function Carries the Highest Exposure

Gartner, for instance, found that 85% of customer service leaders planned to pilot customer-facing GenAI in 2025, with most facing direct executive pressure. Yet 64% of customers would prefer companies did not use AI in service at all, and 53% would consider switching to a competitor that doesn't. Deploying into an unready CX function is no longer just a budget risk. It is an active churn risk.

Related Article: AI Customer Service Splits in 2: Bots Handle Volume, Humans Handle Reality

Part 2: What Enterprise Failures Actually Teach Us

Three Cases, One Shared Root Cause

Klarna projected $40 million in savings from its AI assistant in early 2024. By May 2025, its CEO acknowledged lower-quality customer outcomes and the company was rehiring human agents. What failed was not the technology — it was the absence of a quality framework holding AI to the same resolution standards as human service.

Air Canada's chatbot invented a bereavement fare policy that did not exist. The BC Civil Resolution Tribunal held the airline fully liable, rejecting the defence that the chatbot was a separate legal entity. McDonald's ended its three-year IBM AI drive-thru pilot in June 2024 after the system failed under real-world edge cases.

The Pattern Behind Every Failure

In every case the root cause was identical: confusing having an AI tool with being AI-ready. BCG's 10-20-70 rule explains why. Only 10% of AI value comes from the algorithm, 20% from data and technology, and 70% from redesigned processes and people. Most failed deployments invest in the 10% and skip the 70%.

Part 3: The Five Dimensions of CX AI Readiness

Run this as a self-assessment with your CX, data, legal, IT and frontline leadership present. Where you find genuine gaps, treat them as stop signals — not items to fix in parallel with deployment.

Dimension 1 — Data Fitness for Your Specific Use Case

Cisco found 80% of organizations report inconsistencies in data pre-processing for AI projects — barely changed in two years. The question is not whether your data is clean. It is whether it is integrated across CRM, service records, knowledge base, product telemetry and complaint history — and representative of your full customer base, not just the easiest segment. SAP has made this a non-negotiable prerequisite for its Business AI suite, requiring clean connected master data before any customer-facing AI capability is activated.

Practical action: Map every data source the AI needs. Fix gaps before the pilot, not during it.

Dimension 2 — Governance: Who Owns Every Output Customers See?

Only 31% of organizations have comprehensive AI policies, and just 24% have proper controls over AI-agent actions. In CX, governance means one named executive who can answer — at any moment — what the AI said, what data it used, and what the escalation path is if it was wrong. Air Canada made this a legal reality, not merely a best practice. Oracle's Fusion Cloud CX embeds AI actions within defined guardrails, requires human confirmation for high-impact outputs and maintains full audit trails — treating governance as a product feature, not a compliance layer, which improves the results customers actually see.

Practical action: Name the accountable executive. Run adversarial testing before go-live, not after.

Dimension 3 — Workflow Redesign, Not Bolt-on Automation

McKinsey found the top 6% of organizations generating real EBIT impact from AI are three times more likely to have redesigned workflows than to have layered AI onto existing ones. BCG reinforces it: only 26% of companies have moved beyond proof-of-concept to tangible value, with people and process as the main obstacle. Ericsson rebuilt its escalation and resolution workflows before deploying AI in service operations — defining where AI aggregates context, where human judgment takes over and where handoffs transfer full interaction history. The AI became part of a redesigned system, not an addition to an unchanged one.

Practical action: Map your highest-volume CX journey. Redesign every step where AI changes the input, output, or handoff before selecting any technology.

Dimension 4 — Talent: The Supervision Capability Most Teams Lack

AI in CX creates a new requirement for human judgment: the ability to supervise, calibrate and override AI outputs in real time. Only 31% of organizations report their talent is at high AI-readiness, and among Cisco's Pacesetters 75% of employees operate at AI-proficiency levels versus the cross-company average of just 16%. BCG found employee adoption improves sharply — from 15% to 55% positivity — when leadership provides structured role-specific training rather than generic AI literacy. Samsung Electronics paired its AI-powered intent detection with agent training programmes specifically designed around the new supervision responsibilities the technology creates.

Practical action: Build explicit override authority into operating procedures — the permission and protocol for agents to correct AI outputs without managerial approval.

Related Article: Transforming AI in Customer Experience With Human Insight

Dimension 5 — Value Measurement Beyond Deflection

Deflection is the most reported AI CX metric and the most misleading. A deflected contact that leaves the customer confused is not a success — it is a churn signal. McKinsey found only 1% of executives describe their GenAI rollouts as mature from a measurement standpoint, even among those claiming positive ROI. Deloitte found that three-quarters of advanced GenAI initiatives meet or exceed ROI expectations — but only among organizations that defined outcome metrics before deployment. Track leading indicators — hallucination rate, CSAT on AI-handled tickets, escalation quality — and lagging indicators: revenue retention, NPS by cohort, customer lifetime value, cost-to-serve.

Practical action: Define the specific customer outcome you want to improve and build its measurement baseline before the AI goes live.
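To make the measurement point concrete, here is a minimal sketch of what a pre-deployment baseline might look like in code. The ticket fields (`csat`, `escalated`, `reopened`) and the reopen-rate proxy for "deflected but unresolved" are illustrative assumptions, not a prescribed schema:

```python
# Illustrative sketch: build a measurement baseline from resolved support
# tickets BEFORE the AI goes live, so post-launch numbers can be compared
# against something. Field names are hypothetical.
def baseline_metrics(tickets: list[dict]) -> dict:
    total = len(tickets)
    if total == 0:
        raise ValueError("Need at least one ticket to build a baseline")
    # CSAT may be missing on some tickets; average only the scored ones.
    csat_scores = [t["csat"] for t in tickets if t.get("csat") is not None]
    escalated = sum(1 for t in tickets if t["escalated"])
    reopened = sum(1 for t in tickets if t["reopened"])
    return {
        "avg_csat": sum(csat_scores) / len(csat_scores),
        "escalation_rate": escalated / total,
        # Reopened tickets approximate "deflected but left unresolved" —
        # the churn signal that a deflection-only metric hides.
        "reopen_rate": reopened / total,
    }

tickets = [
    {"csat": 5, "escalated": False, "reopened": False},
    {"csat": 3, "escalated": True,  "reopened": True},
    {"csat": 4, "escalated": False, "reopened": False},
]
print(baseline_metrics(tickets))
```

The same function can then be run on AI-handled tickets after go-live, turning "is the AI working?" into a like-for-like comparison rather than a deflection count.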


CX AI Readiness: The Five-Dimension Self-Assessment

Run this before deployment. Gaps are stop signs — not parallel workstreams.

| Dimension | What It Really Means | Key Risk Signal | Practical Action |
| --- | --- | --- | --- |
| Data fitness | Integrated, representative data across CRM, service, knowledge, telemetry and complaints | Data is "clean" but fragmented or skewed to easy segments | Map all required data sources and fix gaps before pilots |
| Governance | Clear ownership of every AI output customers see, with auditability and escalation paths | No single executive can explain what the AI said or why | Name an accountable owner and run adversarial testing pre-launch |
| Workflow redesign | AI embedded into redesigned journeys, not layered onto existing processes | AI added without changing inputs, handoffs or decision points | Redesign high-volume journeys before selecting technology |
| Talent readiness | Teams able to supervise, calibrate and override AI in real time | Agents lack authority or training to challenge AI outputs | Build explicit override protocols and role-specific training |
| Value measurement | Outcome-based metrics tied to customer impact, not just efficiency | Overreliance on deflection as the success metric | Define outcome metrics and a baseline before deployment |
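The self-assessment above can be run as a simple scoring exercise. The sketch below is a hypothetical illustration, not a validated instrument: the 0-5 scale and the stop-signal threshold of 3 are assumptions you would calibrate to your own organization.

```python
# Hypothetical readiness scorer for the five-dimension self-assessment.
# Dimension names mirror the table above; scale and threshold are assumed.
DIMENSIONS = [
    "data_fitness",
    "governance",
    "workflow_redesign",
    "talent_readiness",
    "value_measurement",
]

def readiness_report(scores: dict[str, int], threshold: int = 3) -> dict:
    """Score each dimension 0-5; anything below `threshold` is a stop signal."""
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"Unscored dimensions: {missing}")
    gaps = [d for d in DIMENSIONS if scores[d] < threshold]
    return {
        # Per the article: a gap is a stop signal, not a parallel workstream.
        "ready_to_scale": not gaps,
        "stop_signals": gaps,
    }

report = readiness_report({
    "data_fitness": 4,
    "governance": 2,        # no named accountable executive yet
    "workflow_redesign": 3,
    "talent_readiness": 2,  # agents lack override authority
    "value_measurement": 4,
})
print(report)
```

The design choice worth noting is that the function returns the list of gaps rather than an averaged score: per the article's logic, a strong average cannot compensate for a single failed dimension.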

Part 4: What Readiness Enables — The Compounding Advantage

Accenture's 2024 research found that companies with genuinely AI-led processes — redesigned workflows, governance, talent and measurement as an integrated system — achieve 2.5 times higher revenue growth and 2.4 times greater productivity than peers. Salesforce's own "Customer Zero Agentforce" deployment shows what this produces: after one year, its service agent handled 1.5 million support requests, its SDR agent generated $1.7 million in new pipeline, and Slack agents returned 500,000 hours to internal teams. Salesforce openly states that early versions required significant workflow redesign before results compounded — which is precisely the point.

Gartner projects that by 2029, agentic AI will autonomously resolve 80% of common customer service issues. The organizations positioned for that shift are not the ones deploying fastest today. They are the ones building the governance, data, talent and redesign capability that makes autonomous AI trustworthy at scale.


Part 5: The Practical Path — Sequence Before You Scale

BCG and McKinsey consistently identify the same sequence among high performers. It is not a slow path — it is the faster one, because it avoids rebuilding under live customer pressure.

  1. Define one specific customer outcome and build its measurement baseline before any technology decision.

  2. Fix data foundations for that use case — connectivity, representativeness, governance — before attempting the next one.

  3. Name the governance owner and define adversarial testing protocols before writing a procurement brief.

  4. Redesign the workflows the AI will touch — including escalation and handoff design — before selecting technology.

  5. Build frontline supervision capability before scaling beyond a controlled pilot.

Ask these six questions in your next leadership meeting:

  • Can we name the executive accountable for every AI output a customer sees?
  • Can we experiment before going live with what we believe will help our customers?
  • Do we have adversarial testing protocols before go-live?
  • Are we measuring resolution quality or only deflection?
  • Have we redesigned the workflows AI will touch?
  • Do our frontline teams (humans) have the skills to supervise and override AI outputs if something goes wrong?

If the answers are not clear, you have your readiness score. Every one of these is fixable — and fixing them first is what separates the 13% from the rest.

Conclusion: Readiness Is the Strategic Decision

The race in CX AI is not about who deploys first. It is about who builds the foundations that make AI safe, trustworthy and genuinely value-generating — for customers and for the enterprise. The 13% Cisco classifies as fully ready are not simply ahead on technology. They are ahead on the organizational conditions that make technology work.

For CX leaders, the readiness audit is not a delay. It is the most direct route to sustainable competitive advantage. Run it honestly. Fix what you find. Then scale with the confidence that your foundations will hold — because in enterprise CX, the pressure will come. The only question is whether you will be ready for it.


About the Author
Ricardo Saltz Gulko

Ricardo Saltz Gulko is the Managing Director of Eglobalis and co-founder of the European Customer Experience Organization. He is a global strategist, thought leader and customer experience practitioner, creating perception-driven design analyses for Samsung and his clients, with a focus on customer adoption, experience and growth.

Main image: Monkey Business | Adobe Stock