Editorial

Generative AI Is Moving Faster Than Customer Data Privacy Readiness

By Jonathan Moran
On Data Privacy Day, new research shows 62% of leaders cite privacy as a top GenAI concern, yet only 40% are investing in the governance needed to make AI trustworthy.

The Gist

  • GenAI has collapsed the public–private data divide. Leaders increasingly trust GenAI systems built on opaque, semi-public data sources, even as consumers expect public data to be governed with the same rigor as private information.
  • Static consent models no longer work. GenAI use cases—from personalization to agentic decision-making—require dynamic, purpose-bound and revocable consent that evolves as models and data usage change.
  • Trust now hinges on governance, not promises. Transparency, identity protection and rights-based regulation are becoming baseline expectations, making AI governance maturity a competitive differentiator rather than a compliance afterthought.

In light of today's Data Privacy Day (Jan. 28), something is unmistakably clear: generative AI is rewriting expectations around how organizations collect, use and safeguard data. But while public awareness is rising, organizational readiness has not kept pace.

A new study, "The IDC Data and AI Impact Report: The Trust Imperative," found that data privacy (62%) and transparency (57%) rank among the highest GenAI‑related concerns, yet only 40% of organizations are investing in the governance, explainability and ethical safeguards required to make AI systems trustworthy.

So, what should marketers be focusing on right now?

GenAI Has Blurred the Line Between Public and Private Data

One of the most striking findings from the research is the rapid collapse of the public–private data distinction: leaders overwhelmingly trust GenAI systems more than traditional AI—even though GenAI models ingest vast amounts of publicly sourced, user-generated or semi-public data whose origins and consent pathways are murky. Ironically, the models that leaders trust most (GenAI, and increasingly Agentic AI) are also those with the least transparent provenance and the highest risk of misuse.

Consumers now expect clear disclosure about where data comes from and how it's used. Organizations must therefore begin treating public data with the same rigor, provenance checks and governance they already apply to private data.
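As a rough sketch of what "the same rigor for public data" could mean in practice, an ingestion pipeline might refuse any source, public or private, that lacks documented origin, license terms and a consent pathway. The field and function names below are illustrative assumptions, not part of any standard or vendor API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DataSource:
    """Metadata a pipeline could require before accepting any data source."""
    name: str
    origin: str                     # e.g. "public web", "first-party CRM"
    license_terms: Optional[str]    # documented license or terms of use
    consent_pathway: Optional[str]  # how consent was obtained, if applicable

def passes_provenance_check(src: DataSource) -> bool:
    """Hold public sources to the same bar as private ones:
    no documented origin, license, or consent pathway means no ingestion."""
    return bool(src.origin and src.license_terms and src.consent_pathway)

scraped = DataSource("forum-dump", "public web", None, None)
crm = DataSource("crm-optins", "first-party CRM", "customer agreement", "explicit opt-in")

assert not passes_provenance_check(scraped)  # public data fails the same bar
assert passes_provenance_check(crm)
```

The point of the sketch is that "publicly available" is not a provenance check in itself; a scraped source with murky consent pathways simply never enters the pipeline.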

Related Article: Inside the Privacy-First Approach to the Personalized Customer Experience

Consent Can No Longer Be Static

"Consent" in marketing as traditionally understood—pre-checked boxes and broad one-time permissions—is incompatible with modern AI systems. With GenAI, however, the same data point may inform personalization, model fine-tuning, synthetic training data generation or agentic decision flows. This makes static consent obsolete.

A good rule of thumb is dynamic consent orchestration, where permissions are:

  • Contextual – tied to specific use cases
  • Purpose‑bound – not blanket authorizations
  • Revocable in real time – enabling users to retract consent as models evolve.

This shift moves consent from a compliance artifact to a continuously managed governance process.
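To make the three properties above concrete, one minimal sketch is a ledger keyed by (user, purpose): grants are purpose-bound rather than blanket, and revocation takes effect on the next permission check. The class and method names here are hypothetical, not any vendor's API:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Dict, Optional, Tuple

@dataclass
class ConsentGrant:
    purpose: str                          # purpose-bound: one grant per use case
    granted_at: datetime
    revoked_at: Optional[datetime] = None

class ConsentLedger:
    """Illustrative ledger: permissions are contextual, purpose-bound and revocable."""

    def __init__(self) -> None:
        self._grants: Dict[Tuple[str, str], ConsentGrant] = {}

    def grant(self, user_id: str, purpose: str) -> None:
        self._grants[(user_id, purpose)] = ConsentGrant(purpose, datetime.now(timezone.utc))

    def revoke(self, user_id: str, purpose: str) -> None:
        grant = self._grants.get((user_id, purpose))
        if grant:
            grant.revoked_at = datetime.now(timezone.utc)

    def is_permitted(self, user_id: str, purpose: str) -> bool:
        grant = self._grants.get((user_id, purpose))
        return grant is not None and grant.revoked_at is None

ledger = ConsentLedger()
ledger.grant("u1", "personalization")
assert ledger.is_permitted("u1", "personalization")
assert not ledger.is_permitted("u1", "model_fine_tuning")  # no blanket authorization
ledger.revoke("u1", "personalization")
assert not ledger.is_permitted("u1", "personalization")    # revocable in real time
```

Checking permission per use case at the moment of use, rather than once at collection time, is what turns consent into a continuously managed process.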

Identity Protection Is Now Foundational, Not Optional

GenAI can now replicate a person's voice, facial patterns, writing style, gestures and social behaviors with increasing realism. Combined with synthetic data generation, impersonation risks now extend beyond deepfakes to AI‑generated profiles, synthetic customers and manipulated risk signals.

My best advice to mitigate risk here is to work with a martech vendor whose governance framework emphasizes bias detection, human‑in‑the‑loop controls and safeguards against synthetic misuse—all embedded directly within AI development pipelines.

As organizations adopt GenAI at scale, identity protection becomes as essential as encryption once was.

Transparency Has Become a Baseline Expectation

Transparency expectations are not only rising—they are rapidly moving into regulatory mandates. In my organization we treat transparency as a minimum viable requirement for AI deployment, not an advanced feature.

If AI systems shape decisions, you need to understand:

  • How the model was trained
  • What data it used
  • Whether synthetic data was involved
  • How outputs were generated and validated
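The four questions above could be captured as a minimal, machine-readable provenance record published alongside each model, in the spirit of a "model card." The field names and values below are illustrative assumptions:

```python
from dataclasses import dataclass, asdict
from typing import List
import json

@dataclass
class ModelProvenance:
    """Minimal disclosure record answering the four transparency questions."""
    training_procedure: str    # how the model was trained
    data_sources: List[str]    # what data it used
    synthetic_data_used: bool  # whether synthetic data was involved
    output_validation: str     # how outputs are generated and validated

card = ModelProvenance(
    training_procedure="fine-tuned from a licensed base model",
    data_sources=["first-party CRM (opted-in)", "licensed product catalog"],
    synthetic_data_used=True,
    output_validation="human review of sampled outputs before release",
)

# A JSON artifact like this can be published with the model as its disclosure record.
print(json.dumps(asdict(card), indent=2))
```

Even a record this small forces the four questions to be answered before deployment, which is the baseline the regulation is moving toward.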

Opaque AI systems are no longer acceptable, either commercially or from a regulatory standpoint.

Regulation Is Shifting From Data Storage to Rights‑Based Governance

New privacy regulations focus less on data storage and more on opt-out rights, training-data disclosure and limits on harmful synthetic outputs. Organizations must prepare for this rights-based regulation by embedding compliance checks and governance maturity assessments directly into workflows rather than treating them as post‑hoc review tasks.
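Embedded in a workflow, such checks might look like a pre-deployment gate that blocks release until each rights-based requirement is satisfied. The check names below are illustrative, not drawn from any specific regulation:

```python
from typing import Dict, List

def rights_based_compliance_gate(deployment: Dict) -> List[str]:
    """Sketch of a pre-deployment gate; returns the names of failed checks.

    Check names are hypothetical examples of rights-based requirements,
    not a real compliance standard.
    """
    checks = {
        "opt_out_mechanism": deployment.get("supports_opt_out", False),
        "training_data_disclosed": deployment.get("training_data_disclosed", False),
        "synthetic_output_limits": deployment.get("synthetic_output_policy") is not None,
    }
    return [name for name, ok in checks.items() if not ok]  # any failure blocks release

release = {"supports_opt_out": True, "training_data_disclosed": False}
failures = rights_based_compliance_gate(release)
assert failures == ["training_data_disclosed", "synthetic_output_limits"]
```

Because the gate runs on every release rather than in a quarterly audit, compliance becomes part of the workflow instead of a post-hoc review.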

Trust is becoming quantifiable, and governance maturity is becoming a competitive differentiator.

GenAI has accelerated expectations faster than organizations are modernizing their privacy and governance foundations. Data privacy, consent, identity protection and transparency are no longer discrete categories—they are intertwined pillars of trustworthy AI.


Organizations that treat governance as a strategic investment, rather than a regulatory burden, see measurable returns. According to the IDC research, those that prioritize trustworthy AI through governance, explainability and ethical safeguards deliver better customer experience outcomes and are 60% more likely to report double or greater ROI on their AI projects.


About the Author
Jonathan Moran

Jonathan Moran, Head of MarTech Solutions Marketing, covers global product marketing activities at SAS, with a focus on customer experience and marketing technologies. Prior to SAS, Jon gained over 20 years of marketing and analytics industry experience at both Earnix and the Teradata Corporation in pre-sales, consulting and marketing roles.

Main image: Михаил Решетников | Adobe Stock