The Gist:
The CMA is launching a review of AI. The UK regulator has announced a large-scale investigation into generative AI.
Regulations are taking shape. Governments and businesses are working to put guardrails in place before AI development outpaces oversight.
AI investments continue to focus on customer experience. With consumer privacy at risk, ensuring customer safety is quickly becoming a top priority for many organizations.
The Competition and Markets Authority (CMA), which monitors competition in the UK, will be conducting a large-scale review of artificial intelligence (AI). This review aims to understand how the implementation of AI can be supported in line with key ethical principles: safety, security and robustness; transparency and explainability; accountability and governance; fairness; and contestability and redress.
This review may later lead to changes in how businesses and customer experience and marketing professionals, at least in the UK, handle consumer data. And North American businesses aren't off the hook, either (more on that later). The CMA's review aims to expose gaps in copyright, security and privacy protections, and to establish guiding principles that safeguard consumers as artificial intelligence innovation and usage continue to grow.
"AI has burst into the public consciousness over the past few months but has been on our radar for some time," Sarah Cardell, chief executive of the CMA, said in a statement. "It’s a technology developing at speed and has the potential to transform the way businesses compete as well as drive substantial economic growth. It’s crucial that the potential benefits of this transformative technology are readily accessible to UK businesses and consumers while people remain protected from issues like false or misleading information. Our goal is to help this new, rapidly scaling technology develop in ways that ensure open, competitive markets and effective consumer protection."
Watching AI from the White House
The investigation comes as the UK's American counterparts also explore regulation and the responsible development and use of AI. AI, in particular generative AI, has exploded into the spotlight in the workplace and consumer world since OpenAI debuted the AI chatbot ChatGPT in November 2022. The Biden administration plans to discuss risks and promote responsible AI with the CEOs of Anthropic, Google, Microsoft and OpenAI in a meeting this month.
According to The Washington Post, last Thursday Biden announced an investment in “trustworthy” AI. He also noted that many major tech companies have agreed to a public assessment of their own artificial intelligence systems to further protect their users.
With the rise of software like ChatGPT, the Biden administration wants to ensure consumer safety across the board. A $140 million grant will be used to expand AI research institutes in the coming months, which is especially pertinent as cybersecurity lawsuits continue to rise across the globe.
Related Article: FTC Issues Stern Guidance to Marketers on AI Messaging
FTC Cracks Down on AI, Too
Other American authorities are keeping a close eye on generative AI usage. This week, the FTC issued its latest warning to marketers and customer experience professionals over deceptive use of generative AI in marketing and customer service scenarios. That comes after a Washington, DC policy group urged the FTC earlier this year to investigate OpenAI and halt development of GPT-4, the model that supports ChatGPT.
This is developing in tandem with recent customer experience AI investments. According to the CMSWire State of Digital Customer Experience Report, in 2022 (before ChatGPT) a quarter of respondents had no AI applications in their CX toolset — but now, customer experience and retention are becoming the top priority for AI, according to a Gartner study.
CMA: AI Foundation Models Under the Microscope
As for the UK's CMA investigation, the organization wants to support open, competitive markets, and its review "seeks to understand how foundation models are developing and produce an assessment of the conditions and principles that will best guide the development of foundation models and their use in the future."
This initial review will examine how the competitive markets for foundation models and their use could evolve; explore what opportunities and risks these scenarios could bring for competition and consumer protection; and produce guiding principles to support competition and protect consumers as AI foundation models develop.