OpenAI is having too much fun with generative artificial intelligence, apparently.
First, a group of tech and business leaders on Tuesday, March 28, called for a pause on AI development more powerful than OpenAI's GPT-4 model.
And today, something directly targeted at OpenAI itself: The Center for AI and Digital Policy (CAIDP), a nonprofit research organization in Washington, DC, filed a complaint with the Federal Trade Commission (FTC) charging that OpenAI's recently launched GPT-4 product violates federal consumer protection law. The complaint calls OpenAI's GPT-4 model biased, deceptive and a risk to privacy and public safety.
And the CAIDP wants the FTC to investigate OpenAI and shut down GPT-4 product development.
That's just this week. What's next?
Was it too good to be true? Was the revolutionary, fastest-growing-ever chatbot doomed since it debuted Nov. 30 and upended the creative psyche of marketers and customer experience professionals yearning for better and more efficient content, campaigns and customer data management?
Not So Fast: Nothing's Stopping OpenAI and GPT-4
For starters, though, here are the facts. Nothing is stopping OpenAI — or any other AI development — for now. The news on the Future of Life Institute letter calling for a halt to giant AI experiments is just that: a letter, no matter which cool tech people and business bigwigs have signed it.
And today's news from the CAIDP is just a request of the FTC, albeit a provocative one. It has asked the FTC to open an investigation and then to suspend the further deployment of GPT commercial products until the company complies with FTC guidance for AI products.
“The FTC has a clear responsibility to investigate and prohibit unfair and deceptive trade practices. We believe that the FTC should look closely at OpenAI and GPT-4,” Marc Rotenberg, president and general counsel of the CAIDP, said in a press release. “We are specifically asking the FTC to determine whether the company has complied with the guidance the federal agency has issued.”
Make no mistake about it: OpenAI has poked a lot of bears here. Government and private bears.
Earlier this month, the U.S. Copyright Office launched a new initiative that examines content generated by AI in "direct response to the recent striking advances in generative AI technologies and their rapidly growing use by individuals and businesses." No mention of OpenAI, but we all know.
In late February, the FTC sent marketers and advertisers a clear message, in the form of some stern guidance, about how they describe artificial intelligence in their products. And this month, the FTC put out statements on chatbots, deep fakes and voice clones.
The message? All eyes are on AI, particularly OpenAI, the company that won Microsoft's backing. The government is certainly taking notice; in fact, lawmakers have even introduced legislation written by ChatGPT.
CAIDP: OpenAI Is Not Transparent, Fair or Empirically Sound
The CAIDP is not impressed, though. It essentially says OpenAI needs to better comply with FTC guidance on business practices. The FTC has said that AI companies should “ensure that data and models are empirically sound," CAIDP officials noted.
What specifically has OpenAI done wrong with its release of GPT-4? The FTC says the use of AI should be "transparent, explainable, fair, and empirically sound while fostering accountability. ... OpenAI's product GPT-4 satisfies none of these requirements," CAIDP officials wrote in their FTC complaint.
The nonprofit wants the FTC to:
- Open an investigation into OpenAI
- Enjoin further releases of GPT-4
- Ensure the establishment of necessary guardrails to protect consumers, businesses and the commercial marketplace
“We are at a critical moment in the evolution of AI products," Merve Hickok, chair and research director of CAIDP, said in a statement. "We recognize the opportunities and we support research. But without the necessary safeguards established to limit bias and deception, there is a serious risk to businesses, consumers, and public safety. The FTC is uniquely positioned to address this challenge.”
The CAIDP is also making its mission known overseas.
Ursula Pachl, CAIDP board member and deputy director general of the European Consumer Organization (BEUC), said, “CAIDP has raised critical issues about the impact of GPT-4 on consumer protection, data protection and privacy, and public safety. This complaint should serve as a wake-up call in the EU. The EU’s proposed AI Act is currently under discussion but it will only fully apply in 3 to 4 years. We call on EU authorities to launch an investigation now into the risks of ChatGPT and similar chatbots for European consumers.”
OpenAI: 'Careful Iteration' Is Paramount for AI Development
OpenAI hasn't responded yet to the CAIDP's complaint to the FTC. In a Feb. 24 post before its deployment of GPT-4, OpenAI CEO and co-founder Sam Altman, discussing artificial general intelligence (AGI), said a gradual transition is the best way to bring AGI into existence.
"A gradual transition gives people, policymakers, and institutions time to understand what’s happening, personally experience the benefits and downsides of these systems, adapt our economy, and to put regulation in place," Altman wrote. "It also allows for society and AI to co-evolve, and for people collectively to figure out what they want while the stakes are relatively low. We currently believe the best way to successfully navigate AI deployment challenges is with a tight feedback loop of rapid learning and careful iteration."
things we need for a good AGI future:
1) the technical ability to align a superintelligence
2) sufficient coordination among most of the leading AGI efforts
3) an effective global regulatory framework including democratic governance
— Sam Altman (@sama) March 30, 2023
More usage of AI in the world will lead to good, he added, and "democratized access will also lead to more and better research, decentralized power, more benefits, and a broader set of people contributing new ideas. ... As our systems get closer to AGI, we are becoming increasingly cautious with the creation and deployment of our models. Our decisions will require much more caution than society usually applies to new technologies, and more caution than many users would like."
The balance between the upsides and downsides of deployments — empowering malicious actors, creating social and economic disruptions, and accelerating an unsafe race — could shift, according to Altman, "in which case we would significantly change our plans around continuous deployment."