The Gist
- AI: Don't do it. Tech and business leaders call for a six-month pause on training AI systems more powerful than GPT-4.
- We're 'out of control.' The letter expresses concerns about an "out-of-control race" to develop AI systems that cannot be understood, predicted or controlled.
- Better safety protocols. Signees suggest focusing on AI safety and design protocols, improving AI systems and enhancing AI governance during this pause.
You know that whole artificial intelligence innovation thing? Stop it. Now. Or the government should stop you.
At least that's what a group of technology and business leaders — including Elon Musk, Steve Wozniak and technologists from Meta, Google and Microsoft — say in a jointly signed letter hosted by the Future of Life Institute and made public this week. Specifically, these leaders "call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."
OpenAI, the creator of ChatGPT and GPT-4, probably doesn't mind this directive. After all, AI innovators outside of OpenAI are scrambling to catch up to ChatGPT right now.
Why the call for this "pause"? AI labs are "locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control," according to the letter. As of the publication of this article early afternoon ET Wednesday, March 29, there are 1,124 names listed as signees of "Pause Giant AI Experiments: An Open Letter." (ChatGPT says there are 1,125, but, alas, we digress).
"This pause should be public and verifiable, and include all key actors," the co-signees go on to say. "If such a pause cannot be enacted quickly, governments should step in and institute a moratorium."
Who Signed This 'Pause Giant AI Experiments' Letter?
Among the more than 1,100 signees (so far):
- Yoshua Bengio, founder and scientific director at Mila, Turing Award winner and professor at University of Montreal
- Stuart Russell, Berkeley professor of Computer Science, director of the Center for Intelligent Systems and co-author of the standard textbook “Artificial Intelligence: A Modern Approach”
- Yuval Noah Harari, author and professor, Hebrew University of Jerusalem
- Andrew Yang, co-chair of the Forward Party, 2020 US presidential candidate, NYT bestselling author and Presidential Ambassador of Global Entrepreneurship
- Connor Leahy, CEO, Conjecture
- Jaan Tallinn, co-founder of Skype, Centre for the Study of Existential Risk, Future of Life Institute
- Evan Sharp, co-founder, Pinterest
- Chris Larsen, co-founder, Ripple
- Emad Mostaque, CEO, Stability AI
- Maxim Khesin, Meta, machine learning engineer
- Noam Shazeer, founder and CEO of Character.ai, major contributor to Google’s LaMDA
- Andrew Brassington, Microsoft, senior software engineer
"Contemporary AI systems are now becoming human-competitive at general tasks," according to the letter, "and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs."
AI Leaders Are Failing at Responsible Development
Wait, so with all the lightning-speed advancements in AI over the last few months — with OpenAI's ChatGPT chatbot leading the way — we need to put the brakes on AI innovation? How did this happen? Should marketers and customer experience professionals take the advice of the leaders in this joint letter? Or is it more targeted at "giant AI experiments," whatever those are? What qualifies as giant? What's more powerful than GPT-4, and who decides that?
Of course, there have been calls for responsible AI practices that balance the needs of customer experience, marketing efficiency and creativity with morality, ethics and accuracy. Ethical AI is a term much bandied about, and we're still trying to figure it out, even as our artificial intelligence friends veer off the beaten path once in a while.
AI developers aren't doing enough in this arena, though, according to this week's jointly signed letter. Citing the Asilomar AI Principles, the letter says, "Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources."
They continue by adding, "Unfortunately, this level of planning and management is not happening."
Critical AI development decisions should not be "delegated to unelected tech leaders." Rather, the co-signees say, "powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects."
The letter goes on to quote OpenAI's recent statement regarding artificial general intelligence: "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models."
"We agree," the co-signees say. "That point is now."
Better AI Development Means Shared Protocols, Governance, Policymaking
So what do these co-signees want in a perfect world of AI development?
- Share AI safety and design protocols. Develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. "This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities," they say.
- Make AI smarter and better. Focus AI research and development on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy and loyal. (And NOT like Tay).
- Beef up AI governance. Work with policymakers to accelerate development of robust AI governance systems, including:
- New and capable regulatory authorities dedicated to AI
- Oversight and tracking of highly capable AI systems and large pools of computational capability
- Provenance and watermarking systems to help distinguish real from synthetic and to track model leaks
- Robust auditing and certification ecosystem
- Liability for AI-caused harm
- Robust public funding for technical AI safety research
- Well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.
"Humanity can enjoy a flourishing future with AI," the letter concludes. "Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall."