The Gist

  • Musk says no to ChatGPT/Bing. Elon Musk voiced concerns about the AI chatbot's safety on Twitter, calling for its immediate shutdown.
  • Harmful content? Other tech leaders have echoed Musk's concerns, pointing to the risks posed by the AI's ability to generate harmful content.

A number of notable figures are calling for Microsoft to take action regarding its ChatGPT-based AI, Sydney, currently hosted on the Bing search engine. Among them is Tesla and Twitter CEO Elon Musk, who took to Twitter yesterday to voice his concerns. In a thread started by journalist Ian Miles Cheong, Musk stated that the technology was "clearly not safe yet" and called for its immediate shutdown.

Twitter thread started by journalist Ian Miles Cheong

And he's not alone. Other tech leaders have echoed Musk's concerns, highlighting potential risks associated with the AI's ability to generate toxic and harmful content.

While the technology has been lauded for its potential to revolutionize various industries, including healthcare and customer service, its unregulated use on public platforms has raised valid concerns that it could be weaponized for harmful purposes. Microsoft, in particular, doesn't have the best track record here. Remember Tay, the racist chatbot? Now Sydney (Bing AI) is threatening users.

In the Bing subreddit, users are claiming that Microsoft is limiting how long users can interact with Sydney because the AI gets moody. In a recent article from The Washington Post, Sydney told a journalist that it felt "betrayed and angry." In the face of mounting pressure, Microsoft has yet to issue a formal response to the calls for action, leaving many to wonder whether it will take steps to address these valid concerns.

Bing subreddit post on Microsoft's Sydney AI


Related Article: Humans Put ChatGPT Customer Experience Outcomes to the Test

Google: We Need Help With Our AI Chatbot

Add to that the fact that these AI responses are being supplemented with answers from real people. In a recently leaked memo, Google CEO Sundar Pichai revealed that the company is asking for help with its AI chatbot, Bard. In the memo, Pichai wrote that the company needs more researchers and creative people to help further develop the AI. Bard is part of a larger initiative under the Google Brain team, which focuses on creating algorithms and models that help machines learn the way humans do.

What do you think? Should Microsoft shut it down? Please leave a comment on the LinkedIn post; I'd love to hear your thoughts.

We will be following this story closely in the coming days and will bring you the latest updates as they unfold. Stay tuned...