The Gist

  • A stern warning from AI 'godfather.' Geoffrey Hinton says AI is a threat to our species. 
  • Meta's message to ChatGPT imposters: "I'm sorry, I can't let you do that, Dave."
  • Is AI psychic? Maybe not, but researchers say it can read your mind.

Widely known as the "godfather of AI," Geoffrey Hinton recently quit his post at Google so he could be free to talk about what he views as the significant risks of AI.

A computer scientist and pioneering figure in artificial intelligence, Hinton is a recipient of the ACM A.M. Turing Award (often called the "Nobel Prize" of computing). In 2012, along with two of his graduate students at the University of Toronto, he developed technology that has become the intellectual foundation for modern AI systems.

But now he says his regrets and concerns over artificial intelligence have inspired him to speak out. He did not join the more than 1,000 tech leaders who signed an open letter calling for a six-month pause on new AI development because he "did not want to publicly criticize Google or other companies until he had quit his job." Now that he has, he's speaking up, warning that if we don't take the necessary precautions, we could end up with machines that are more intelligent than humans and could pose an existential threat to our species.

In an interview, Hinton told the New York Times, “The idea that this stuff could actually get smarter than people — a few people believed that ... But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

(Check out his video later in the article.)

In other AI news...

Meta Warns of ChatGPT Imposters

Meta, formerly known as Facebook, released its Q1 security report and offered an update on its work to enhance security measures across its platforms and prevent the spread of misinformation, including covert influence operations, cyber espionage and malware campaigns. According to Chief Information Security Officer Guy Rosen, the company took action against nine “adversarial networks around the world for engaging in covert influence operations and cyber espionage” including approximately “10 malware families posing as ChatGPT and similar tools to compromise accounts across the internet.”

According to Rosen, malware operators, similar to spammers, are highly aware of current trends and tend to exploit popular topics and hot-button issues to attract people's attention, adding that “The latest wave of malware campaigns have taken notice of generative AI technology that’s captured people’s imagination and excitement.”

Related Article: ChatGPT Adds Data Management; Google, Meta CEOs Tout AI Progress

In Other Meta News: Meta Unveils Latest in AI

On May 18, Meta's Engineering and Infrastructure teams will host AI Infra @Scale, a one-day virtual event, where speakers will unveil the latest AI infrastructure investments and innovations powering Meta's products and services. Attendees will learn how Meta is building and scaling technologies for the next generation of AI infrastructure. The event will include sessions on the future of AI infrastructure, the challenges that await, and what’s next in generative data design and generative AI-assisted code authoring.

Researchers Create AI Tech that Can Read People’s Minds

According to a new study, researchers have created an AI transformer, similar to ChatGPT, that can convert a person's thoughts into text. This AI transformer works as a language decoder by reconstructing continuous language from semantic representations recorded through fMRI (functional magnetic resonance imaging).

The language decoder can generate word sequences from novel brain recordings that can recover the meaning of perceived speech, imagined speech and even silent videos, demonstrating that a single decoder can be used for a range of tasks.

While the research has implications for a wide range of applications, it has also raised concerns about privacy and the potential for misuse. The researchers have emphasized that their work is still in its early stages and that significant technical and ethical challenges must be addressed before the technology can be put to use.

Related Article: Salesforce Integrates Einstein With Flow, Musk Wants TruthGPT, More AI News

White House to Host Meeting on AI Safeguards

On May 10, representatives from Alphabet, Microsoft, and other major tech companies are set to attend a White House meeting to discuss AI safeguards and address how AI can be used ethically, with the goal of creating a set of guidelines and principles for the industry to follow.

Bloomberg reported the gathering will include officials from both the White House and the National Science Foundation and will feature discussions on topics such as AI's potential for bias and discrimination, as well as the possibility of accidental or malicious misuse. Attendees will also explore ways to ensure transparency in AI systems and promote collaboration between government agencies and private industry.

AI Video of the Week: Could AI 'Kill' And 'Manipulate' Humans?

Geoffrey Hinton, the "godfather" of AI, explains his darkest AI fears:

AI Tweet of the Week: Now Hiring AI

In a post to the company’s blog, Drew Houston, CEO at Dropbox, announced his company’s decision to reduce its global workforce by about 16%, affecting approximately 500 employees — but it’s the reason why that really caught everyone’s attention.

According to Houston, "The AI era of computing has finally arrived," adding that "AI will give us new superpowers and completely transform knowledge work" — a statement that leads many to believe the layoffs stem from an expectation that AI could take over much of the work previously done by human employees.

Have a tip to share with our editorial team? Drop us a line: