The Gist
- Inability to grasp nonverbal world. AI chatbots are impressive from a product standpoint, but their inability to grasp the nonverbal world limits their ability to understand, predict and plan.
- Text-only training. Chatbots are currently trained only on text, which represents only a tiny portion of human knowledge, hindering their potential for human-level intelligence.
AI chatbots are getting so good that people are starting to see them as human. Several users have recently called the bots their best friends, others have professed their love for them, and a Google engineer even helped one hire a lawyer. From a product standpoint, these bots are extraordinary. But from a research perspective, the people dreaming of AI reaching human-level intelligence are due for a reality check.
Limitation of Chatbots: Failing to Grasp the Nonverbal World
Chatbots today are trained only on text, a debilitating limitation. Ingesting mountains of the written word can produce jaw-dropping results, like rewriting Eminem in Shakespearean style, but it leaves the bots unable to perceive the nonverbal world. Much of human intelligence is never written down. We pick up our intuitive understanding of physics, craft and emotion by living, not by reading. And without written material on these topics to train on, AI comes up short.
“The understanding these current systems have of the underlying reality that language expresses is extremely shallow,” said Yann LeCun, Meta’s chief AI scientist and a professor of computer science at New York University. “It’s not a particularly big step towards human-level intelligence.”
ChatGPT's Limited Understanding Exposed by Simple Physics Test
Holding up a sheet of paper, LeCun demonstrated ChatGPT’s limited understanding of the world on a recent Big Technology Podcast episode. The bot, he promised, would not know what would happen if he let go of the paper with one hand. Asked exactly that, ChatGPT said the paper would “tilt or rotate in the direction of the hand that is no longer holding it.” Given the answer’s confident presentation, it seemed plausible for a moment. But the bot was dead wrong.
LeCun’s paper moved toward the hand still holding it, something humans know instinctively. ChatGPT, however, got it wrong because people rarely describe the physics of letting go of a sheet of paper in text (well, perhaps until now).
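Readers can rerun a version of the test themselves. The sketch below uses OpenAI's Python client; the model name and prompt wording are our assumptions, not the exact ones LeCun used, and the bot's answer will vary from run to run.

```python
# A minimal sketch of the paper-drop test, assuming the "openai" Python
# package (v1+) is installed and OPENAI_API_KEY is set in the environment.
# "gpt-3.5-turbo" stands in for whichever ChatGPT version LeCun queried.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "I am holding a sheet of paper horizontally, one hand on each side. "
    "What happens to the paper if I let go with one hand?"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)

# Humans know the freed edge drops and the sheet swings toward the hand
# still holding it; in LeCun's demo, ChatGPT predicted the opposite.
print(response.choices[0].message.content)
```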
Text-Based Knowledge Hinders AI's Ability to Understand, Predict
“I can come up with a huge stack of similar situations, each one of them will not have been described in any text,” LeCun said. “So then the question you want to ask is, 'How much of human knowledge is present and described in text?' And my answer to this is a tiny portion. Most of human knowledge is not actually language-related.”
Without an innate understanding of the world, AI can’t predict. And without prediction, it can’t plan. “Prediction is the essence of intelligence,” said LeCun. This explains, at least in part, why self-driving cars are still bumbling through a world they don’t completely understand. And why chatbot intelligence remains limited — if still powerful — despite the anthropomorphizing.