GPT-3 has been dominating recent tech headlines, as well it should. GPT-3 uses deep learning to produce human-like text and represents a massive achievement for OpenAI. Unfortunately, GPT-3's abilities have misled people into concluding that it is a major step on the road to Artificial General Intelligence (AGI). In no uncertain terms, it is not.
AGI is the hypothetical concept of computers having the capacity to understand and learn the same intellectual tasks that humans can. Trained with 175 billion parameters, GPT-3 parses its input looking for similarities with previously learned information and creates relevant output. In a widely described demonstration, GPT-3 generated an entire article given a few sentences to get it started.
Saying that GPT-3 is not AGI in no way diminishes the accomplishment of GPT-3 or reduces its usefulness or applicability. But the bottom line: GPT-3, while impressive, still lacks the capabilities that AGI demands.
The Road to Artificial General Intelligence Isn't Through GPT-3
MIT's Lex Fridman released a video, GPT-3 vs. Human Brain, which provides a case in point. In it, he compares the cost and information content of the GPT-3 system and the human brain. Dr. Fridman’s YouTube channel is widely respected and viewed. While his video is not wrong, it implies that by extending GPT-3 with more powerful hardware, true intelligence might emerge.
True computer intelligence could well be available by the end of the decade at a cost well within the budgets currently being expended on AI, but it will not come about by extending the current GPT-3 model.
If we had a weather simulation program with 175 billion parameters and implied that, with 500 times the budget, it could handle as many parameters as the human brain, it would be absurd to assume it would be thinking. It would still be a weather simulation program. GPT-3 is different in that it works with language, which we think of as being within the specific purview of the human mind. Coverage of GPT-3 tends to downplay the fact that GPT-3 works in many ways unrelated to the way the brain works.
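The "500 times" figure is easy to sanity-check with back-of-envelope arithmetic. This sketch assumes the commonly cited (and rough) estimate of about 100 trillion synapses in the human brain, which is the usual point of comparison for parameter counts; the synapse figure is an assumption, not something stated in this article:

```python
# Back-of-envelope scale comparison between GPT-3's parameter count
# and a rough estimate of human brain synapses. The ~100 trillion
# synapse figure is a commonly cited estimate, not a precise number.
gpt3_params = 175e9        # GPT-3: 175 billion parameters
brain_synapses = 100e12    # rough estimate: ~100 trillion synapses

scale_factor = brain_synapses / gpt3_params
print(f"The brain has roughly {scale_factor:.0f}x more synapses "
      f"than GPT-3 has parameters")
```

Scaling GPT-3 by a factor in the hundreds closes the numerical gap, which is the comparison the video makes; the article's point is that matching the count says nothing about matching the function.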
GPT-3 has been described as the world’s most powerful auto-complete system. It’s so powerful that given just a few words, it can complete a paragraph (or more). Many people can do that too, but they do it from a fundamental level of understanding, whereas GPT-3 works from the likelihood that certain words relate to, and tend to follow, others, building phrases related to the initial input. The technology of GPT-3 can also be harnessed for other human-like abilities. For example, given a description of a simple app, it can generate the code. Asked a question, it can provide a specific link to the answer.
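The "statistical auto-complete" idea can be illustrated with a toy next-word predictor. This is a drastic simplification (GPT-3 is a transformer trained on hundreds of billions of tokens, not a word-pair counter, and the corpus here is invented for illustration), but it captures the core principle of completing text by choosing likely successor words rather than by understanding:

```python
from collections import defaultdict, Counter

# Toy next-word predictor: count which word follows which in a tiny
# "training corpus", then complete a prompt by repeatedly picking the
# most frequent successor. GPT-3 shares only the core idea: predict
# the next token from what came before.
corpus = ("the child stacks the blocks "
          "then the child knocks the blocks down").split()

successors = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    successors[word][nxt] += 1

def complete(word, length=5):
    """Greedily extend a one-word prompt with the likeliest next words."""
    out = [word]
    for _ in range(length):
        counts = successors.get(word)
        if not counts:
            break  # no known successor; stop completing
        word = max(counts, key=counts.get)
        out.append(word)
    return " ".join(out)

print(complete("the"))
```

A model like this produces locally plausible word sequences without any notion of what blocks are or why stacking precedes knocking down, which is exactly the distinction the article draws.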
For all of its capabilities, though, GPT-3 still lacks many of the capabilities of an average three-year-old. It doesn’t understand that all physical objects exist in a three-dimensional space and are affected by actions and elemental physics. Any child playing with blocks understands these concepts. GPT-3, on the other hand, has no concept of causality or the passage of time — any child knows you have to stack the blocks before they can be knocked down. We are also keenly aware that the words which represent a thing are not the thing itself. There’s a level of abstraction beneath the words.
Related Article: AI Transparency and the Emperor's New Clothes
AI Needs More Tactile Experience
So what will it take to get to AGI? How will we give computers an understanding of time and space? We humans are great at merging information from multiple senses. A child will use all its senses to learn about blocks. The child learns about time by experiencing it, by interacting with toys and the world.
In the same way, AGI will need a robotic body to learn similar things, at least at the outset. The computers don’t need to reside within the robot, but can connect remotely because electronic signals are vastly faster than those in our nervous systems. But the robot provides the ability to learn first-hand about stacking blocks, moving objects, performing sequences of actions over time, and learning from the consequences of those actions. With vision, touch, manipulators and more, AGI can learn to understand in ways which are simply impossible for a text-based system. Once AGI has gained this understanding, the robot may no longer be necessary.
Conquering fundamental understanding, then, is the key to AGI. Adding GPT-3 on top of true understanding would yield a truly awesome result.
Related Article: Is There a Clear Path to General AI?
How Do We Reach 'Understanding'?
Unlike the parameter counts in Dr. Fridman’s calculation, we can’t quantify the amount of data it might take to represent understanding. We can only consider the human brain and speculate that some reasonable percentage of it must pertain to understanding. We humans interpret everything in the context of everything we have already learned. That means that as adults, we interpret everything within the context of the true understanding we acquired in the first few years of life.
GPT-3’s monumental abilities don’t include the abilities common to any three-year-old, and this is perfectly understandable because the abilities of the three-year-old are not particularly useful or marketable. It will only be when the AI community takes the unprofitable steps to conquer the fundamental basis for intelligence that AGI will be able to emerge. Once it does, it will profoundly impact most facets of AI.