Andrew Moore, vice president at Google Cloud, stirred up a hornets’ nest when he declared at a Google event that artificial intelligence (AI) is stupid. “It is really good at doing certain things which our brains can’t handle, but it’s not something we could press to do general-purpose reasoning involving things like analogies or creative thinking or jumping outside the box,” he said. The statement was widely tweeted.


Is AI Stupid?

What followed was a virtual gasp of astonishment, with many tech commentators weighing in to clarify and explain exactly what he meant. In retrospect, four months later, what he said is not such a big deal. It does, however, raise the question of what AI is good for and how far it has come. So is AI stupid?

“It’s a fascinating question, but Moore is actually being relatively accurate when he says that AI is stupid,” said Jeremy Goldman, founder of the Firebrand Group, which he later sold. Humans, he explained, are entirely behind AI code, and as such we are only beginning to program AI to:

  1. Solve our current business pain points, which often involve repetitive tasks at high volumes.
  2. Do things that are already relatively easy to do in human terms.

Other processes like “creative thinking” or “outside the box thinking” are still impossible to program because they are somewhat subjective and hard to explain. When we can’t explain something clearly, he said, we can’t code AI properly to present only “good” outside-the-box ideas as opposed to “bad” ones. “No one wants to go through an AI program that develops 100 synopses for potential new TV shows, only to find 93 of them are ‘false positives’ that are essentially useless. As a result, using AI for certain tasks at the moment actually slows us down rather than speeding us up. That’s something that would certainly qualify AI, in many eyes, as stupid,” he said.


Overreaching Expectations

Michael Berthold, founder and CEO of KNIME, an open source data analytics company, pointed out that the pendulum on what AI can do is currently swinging back, just as it did 20 years ago, when neural networks couldn't fulfill the expectations people had then either. The problem now, even more so than in the '90s, is the terminology itself and the expectations it raises in people's minds.

In contrast to those expectations, systems built using methods from AI research are extremely good at performing specific tasks. In the end, disappointing as it may seem, this is all still just machine learning.

“With more compute power and more data, we can simply build better and faster systems, but fundamentally, they still do the same thing: They find and learn patterns, but they don't end up being new things that think and ponder like a human being. The problem is that as soon as those tasks look human — recognizing faces or spoken language — they are quickly labeled as being intelligent,” he said.
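Berthold's point that these systems only find and fit patterns can be made concrete with a minimal sketch (an illustrative toy, not any specific product's code): a single perceptron that learns the logical AND pattern from four labeled examples. It finds a separating line in the data and nothing more; no understanding is involved.

```python
# A minimal sketch of what "learning" means here: a perceptron that
# fits a linear pattern (logical AND) from labeled examples.
# It finds a decision boundary; it does not "understand" anything.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred  # classic perceptron update rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]  # the AND pattern to be learned
w, b = train_perceptron(samples, labels)
print([predict(w, b, x1, x2) for x1, x2 in samples])  # [0, 0, 0, 1]
```

Scaled up with vastly more parameters and data, the same basic idea, adjusting weights to fit observed patterns, underlies the deep networks Berthold describes.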

AI, he added, offers methods that enable us to build systems that learn specific human-like tasks from observations. If marketers had been a bit more modest, they would have stuck to the much more descriptive term machine learning. They wouldn't write about deep neural networks that are creative or that dream, but would say what these systems actually do: learn and extract patterns from data.

A lot of work has already gone into getting computers to mathematically model the human reasoning process. This is very hard, however, because the link between mathematics and common sense has not yet been established. AI is as smart as an extremely expensive calculator, but the common sense knowledge we have as human beings, the ability to solve everyday problems such as figuring out why a light won't turn on, is still largely missing from AI because it is very difficult to encode basic human knowledge into a computer.

Pushing AI Understanding

Multiple markets are now starting to develop a better understanding of what AI actually is, and how it can or can’t yet be used in the business world, said Itzik Spitzen, CTO and co-founder of LeasePilot. Instead of just automating a process, companies should be sure that context and nuance are taken into account, which is something AI still hasn't fully mastered.


Some AI implementations are also little more than a stopgap. The information available to organizations today is disorganized, incomplete and ultimately unusable, often because the underlying processes involve non-digitized data. In those instances, the better option is to start fresh by cleaning and organizing the data. That said, this is just another proof point of why operating in a digital and automated world must become the universal standard.

Artificial Intelligence's Limitations

Here we look at three of AI's major limitations.

1. Narrow AI - Briana Brownell, founder and CEO of a technology company that creates and deploys AI co-workers, pointed out that right now most applications of AI are very, very narrow. In the case of image recognition, we usually need a huge number of examples for AI to accurately learn something relatively simple: say, to determine whether a photo is of a cat or a dog. “When we have an AI that excels at this task, transferring it to a very close task, say to determine whether the photo is a jaguar or a wolf, usually doesn't work very well, so you could definitely say that this AI is stupid,” she said.

She said there is currently a lot of interest in methods that allow AI to retain some of the knowledge gained in closely related tasks, and she sees a lot of promise there. Building up this breadth of knowledge is one of the biggest challenges of AI right now.
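Brownell's point about narrowness can be sketched with a toy classifier (the features and numbers below are entirely hypothetical, not her company's system): a nearest-centroid model trained only on "cat" and "dog" examples has a fixed label space, so any new animal is forced into one of the two known labels.

```python
# Toy illustration of "narrow AI": a nearest-centroid classifier trained
# only on made-up cat and dog feature vectors. Its label space is fixed,
# so a jaguar can only ever be called "cat" or "dog" -- the knowledge
# does not transfer to the new task.

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest_label(centroids, x):
    # squared Euclidean distance to each class centroid
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, x))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Hypothetical 2-D features, e.g. (ear roundness, snout length)
training = {
    "cat": [(0.9, 0.2), (0.8, 0.3), (0.85, 0.25)],
    "dog": [(0.3, 0.8), (0.2, 0.9), (0.25, 0.85)],
}
centroids = {label: centroid(vs) for label, vs in training.items()}

jaguar = (0.7, 0.4)  # a new animal the model never saw in training
print(nearest_label(centroids, jaguar))  # forced to answer "cat" or "dog"
```

Retraining on jaguars and wolves would mean starting over with new labeled examples, which is exactly the transfer problem Brownell describes.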

2. Data Hungry AI - AI is inherently data hungry and can be considered dumb if it is fed bad “food,” meaning bad data, said Bruce Orcutt, SVP of product marketing at Milpitas, Calif.-based ABBYY. To enable AI, software robots need a set of consumable cognitive skills to become smarter. Robots with advanced cognitive skills use technologies like optical character recognition (OCR), machine learning and natural language processing (NLP) in combination with robotic process automation (RPA) to understand and liberate meaning from data trapped in documents, and to automate tasks involving decisions, judgment or problem solving, essentially mimicking human intelligence.

3. Emotional Intelligence - Tibor Vass, global director of solution strategy and business automation at Genesys, said that while AI isn’t perfect, it’s getting smarter every day. While we have reached a point where computing capacity and speed are no longer technical limitations, we are still working to make AI's communication more human-like.

Affective computing is part of a broader shift in NLP to develop AI that can understand what humans mean to say and the emotion behind their words. Potential benefits in the contact center include applications that detect the customer’s intent in real time, recommend next best actions or offers, and gauge the actual emotional and contextual state of the conversational partner.

“The problem is that we have yet to see AI be able to effectively quantify human feelings and moods into unique data points or profiles. This lack of emotional intelligence is why AI-powered solutions can prove frustrating at times, but that will begin to change within the next few years,” Vass said.