A recent report from Blue Bell, Pa.-based information management specialist Unisys revealed some interesting insights into how workers view and understand artificial intelligence (AI). When asked what emerging technology had the highest potential to transform their workplace environment in the next five years, 31 percent identified the Internet of Things (IoT), while 27 percent cited AI. However, only 20 percent said they really understood AI, despite its current buzzword status. 

Given the importance of AI, we dug a little deeper to identify the issue. Unsurprisingly, we found that for many the problem was not the technology itself (although there are problems there too) but a lack of understanding of what AI is, what it can do and what its limitations are.

Related Article: Understanding the Role of Artificial Intelligence in the Digital Workplace

Understanding What AI Is

“The definition of AI is vague. There is no standard or technology that defines AI. For example, some call simple rule-based (pre-programmed) chatbots AI. But a general conclusion defines today's AI as any machine learning-based technology, meaning a process that is capable of learning from data without humans manually programming the algorithm,” said Kirill Rebrov, CEO and CTO of Erie, Pa.-headquartered Demografy, provider of an AI-based customer demographics predictor platform.

The purposes these processes serve take many forms: chatbots, voice assistants, stock market prediction, data analytics, self-driving cars, clinical decision support systems and more. What they all have in common is that the AI they are based on doesn't meet the criteria for Artificial General Intelligence (AGI), which is universal and can perform any intellectual task that a human can. Today's AI is specialized, trained to perform only specific intellectual tasks.

This puts an extreme emphasis on the data used to train AI. Two things characterize that training data:

  1. In most cases it is prepared manually or semi-manually through so-called feature engineering, the process of selecting and extracting the data points relevant for learning from the raw data (a minimal sketch follows this list).
  2. It is domain-specific. For stock market prediction, that means stock data; for clinical decision support systems, medical records.
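
To make feature engineering concrete, here is a minimal sketch in Python. The ticker prices, column names and feature choices are all hypothetical illustrations, not a prescription:

```python
# A minimal, hypothetical feature-engineering sketch: deriving
# learning-ready features from raw daily stock prices.
import pandas as pd

# Raw, domain-specific data: daily closing prices for one ticker.
raw = pd.DataFrame({
    "date": pd.date_range("2024-01-01", periods=10, freq="D"),
    "close": [100.0, 101.5, 99.8, 102.2, 103.0,
              102.5, 104.1, 103.7, 105.0, 106.2],
})

# Select and extract the data points relevant for learning:
# daily return, a short moving average and rolling volatility.
features = pd.DataFrame({"date": raw["date"]})
features["daily_return"] = raw["close"].pct_change()
features["ma_3"] = raw["close"].rolling(window=3).mean()
features["volatility_3"] = features["daily_return"].rolling(window=3).std()

# Drop rows where the rolling windows are not yet filled.
features = features.dropna().reset_index(drop=True)
print(features)
```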

Related Article: The Challenges Facing Today's Artificial Intelligence Strategies

Why Data Is Limiting

This critical reliance on data creates another challenge: any bias in the data can lead to unpredictable and poor performance. So the data should be well balanced and representative of what the AI is going to predict. Rebrov cited the example of Microsoft, which introduced the chatbot Tay in 2016. Tay learned by chatting with people on Twitter, and the project became notorious because the chatbot quickly learned to make racist and sexist insults. Another example is Norman, an AI developed at MIT. In this experiment, the AI was purposely trained on images of violence and death, earning the questionable honor of being called "the world's first psychopathic AI."
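
One basic safeguard against this kind of skew is to audit how balanced the training data is before a model ever sees it. The sketch below is a hypothetical illustration; the labels and the 10 percent threshold are assumptions, not a standard:

```python
# A minimal sketch of a training-data balance check.
from collections import Counter

def flag_underrepresented(labels, min_share=0.10):
    """Return the share of each class that falls below min_share."""
    counts = Counter(labels)
    total = len(labels)
    return {cls: count / total for cls, count in counts.items()
            if count / total < min_share}

# Hypothetical labels for a content-moderation training set.
training_labels = ["benign"] * 940 + ["abusive"] * 60
print(flag_underrepresented(training_labels))
# {'abusive': 0.06} -- a signal to rebalance before training.
```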

“So, the key problem is data. Not only the quality, but also the origin of the data and the bias in it, as well as the narrowness of the collected domain-specific data and the features used for learning,” said Rebrov. This leads to two key limitations:

  1. Unpredictable results if the data is biased. This is a serious problem because data preparation still has a human in the loop, and human bias can find its way into the data.
  2. Narrow specialization of AI solutions based on mostly manually prepared training data for specific problem domains.

Michael Johnston is director of research and innovation at Interactions, which makes AI-powered virtual assistants for customer service. He said that despite how far the artificial intelligence industry has come, AI remains in its early stages, still unable to match the creativity, variability and scope of natural human-to-human conversation. “Most developers of conversational systems attempt to address this by severely limiting the tasks that the AI can handle, but this often results in an underwhelming and frustrating solution,” he said.

Related Article: We Won't Feel AI's Full Impact on the Workplace for Years to Come

AI and the Human Factor

To overcome these limitations, AI requires human input in order to learn and grow. That's why the most effective AI technologies seamlessly blend human and artificial intelligence. Human analysts assist the AI when it is uncertain and, in doing so, constantly improve the performance of machine learning models and make the technology more effective overall. “In turn, the AI becomes both smarter and broader in application — meaning it not only gets better at the tasks it is handling now but is also capable of taking on more over time,” Johnston added.
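
In practice, this blend is often implemented as confidence-based routing: the model answers when it is confident and defers to a human analyst otherwise. The sketch below is a hypothetical illustration; the stand-in classifier, the intents and the 0.85 threshold are all assumptions:

```python
# A minimal human-in-the-loop sketch: answer when confident,
# escalate to a human analyst (and log the example) otherwise.
def classify_intent(utterance):
    """Stand-in for a real model; returns (label, confidence)."""
    if "refund" in utterance.lower():
        return ("refund_request", 0.95)
    return ("unknown", 0.40)

review_queue = []  # human-corrected examples, later used for retraining

def handle(utterance, threshold=0.85):
    label, confidence = classify_intent(utterance)
    if confidence >= threshold:
        return label
    # Uncertain: hand off to a human and keep the example so the
    # next training run can learn from the correction.
    review_queue.append(utterance)
    return "escalated_to_human"

print(handle("I want a refund for my order"))      # refund_request
print(handle("My gizmo is doing the weird thing")) # escalated_to_human
print(review_queue)
```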

Take medicine as an example of an industry looking to AI to carry out many repetitive tasks. Austin Jones is founder and CEO of Pensacola, Fla.-based Unity Health Score, provider of a platform that uses incentives to encourage better interactions between patients, doctors, research and insurance. He said AI is limited in perspective. “I'll compare it to government. AI, just like government, works on paper: you can write any political ideology and on paper it works. Its application is the defining factor. Now, instead of economics, sociology and culture impacting AI, as they do government, data impacts AI,” he said.

Essentially, AI is limited by the quality and structure of the data it operates on. In principle its possibilities are limitless, but those possibilities are defined by the data. “We could see a world where AI replaced doctors, but I don't see that coming anytime soon, at least not in our lifetime. Although we will see doctors begin to use AI as a diagnostic. All in all, AI is only as limited as its data,” he added.

The problem is that AI cannot do what humans can do, even if it can be trained to perform a wide range of repetitive tasks. Companies should use AI for highly repetitive tasks that can be scaled, and for problems whose solutions can lead to significant process improvements. AI should also solve problems that have real-world impact, said Aman Naimat, SVP of Technology and Engineering at San Francisco-headquartered account-based marketing company Demandbase. What it shouldn't do is tackle problems that are outside what humans are capable of, problems that have never been solved, or problems that humans are already really good at solving. Areas with high potential for AI include monitoring illnesses, drug discovery, adaptive road traffic control, autonomous vehicles, real-time personalized promotions, inventory optimization in manufacturing and more.

“We find that trust is one of the biggest hindrances to adoption of AI among businesses. Humans will only take action if they trust the AI, and they will only trust the AI if the AI is transparent,” Naimat said.

Maps are a good example of this. A map product like Garmin that shows you only the “A” route, the best route, delivers the result without revealing how it came up with the recommendation. A product like Google Maps, by contrast, might show you all the routes, the “A” route plus two alternatives, so you can choose, and it might also tell you why it ranked the best route above the others. It's easier for people to trust a product that not only gives us options but also offers some insight into why those options are ranked the way they are.
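
In code, the difference between the two approaches comes down to returning a single answer versus a ranked list with a reason attached to each option. The routes, times and reasons below are hypothetical:

```python
# A minimal sketch contrasting an opaque recommendation with a
# transparent, explained ranking.
routes = [
    {"name": "A", "minutes": 22, "reason": "fastest; light highway traffic"},
    {"name": "B", "minutes": 26, "reason": "avoids tolls; moderate traffic"},
    {"name": "C", "minutes": 31, "reason": "scenic; construction adds delay"},
]

def best_route(routes):
    """Opaque: return only the top answer, with no rationale."""
    return min(routes, key=lambda r: r["minutes"])["name"]

def ranked_routes(routes):
    """Transparent: return every option and why it ranks where it does."""
    return [(r["name"], r["minutes"], r["reason"])
            for r in sorted(routes, key=lambda r: r["minutes"])]

print(best_route(routes))  # 'A' -- take it or leave it
for name, minutes, reason in ranked_routes(routes):
    print(f"Route {name}: {minutes} min ({reason})")
```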

Related Article: 6 Ways AI Is Improving the Digital Workplace

Always Back to the Data

Chris Hamoen, co-founder and CEO of CRM startup Account HQ, shared one final thought worth keeping in mind with respect to AI and processes. At the moment, most enterprise use of AI amounts to rebranded workflow processes (for example, lead scoring). AI may be set for prime time, yet “there is a dirty little secret, particularly in CRM, which is that the data is 75 percent bad. Eighty percent of opportunities have no activities recorded against them at all. Without this data, companies and vendors are learning that they must address this massive data problem first before they can further adopt AI and machine learning.”
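
Measuring that data gap is straightforward, and it is arguably the first step before layering machine learning on a CRM. Below is a minimal sketch, assuming a simple, hypothetical opportunities-and-activities schema:

```python
# A minimal CRM data-quality audit: what share of opportunities
# have no activities recorded against them? Schema is hypothetical.
import pandas as pd

opportunities = pd.DataFrame({"opp_id": [1, 2, 3, 4, 5]})
activities = pd.DataFrame({"opp_id": [2, 2, 5]})  # calls, emails, meetings

logged = set(activities["opp_id"])
missing = opportunities[~opportunities["opp_id"].isin(logged)]
share_missing = len(missing) / len(opportunities)

print(f"{share_missing:.0%} of opportunities have no recorded activity")
# With gaps like the 80 percent Hamoen cites, fixing data capture
# comes before any machine learning layered on top.
```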