
A recently published survey of Chief Information Officers (CIOs) and their spending priorities over the next year found that, despite all the hype, artificial intelligence (AI) is still in an early stage of development. The 2018 CIO Agenda Survey showed that only four percent of CIOs have actually implemented AI technologies, though a further 46 percent say they have plans to do so. From this, we can take it that AI, despite the hype, is still very much an emerging technology, with early adopters still struggling with a wide range of issues that have been well documented previously.

What is clear is that AI has huge potential. "Despite huge levels of interest in AI technologies, current implementations remain at quite low levels. However, there is potential for strong growth as CIOs begin piloting AI programs through a combination of buy, build and outsource efforts," said Whit Andrews, research vice president and analyst at Gartner, in a recent statement. He also warns enterprises that they should expect the development of their AI projects to be slow and that initial experiments will, at best, produce blueprints for the future.

While he recommends that initial investments should only be in the thousands, or tens of thousands, of dollars, he shies away from recommending that enterprises think twice before deploying artificial intelligence. And while AI is emerging as a product differentiator, there are a number of reasons why enterprise managers need to sit down with their CIOs or IT departments to discuss whether AI will really meet business objectives and whether they should follow prevailing trends and invest in it. Here are nine reasons why businesses might decide to hold off on AI implementation.


1. Lack Of Definition and Strategic Business Objectives

"Everyone's doing it" is never a reason to follow a trend. Go through the requisite line of questioning in your industry, and know how your competitors are positioning themselves. How does AI really apply to your industry? What are your competitors doing, and not doing yet? What have they failed at and succeeded at regarding AI? Finally, ask yourself: How can AI push my business or service into the future and differentiate me from my competitors?

Antonis Papatsaras is CTO at Chicago-based SpringCM. He believes that many enterprises are moving into AI without identifying what problems they are trying to solve. A major pitfall of AI adoption, he says, is that many organizations rush to automate tasks before thinking through the ramifications and defining the very task that is being automated. However, he also believes AI is all but inevitable. "While I believe all businesses will eventually come to use AI in some aspect, in order for an algorithm to be trained to 'think,' there needs to be a wide data set of situations and results for the algorithm to take into account. For the decision-making process to become successful, there needs to be well-defined outputs and results or the margin of error grows," he said.

Chris Belli, VP of Marketing and Business Development at Indianapolis-based Studio Science, said that before deploying AI, companies should ask whether adopting it aligns with their growth strategy. Business leaders need to ask how it adds value, whether through improving customer experience, differentiating an organization from competitors or enabling better long-term, data-driven decisions. "The expectations around AI must be clear," he said. "Think about which new markets AI will help you gain. How much money is in those markets, and would AI solve problems that currently are intractable?"

Finally, realize that all new technology comes at a cost. What do you expect your ROI to be, and how long will it take to see a return?


2. Not Fully Understanding the Risks Involved

Papatsaras also thinks that AI should not be applied to a process until the risks and margin of error are fully understood. When there is a critical safety aspect and human lives are at stake, as with autonomous cars, the technology needs to be fully developed before it can be put to use. As a result, it is important that organizations start using AI in well-defined processes that are highly repetitive, such as contract or document management.

3. Humans Want to Talk To Humans

Voxpro, based in Cork, Ireland, develops customer experience, technical support and sales operations solutions. Its CEO, Dan Kiley, believes that for enterprises looking to deliver top customer experiences, AI may not be the best option. He points out that while bots, AI and machine learning complement many aspects of the customer experience, enriching insights, improving processes and increasing personalization, to name but a few, humans will still primarily want to engage with humans in many situations. "Psychological studies have shown that we have a preference for communicating with others person-to-person despite all of the communication technology at our fingertips," he said.

He adds that in today's "attention economy," where people's attention is treated as a scarce resource, customers want their issues solved as quickly and comprehensively as possible. While bots can handle the basic issues by themselves, more complex ones must be solved by humans.

4. AI Can Lead to a Lack of Emotional Empathy

For Keri Lindenmuth, marketing manager with the Whitehall, Pa.-based web development firm Kyle David Group, the loss of human contact also results in a lack of emotional empathy, one of the key qualities of good salespeople. She argues that one of the side effects of AI is a gradual loss of human interaction and emotion. For example, many businesses are turning to AI chatbots for online customer service communication. While this is efficient in that it frees up employees and gives them more time to work on other tasks, these bots cannot understand emotions or show sympathy the way a human being can.

She points out that understanding customer emotions is a leading facet of providing a better customer experience. "If this is lost, what will happen to the customer experience businesses can provide?" she asked.

5. Technology Isn't Mature Enough

Harrison Brady is a communications specialist with Norwalk, Conn.-based Frontier Communications. He said that becoming too obsessed with AI is dangerous, pointing to the fact that in Las Vegas, America's first self-driving bus crashed within two hours of its launch. The damage was minor and nobody was hurt, but he says there are lessons to be learned. He argues that we are so eager to press forward with AI technology that we are willing to put our lives at risk with products that simply aren't ready for the market. "If we don't take the appropriate precautions now, AI has the power to ruin, or in the case of a self-driving vehicle, end our lives. Elon Musk warned that AI is a greater danger than North Korea, and Stephen Hawking said that AI could be the 'worst event' in the history of our civilization. And yet, we step onto the self-driving bus and enthusiastically wait to see what happens," he said.

6. AI Can't Make Moral Choices

Pavel Cherkashin is managing partner of San Francisco-based GVA Capital, which finances startups to help them grow. He says AI should be avoided when decision making involves human morality and responsibility for human lives in an uncertain situation. For example, a military drone should not be able to make the decision to shoot. It should only do so after receiving a command from a human, as AI is not yet able to weigh all the factors necessary to make this kind of decision, and likely will not be for at least a couple of decades.

7. You Have Inadequate Data to Support AI Decision Making

AI should not be used in situations where the data volume is not big enough to support decision making. In those cases, humans should be involved. For example, a vehicle management system should not be making decisions if it suddenly starts snowing in Palo Alto. Car systems are not "trained" to handle such an extraordinary situation; there is no data they can rely on when making a decision, so they simply do not know what to do.