PHOTO: Zoe Holling

Most organizations fail at artificial intelligence (AI) implementations, in the workplace or elsewhere, because they lack the skills, staff and resources, and may hold unrealistic expectations, according to an IDC survey. It's not for lack of trying. IDC found that more than 60% of organizations reported changes to their business model in connection with their AI adoption, and nearly half have a formalized framework to encourage consideration of ethical use, potential bias risks and trust implications. Further, a quarter of organizations have established a senior management position to make sure they get things right.

“AI can create better outcomes to highly variable problems by constantly changing the rules,” said Mike Orr, COO of Grapevine6. “The challenge for enterprises is, ‘How do we know it’s working?’ This is an important question when even a small failure can mean something as serious as introducing institutional bias or regulatory violations.”

Organizations need a good AI defense strategy regardless of where AI is implemented, be it marketing, customer experience, employee experience or the digital workplace. In other words, brace for the things that can go wrong in your implementation planning.

So how do you get that done? Here are a few quick tips if you're planning an AI implementation strategy for 2020.

Know Your Vendor

The first step may be obvious: work with solid vendors. How can you ensure that? Work with vendors with expertise and experience, Orr said. “The more clients they work with the better because you multiply the number of people thinking about risk and finding it,” he said.

When it comes to using AI to improve your customer experience, for instance, ask your technology vendors where they think things can go wrong and what they have done to prevent it, according to Wayne Coburn, principal product manager at Iterable. “Every AI carries risk,” Coburn said, “and if your vendor can't describe what their risks are and what their mitigation strategy is, then maybe it's time to find a new vendor.” 

Related Article: AI Bias: When Algorithms Go Bad

Build an AI Team That Audits the Program

Do your homework, Orr added. Put together a cross-functional risk management team that creates a test plan and then tests extensively with real data, he said. “Trust, but verify,” Orr said. “Commit resources to periodically audit the outcomes, which may include random and directed sampling and adding monitors that live outside of the AI. Case in point: ask candidates or customers if they felt the outcomes were fair.” 
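Orr's "trust, but verify" audit loop can be sketched in code. The following is a minimal, hypothetical illustration (the function name, record fields and thresholds are assumptions, not anything Orr or Grapevine6 describes): it pulls both a random sample and a directed sample of logged AI decisions into a queue for human review.

```python
import random

def sample_for_audit(decisions, random_n=20, directed_filter=None):
    """Pick logged AI decisions for human review: a random slice,
    plus a directed slice matching a reviewer-chosen condition."""
    picked = random.sample(decisions, min(random_n, len(decisions)))
    if directed_filter is not None:
        picked += [d for d in decisions if directed_filter(d)]
    # De-duplicate while preserving order
    seen, audit_queue = set(), []
    for d in picked:
        if id(d) not in seen:
            seen.add(id(d))
            audit_queue.append(d)
    return audit_queue

# Hypothetical log: audit all denials for high-income applicants,
# plus 10 decisions chosen at random
decisions = [{"id": i, "approved": i % 3 == 0, "income": i * 1000}
             for i in range(100)]
queue = sample_for_audit(
    decisions, random_n=10,
    directed_filter=lambda d: not d["approved"] and d["income"] > 80000)
```

The directed filter is where the "monitors that live outside of the AI" idea shows up: the audit criteria are written by people, against outcomes, with no dependence on the model's internals.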

Ensure Models Are Aligned to Company Objectives

Measurement is important because AI will have the same failings as people and sometimes over-respond to incentives, according to Orr. Periodically step back and think holistically about the outcomes to ensure the AI models are aligned to a company's objectives. "In many ways this is similar to managing people in any organization," Orr said. "You need to align on objectives, make sure incentives lead to desired outcomes and provide continuous feedback to your employees."
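One way to make that periodic check concrete is to compare observed outcome metrics against stated business objectives. The sketch below is a hypothetical example (the metric names, goals and tolerances are invented for illustration): the AI may be optimizing click-through, but the business also tracks a parity metric, and the check flags any metric that drifts past its tolerance.

```python
def check_alignment(outcomes, objectives):
    """Compare observed outcome metrics against target objectives;
    return any metric that drifted past its tolerance."""
    misaligned = []
    for name, target in objectives.items():
        observed = outcomes.get(name)
        if observed is None:
            misaligned.append((name, "no data"))
        elif abs(observed - target["goal"]) > target["tolerance"]:
            misaligned.append((name, observed))
    return misaligned

# Hypothetical objectives: the model chases click-through, but the
# company also cares that approval rates stay comparable across groups
objectives = {
    "click_through": {"goal": 0.05, "tolerance": 0.02},
    "approval_parity": {"goal": 1.0, "tolerance": 0.1},
}
outcomes = {"click_through": 0.06, "approval_parity": 0.7}
alerts = check_alignment(outcomes, objectives)
```

Here click-through sits inside tolerance, so the only alert is the parity metric, which is exactly the "over-responding to incentives" failure mode Orr describes: the incentivized metric looks healthy while a non-incentivized one degrades.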

Related Article: Lessons Learned from a Chatbot Failure

Understand All AI Decisions

When adopting AI technologies, it is critical that an organization is able to understand why every decision made by the AI was reached, according to Laurence Hart, director at TeraThink. “A black-box approach is not acceptable,” Hart said. “For example, having an AI automatically approve loans, and thus denying others, is OK if you can point confidently at the factors that led to that decision. This allows organizations to defend their actions and understand when, and how, the AI makes mistakes. Remember, your AI will make mistakes. The questions are how large, how soon and how will you respond to those mistakes.”
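One common way to avoid the black box Hart warns about is to use an interpretable scoring model whose per-factor contributions can be reported alongside each decision. The sketch below is an assumption-laden illustration, not Hart's or TeraThink's method: the weights are made up, and a real system would learn them from data.

```python
def score_loan(applicant, weights, threshold=0.5):
    """Score a loan application with an interpretable linear model,
    returning the decision plus each factor's contribution."""
    contributions = {f: weights[f] * applicant.get(f, 0.0) for f in weights}
    total = sum(contributions.values())
    return {
        "approved": total >= threshold,
        "score": total,
        # Factors ranked by absolute impact, so the decision is explainable
        "factors": sorted(contributions.items(), key=lambda kv: -abs(kv[1])),
    }

# Hypothetical, hand-picked weights on normalized inputs
weights = {"income_norm": 0.4, "credit_norm": 0.5, "debt_ratio": -0.6}
result = score_loan(
    {"income_norm": 0.8, "credit_norm": 0.9, "debt_ratio": 0.3}, weights)
```

Because every approval or denial comes with a ranked list of contributing factors, the organization can "point confidently at the factors that led to that decision" and spot exactly where the model went wrong when it inevitably does.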

Address Ethical Questions Immediately

RJ Talyor, CEO and founder of Pattern89, said one of the quickest ways to let your guard down around AI implementations is failing to prepare for the ethical questions that will inevitably arise. Start addressing the ethical questions, and how you will manage that process, right off the bat, Talyor told CMSWire.

Related Article: Is it Time for Your Organization to Form an AI Ethics Committee?

Build a Diverse Team

When planning for AI, you need a diverse team to implement and run it, Hart said. While organizations run better with diverse teams, he added, it is even more critical for AI projects. “Bias creeps into every action a person takes,” Hart said. “Diversity helps to remove any negative biases, unintentional or not, from the AI process. Organizations that don't focus on diversity are risking codifying negative biases into everything the organization does in the future.”