Earlier this year Microsoft announced that it was introducing AI into its ubiquitous spreadsheet application Excel. It is an exciting development for both Microsoft and Excel users, but Diane Robinette, CEO of Incisive Software, is wary of the risks involved. “AI is a good thing. But it can’t just be used ‘blindly’, which the average user will do — they will click on that button and be thrilled with the new charts Excel is providing them, without understanding the possible implications.” Among other things, users are relying on Microsoft’s models and assumptions, and many will not bother to wonder if, say, the model is complete or how it handles exceptions, she said.
What Microsoft is doing with Excel and AI is hardly on par with other AI projects, such as IBM’s Watson. And yet its example — and Robinette’s concerns — is telling for far more sophisticated projects. In short, after your AI system has generated results, you still need a plan for how to use them. AI needs to be actionable, said Aman Naimat, senior vice president of Engineering and Technology at Demandbase. “While businesses are adopting AI quickly, it is unwise to simply apply AI and a data scientist on top of a business problem,” he said. “AI needs to involve action and be incorporated into a larger workflow. Without action, you’ll drown in a sea of data.”
First Understand the Business Problem
Sometimes it's easy to get caught up in the excitement of a new technology. Make sure your organization isn't implementing AI because it seems cool, or just to be able to say it has. Have a clear business case and understand the problem you are trying to solve.
“Not having a clear plan for what business problem AI is solving is what most often leads to scenarios where the results are ‘bad’ or not usable,” said Sajid Mohamedy, chief strategy officer of NoiseGrasp.
Be Forward Thinking
Decide in advance what decisions will be made based on the findings, said John Whalen, partner at Brilliant Experience — and don't deviate from those decisions once the results are ready. “If you get evidence, but choose to ignore it or have it help augment cognition, why even build it in the first place?”
According to Whalen, another action item to complete before starting is deciding on a definition of “done.” Such advice, though, can fall into the category of easier said than done. To help readers navigate this particular piece of the AI journey, we have assembled this advice from experts.
Optimized and Explainable
Make sure an AI-driven application is optimized and explainable before it is integrated into business processes, said Sheldon Fernandez, CEO of DarwinAI. Optimized applications use less compute power and require less space, resulting in lower operating expenses and often allowing for the computing resources to stay contained within a company’s infrastructure or on a consumer device, he said.
For the uninitiated, explainable AI is a concept gaining traction among researchers in which AI systems and how they come to their decisions are made more transparent to users. “The explainability of an AI application is equally important as organizations must be able to justify the decisions made by AI applications, especially in heavily regulated industries such as banking,” Fernandez said. “Without a way to trace decision-making processes and determine the specific data and analysis used to reach a particular conclusion, implementing AI-derived results poses a variety of regulatory, legal and ethical challenges.”
Appoint an Internal Person to Ensure Quality AI Results
Ask yourself, ‘What is the impact if the AI gets this wrong 1% of the time, 10% of the time or 100% of the time?’ said Andrew Konya, CEO and co-founder of Remesh. An application is considered critical if the AI getting it wrong even 1% of the time could have a large negative impact.
On the other hand, an application is “normal” if it can tolerate the AI getting it wrong 10% of the time. In the case of normal applications, it is acceptable to trust the software vendor or internal group that built the algorithm, Konya said. “Basically you trust the software vendor to ensure the AI’s quality.” In the case of critical applications, he advises having an internal person responsible for the AI’s quality of results.
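Konya's rule of thumb can be sketched as a simple expected-cost triage. To be clear, the function names, thresholds and dollar figures below are illustrative assumptions for this article, not anything published by Konya or Remesh:

```python
def expected_error_cost(error_rate: float, cost_per_error: float, volume: int) -> float:
    """Rough expected cost of AI mistakes over a batch of decisions.

    error_rate     -- fraction of decisions the AI gets wrong (e.g. 0.01 for 1%)
    cost_per_error -- estimated cost of a single wrong decision
    volume         -- number of decisions the AI makes in the period
    """
    return error_rate * cost_per_error * volume


def oversight_level(expected_cost: float, threshold: float) -> str:
    """Classify the application per the critical/normal distinction.

    'critical' -> appoint an internal person to own result quality.
    'normal'   -> trusting the vendor's or internal team's QA is acceptable.
    The threshold is a business judgment call, not a fixed number.
    """
    return "critical" if expected_cost >= threshold else "normal"


# Hypothetical example: a 1% error rate on 10,000 decisions at $5,000 per
# mistake is an expected $500,000 of damage -- clearly a critical application.
cost = expected_error_cost(error_rate=0.01, cost_per_error=5000.0, volume=10000)
print(cost, oversight_level(cost, threshold=100000.0))
```

The point of the sketch is that criticality is a function of impact, not just error rate: a 10% error rate on low-stakes decisions can still be "normal," while a 1% error rate on high-stakes ones is not.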
“AI is not a magic bullet — it’s unwise to expect it to solve an organization’s highest-value strategic problems from the jump,” said NoiseGrasp’s Mohamedy. “Begin by identifying small, yet impactful business challenges where AI can be applied and iterate from there.” This may seem obvious now, but Mohamedy said people often forget it amid the rigors of planning and implementing AI.
Consider How All Stakeholders Might Use Data
Often there is one group, such as a product owner, that might use the findings but there might also be other groups such as marketing and sales that could benefit, said Brilliant Experience’s Whalen. “Those crafting the AI system may be able to structure the outputs such that the findings are relevant to a wider audience,” he said.