Singapore-based technology company Tenqyu combines augmented reality, location intelligence, urban big data and real-time streaming APIs to build urban experiences. It invests heavily in automation and machine learning and has an in-depth understanding of the technology, said CEO Jan Semrau. But when the company built a Twitter bot on some of this technology, there were unintended side effects, he said, and the bot began recommending events “left and right with sometimes hilarious outcomes.” The bot was eventually banned by Twitter, which was not so funny, but Semrau learned his lesson.

“In all machine learning and/or AI implementations, the forecast, decision and classification should be accompanied by a score and an evaluation as to how much the score can be trusted. These values need to be continually monitored to ensure that the trained model is still performing within defined parameters and has not experienced a significant population shift leading to an over- or under-fitting of the model,” he advised.
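The population-shift monitoring Semrau describes is commonly done with a drift statistic such as the Population Stability Index (PSI), which compares the distribution of live model scores against a training-time baseline. The sketch below is ours, not Tenqyu's; the function name, data and alert threshold are illustrative assumptions.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Population Stability Index: measures distribution shift between a
    training-time baseline and live scoring data. A common rule of thumb:
    PSI < 0.1 is stable, 0.1-0.25 is moderate shift, > 0.25 is significant."""
    # Bin edges come from the baseline's deciles, widened to cover all values
    cuts = np.percentile(expected, np.linspace(0, 100, bins + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf
    e_frac = np.histogram(expected, cuts)[0] / len(expected)
    a_frac = np.histogram(actual, cuts)[0] / len(actual)
    # Clip to avoid log(0) on empty bins
    e_frac = np.clip(e_frac, 1e-6, None)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

# Hypothetical scores: the live population has drifted upward
rng = np.random.default_rng(0)
baseline = rng.normal(0.5, 0.1, 10_000)   # scores at training time
live = rng.normal(0.65, 0.1, 10_000)      # shifted live scores
psi = population_stability_index(baseline, live)
if psi > 0.25:
    print(f"PSI={psi:.2f}: significant population shift, review or retrain the model")
```

Monitoring like this, run on a schedule against recent predictions, is one way to catch a model drifting out of its "defined parameters" before it starts recommending events left and right.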
AI Imperfections Can Be Addressed
Tenqyu’s experiences show that you don’t have to be a Microsoft or Flickr to be embarrassed by unintended results from an AI deployment. They also show that oftentimes these issues are solvable, either in hindsight or during the planning process — preferably the latter. There has been much written about the necessity of training an algorithm against unbiased data and that clearly is an important first step. But there are also others a company can take to make sure an AI implementation does not wind up embarrassing a brand.
“One of the pitfalls of AI and machine learning algorithms is that they have an innate tendency to perpetuate and amplify biases present in the data and the society at large,” said Niranjan Krishnan, head of Data Science practice at Tiger Analytics. “Untrammeled application of AI sometimes leads to unintended consequences for companies. These consequences include business actions going against the professed values of the company, severe damage to the brand, breach of compliance norms and expensive legal violations.”
Or sometimes, just faulty results. For example, Krishnan said, direct-to-consumer marketing companies use AI and machine learning to identify the most promising pockets of customers in the larger population for their product marketing campaigns. “However, relying on autopilot AI algorithms could lead to differential treatment of parameters like race, gender and education,” he said.
Related Article: Why the Benefits of Artificial Intelligence Outweigh the Risks
Mitigating AI's Unintended Consequences
Besides ensuring a data set is clean, there are other measures a company can take to keep an AI implementation from embarrassing its brand. Krishnan offered a few tips on how to limit the unintended consequences of AI.
Review Internal Data Transformations of AI Algorithms
“This is crucial because AI algorithms are pretty good at picking up proxies for sensitive parameters like race or gender (e.g., zip code, language, religious affiliation). It may be necessary to explicitly screen such proxy parameters out as well. AI algorithms are also good at creating their own implicit proxies through data transformations,” he said.
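One simple screen of the kind Krishnan describes is to measure how strongly each candidate feature tracks a protected attribute before training. The sketch below uses plain correlation as a crude first pass; the data, column names and threshold are our illustrative assumptions, and a stronger check would try to predict the protected attribute from each feature.

```python
import numpy as np
import pandas as pd

def flag_proxy_features(df, protected_col, threshold=0.5):
    """Flag numeric features that correlate strongly with a protected
    attribute and may act as proxies for it in a trained model."""
    flagged = {}
    for col in df.columns:
        if col == protected_col:
            continue
        corr = abs(df[col].corr(df[protected_col]))
        if corr >= threshold:
            flagged[col] = round(corr, 2)
    return flagged

# Hypothetical data: zip_code_income tracks the protected attribute closely,
# page_views does not
rng = np.random.default_rng(1)
protected = rng.integers(0, 2, 1000)
data = pd.DataFrame({
    "protected_group": protected,
    "zip_code_income": protected * 30_000 + rng.normal(50_000, 5_000, 1000),
    "page_views": rng.poisson(20, 1000),
})
print(flag_proxy_features(data, "protected_group"))
```

Correlation only catches linear, single-feature proxies; the "implicit proxies through data transformations" Krishnan warns about require auditing the model's learned features as well, not just the raw inputs.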
Improve Algorithm Transparency
Improve the transparency of AI algorithms for the business. AI and machine learning involve many “black box” techniques whose internal workings are comprehensible only to data scientists, Krishnan said. “Data scientists need to take on the onus of demystifying the algorithms and their mechanics for the business so that any blind spots can be identified.”
Related Article: 8 Examples of Artificial Intelligence (AI) in the Workplace
Test/Evaluate Impact on Consumer
Do a thorough ‘paper test’ of AI algorithms before deploying them on real customers. While companies carry out tests and simulations with their algorithms before deployment, they usually evaluate them only from an ROI angle, Krishnan said. “It is helpful to also evaluate whether they produce a disparate impact on consumers with undesirable side effects.”
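A paper test for disparate impact can be as simple as comparing selection rates across demographic groups on the simulated campaign output. The sketch below computes the selection-rate ratio and applies the informal "four-fifths rule" from US employment-selection guidance; the data and threshold are our illustrative assumptions, not Krishnan's procedure.

```python
import numpy as np

def disparate_impact_ratio(selected, group):
    """Ratio of the lowest group selection rate to the highest.
    The informal 'four-fifths rule' flags ratios below 0.8."""
    rates = {g: selected[group == g].mean() for g in np.unique(group)}
    return min(rates.values()) / max(rates.values())

# Hypothetical paper-test output: the model's scores quietly favor group 1
rng = np.random.default_rng(7)
group = rng.integers(0, 2, 5000)                 # 0/1 demographic group
score = rng.uniform(0, 1, 5000) + 0.15 * group   # biased model scores
selected = (score > 0.6).astype(int)             # campaign targeting decision

ratio = disparate_impact_ratio(selected, group)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("fails four-fifths screen: review targeting before launch")
```

Running a check like this alongside the ROI simulation costs little and surfaces exactly the "differential treatment" problem Krishnan describes before any real customer sees the campaign.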
Have Legal Review
Have data scientists review their algorithms with internal legal/compliance teams. “Yes, this is often hard, slow and painful, but it is immensely helpful in the long run. Having the data scientists open up their AI black box and lay it bare before the legal team not only helps them address the concerns raised but also holds them accountable for the decisions made by their algorithms,” Krishnan said.