Modern companies are driven by revenue growth, and the optics are terrible. While businesses talk about embracing and “knowing” their customers, the never-ending quest for profit growth keeps them constantly in sales-first mode, which feels creepy and one-sided. So they’re starting to use artificial intelligence (AI) to bridge the customer gap: mining data, using the insights to understand customer needs, and then better connecting with the individuals they serve. Ironically, they are using AI to become more “human.”

At least that’s the intent. But AI implementations are complex and tricky. It’s easy to introduce unexpected risks, like in these high-profile examples:

  • Microsoft Tay: After launch, Microsoft’s Tay Twitterbot quickly gained 50,000 followers and generated more than 100,000 tweets … but after 24 hours of machine-learning, Tay had turned into an angry, raging anti-Semite and had to be taken offline.
  • COMPAS Recidivism Algorithm: COMPAS, software widely used in the US to guide criminal sentencing, was exposed by ProPublica as racially biased: Black defendants were nearly twice as likely as white defendants to be wrongly flagged as high risk.
  • Apple Card Bias: Tech magnates David Heinemeier Hansson and Steve Wozniak publicly called out Apple for discrimination when their spouses were offered credit limits an order of magnitude lower than their own — despite a shared banking and tax history.
  • Facebook Campaign Ads: After refusing to police the truthfulness of political ads served on its $70 billion AI-driven network, Facebook sparked a public outcry that charged the company was putting profits far ahead of people and democracy.

Companies have to be accountable for the actions of their AI, or risk damaging their credibility, brand reputations and bottom lines. They have an obligation to develop their AI responsibly, by focusing on the concepts of empathy, fairness, transparency and accountability.

Acting With Empathy

In Microsoft’s case, it was Tay’s lack of empathy that caused the problem: the bot was not set up to understand the societal implications of how it was responding. There were no guardrails in place to define the boundaries of what was “OK” and what might be hurtful to the audience interacting with Tay.

This is critical with AI. It has to understand not only what’s relevant to the audience, but what is suitable for that audience in that context. It’s the AI developer’s responsibility to define those rules and provide guardrails for the AI as it learns.
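
To make this concrete, below is a minimal sketch of what such an output guardrail for a chatbot might look like. The blocked-topic list, keyword-based classifier and fallback reply are hypothetical stand-ins (a production system would use a trained moderation model plus human review), but the shape of the check is the same: every draft response is evaluated against defined boundaries before it is published.

```python
# Minimal sketch of an output guardrail for a learning chatbot.
# The topics, keywords and replies below are hypothetical stand-ins;
# a real system would rely on a trained moderation model and human review.

BLOCKED_TOPICS = {"hate speech", "harassment", "violence"}


def classify_topics(text: str) -> set:
    """Toy stand-in for a moderation classifier: flags text that contains
    obviously hostile keywords."""
    hostile_keywords = {"hate", "kill", "attack"}
    if any(word in text.lower() for word in hostile_keywords):
        return {"hate speech"}
    return set()


def guarded_reply(draft_reply: str,
                  fallback: str = "Sorry, I can't respond to that.") -> str:
    """Publish a reply only if it clears the moderation check."""
    flagged = classify_topics(draft_reply) & BLOCKED_TOPICS
    return fallback if flagged else draft_reply


if __name__ == "__main__":
    print(guarded_reply("Happy to help you plan your trip!"))  # passes through
    print(guarded_reply("I hate everyone in that group."))     # replaced by fallback
```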

Related Article: IBM and Microsoft Sign 'Rome Call for AI Ethics': What Happens Next?

Reducing or Eliminating Bias

AI algorithms will make decisions based on all the data at their disposal. In the case of COMPAS, the developers weren’t intending to build a racist AI: the bias in its predictions reflected the bias that exists within the justice and sentencing system itself. Building a fair AI requires a focused effort to avoid that kind of unfairness, because even with the best of intentions, innocent-looking data may be correlated with protected variables, like gender and age, and introduce problems. So companies also have to vet their AI training data and evaluate the impact of their models as they’re used in the real world, to catch bias that might have been unintentionally introduced earlier in the process. This becomes especially important as teams integrate machine learning: those algorithms adjust themselves rapidly, which can further mask the problem.
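
One lightweight way to catch that kind of drift, sketched below, is to routinely compare error rates across groups on held-out data; the disparity ProPublica reported for COMPAS was, at its core, a gap in false positive rates. The records, group labels and field names here are made-up illustration data, not a real evaluation pipeline.

```python
# Minimal sketch of a fairness check on held-out predictions: compare
# false positive rates across groups. All records below are made up.

from collections import defaultdict


def false_positive_rate_by_group(records):
    """records: iterable of dicts with 'group', 'predicted_high_risk', 'reoffended'."""
    false_positives = defaultdict(int)  # predicted high risk, but did not reoffend
    negatives = defaultdict(int)        # everyone who did not reoffend
    for r in records:
        if not r["reoffended"]:
            negatives[r["group"]] += 1
            if r["predicted_high_risk"]:
                false_positives[r["group"]] += 1
    return {g: false_positives[g] / n for g, n in negatives.items() if n}


if __name__ == "__main__":
    sample = [
        {"group": "A", "predicted_high_risk": True,  "reoffended": False},
        {"group": "A", "predicted_high_risk": False, "reoffended": False},
        {"group": "B", "predicted_high_risk": False, "reoffended": False},
        {"group": "B", "predicted_high_risk": False, "reoffended": False},
    ]
    print(false_positive_rate_by_group(sample))
    # A large gap between groups is a red flag worth investigating.
```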

Related Article: Responsible AI Moves Into Focus at Microsoft's Data Science and Law Forum

Providing Transparency

With all the negative publicity, it can be difficult to convince consumers that AI is being applied responsibly. The issue in the Apple Card scandal wasn’t necessarily that Apple’s decision-making was biased; it was that the decision was so “black box” that Apple customer service was unsure how to answer the customers’ heated questions, and the exchange went viral. Companies have to be proactive about certifying their algorithms, clearly communicating their policies on bias, and providing a clear explanation of why decisions were made when there’s a problem. They also should consider using transparent and explainable algorithms for regulated or higher-risk use cases like credit risk. And they must make it as easy as possible for frontline employees to communicate the rationale to customers, without compromising proprietary information.
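
As an illustration, below is a minimal sketch of how a transparent, scorecard-style model can produce per-decision “reason codes” that a frontline employee could relay to a customer. The features, weights and approval threshold are hypothetical; a real credit model would be trained and validated, but the principle is the same: each factor’s contribution to the score can be named.

```python
# Minimal sketch of per-decision "reason codes" from a transparent,
# scorecard-style credit model. Features, weights and threshold are hypothetical.

WEIGHTS = {
    "payment_history_score": 0.6,
    "utilization_penalty": -0.8,
    "income_to_debt_score": 0.5,
}
APPROVAL_THRESHOLD = 0.5  # hypothetical cutoff


def score_with_reasons(applicant: dict):
    """Return a decision, the score, and the factors that hurt the score most."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    score = sum(contributions.values())
    decision = "approve" if score >= APPROVAL_THRESHOLD else "decline"
    reasons = [name for name, _ in sorted(contributions.items(), key=lambda kv: kv[1])[:2]]
    return decision, score, reasons


if __name__ == "__main__":
    applicant = {
        "payment_history_score": 0.9,
        "utilization_penalty": 0.7,
        "income_to_debt_score": 0.4,
    }
    decision, score, reasons = score_with_reasons(applicant)
    print(decision, round(score, 2), reasons)
    # e.g. decline 0.18 ['utilization_penalty', 'income_to_debt_score']
```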

Related Article: The Next Frontier for IT: AI Ethics 

Establishing Accountability

Facebook is taking heat because it has used its AI to establish massive influence over consumers but has refused to hold itself accountable for the quality and accuracy of the information shown in its ads, introducing a significant risk of abuse. That’s scary for consumers, because regulation around technology issues is always at least a few years behind the problem, so simple regulatory compliance just isn’t good enough. Instead, companies must proactively establish and hold themselves accountable to higher standards, and balance the great power AI gives them with the larger responsibility of sustaining relationships with the customers they serve.

Focusing on those four foundations of responsible AI — empathy, fairness, transparency, and accountability — will not only benefit customers, it will differentiate any organization from its competitors and help generate a significant financial return.
