Establishing ethical use of artificial intelligence

Artificial intelligence is here, and to be successful and effective, it needs empathy for the audience it serves. Organizations using AI in customer- or employee-facing programs need to understand and account for the technology’s biases.

John Mattison, MD, CIO of Kaiser Permanente, shared those thoughts with an audience last week during his session at AI World Boston. Organizations can do this, he said, by understanding how the biases of “carbon intelligence” and “silicon intelligence” (human vs. machine intelligence) mesh, so that we can "ultimately use exponential technology in ways that help people."

With AI Come Ethics Challenges

Ethics in AI is top of mind for many organizations. IBM, for one, this year released its “Building Trust in AI” report, in which researchers found that AI will “require a significant effort to instill in it a sense of morality, operate in full transparency and provide education about the opportunities it will create for business and consumers.”

This mandate is especially significant for marketers, who are using AI in their marketing campaigns more than ever. According to an Adobe survey, top-performing companies were more than twice as likely to use AI for marketing as their peers (28 percent vs. 12 percent). The U.S. Department of Defense is doubling down on AI, too, with plans to invest $2 billion over the next five years in new AI programs.

Related Article: The Next Frontier for IT: AI Ethics

What Can Go Wrong With AI?

Why should marketers care about ethics in AI deployments? The American Marketing Association pointed out some examples of where AI bias can rear its ugly head. AMA officials describe these biases as prejudices that stem from the data fed to AI algorithms.

  • A search for “gymnast” in Google’s image search reveals mostly women in the top results. The same goes for “nurse,” while “parents” shows mostly heterosexual couples. 
  • Joy Buolamwini, a researcher at the Massachusetts Institute of Technology Media Lab, found that gender-recognition AI from IBM, Microsoft and Megvii identified a person’s gender from a photograph 99 percent of the time, so long as the photos were of white men. 
  • A Palestinian man was arrested in Israel in 2017 because, after he posted a photo of himself on Facebook posing near a bulldozer, the social platform’s automatic translation software wrongly interpreted his caption to say “attack them.” He really said, “good morning.”

Asking Ethical Questions Before Marketing Roll-outs

There may be hope. Kristina Podnar, a digital governance advisor, said she’s seeing more organizations improve their marketing processes by asking questions as part of the campaign process. Does the campaign require sacrificing any ethical principles or any aspect of our corporate values? Have we reviewed the credibility of sources that we engage for the campaign? And of course, have we considered the benefits and the risks of what we are doing?

Podnar pushes for thoughtful digital governance in her industry talks and with clients. Nikhil Bhatia, Riversand's senior director of product management, seconds that notion, adding, “One way to safeguard sensitive data is to have a good data governance structure in place. Ownership and accountability,” he added, “should be clear for various stakeholders as data changes hands at different stages of each workflow. This is particularly important given the wide circulation of data that will be inevitable in machine learning and AI projects.”

Related Article: Artificial Intelligence Threats and Promises

Understanding Assumptions in Exploratory Data Analysis

Where there are questions about ethics in marketing, there will be questions about analytics and data collection practices. Pierre DeBois, CEO and founder of Zimana Analytics, said marketers and companies should ensure they have statements in their contracts about misuse of material and misrepresentation of content purpose. “Metrics on their own are harmless,” DeBois said. “It's how they are combined that creates the ethics concern.”

Website metrics are not just about the website, DeBois added. “They tie back into a business strategy or how it operates,” he said. “Most business owners, be it small or large, work with solid ethics but analytics firms have to be aware of misguided efforts, especially with teams being remote.”

For advanced analytics, the deeper concern lies in where data is sourced, according to DeBois. Most analytic solutions are designed to be diagnostic, he said. This means their capability in merging data together dictates if those systems identify an individual rather than a persona. “It's the reason why marketers must understand the assumption used in exploratory data analysis and other data cleansing,” DeBois added. “Those activities can be the gatekeeper in preventing a breach of privacy in some instances.”
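DeBois's point about merging data can be made concrete: two datasets that are each harmless on their own can, once joined on shared attributes, single out an individual. The sketch below is purely illustrative; the dataset names, fields and values are hypothetical, not drawn from any real analytics product.

```python
# Illustrative sketch (hypothetical data): how merging two "anonymous"
# datasets on quasi-identifiers can re-identify a person.

# Web analytics export: no names, just coarse attributes per session
analytics = [
    {"session": "s1", "zip": "02139", "birth_year": 1984, "pages": 12},
    {"session": "s2", "zip": "02139", "birth_year": 1990, "pages": 3},
]

# A separate marketing list: names alongside the same coarse attributes
crm = [
    {"name": "A. Smith", "zip": "02139", "birth_year": 1984},
    {"name": "B. Jones", "zip": "94103", "birth_year": 1990},
]

def reidentify(sessions, people):
    """Join on quasi-identifiers; a unique match pins a session to a person."""
    hits = {}
    for s in sessions:
        matches = [p["name"] for p in people
                   if (p["zip"], p["birth_year"]) == (s["zip"], s["birth_year"])]
        if len(matches) == 1:  # a unique match is a re-identification
            hits[s["session"]] = matches[0]
    return hits

print(reidentify(analytics, crm))  # {'s1': 'A. Smith'}
```

Neither dataset names a visitor on its own; the join does. This is why, as DeBois notes, the assumptions made during exploratory data analysis and cleansing can act as a privacy gatekeeper.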

Bake Transparency Into AI Processes

A big theme around ethics in AI today for marketers is transparency. Are you upfront about deployments like chatbots? Priyanka Tiwari, senior product marketing manager for Interactions, which provides intelligent virtual assistants (IVAs), said the company recommends that users of its AI technology declare upfront to their customers that they are interacting with an automated system. “It's ethical to do it that way, and not faking it to be an actual human being,” she said. 

Chris Karnes, head of growth at Going Merry, said marketers should not try to trick prospects and customers, because doing so destroys trust, whereas being upfront buys goodwill. “Transparency around a technology will immediately create a bit more patience around the experience," Karnes said. "Users are used to dealing with technology and know it's not perfect. They expect more if they think it's a human.”
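In practice, the disclosure Tiwari and Karnes describe can be as simple as leading every session with an automated-system notice. A minimal sketch, with hypothetical message text and function names:

```python
# Hypothetical chatbot greeting that discloses automation up front,
# per the recommendation to never pass the bot off as a human.

BOT_DISCLOSURE = ("I'm a virtual assistant, not a human. "
                  "Type 'agent' anytime to reach a person.")

def greet(user_name=None):
    """First message of any session leads with the disclosure."""
    prefix = f"Hi {user_name}! " if user_name else "Hi! "
    return prefix + BOT_DISCLOSURE

print(greet("Dana"))
# Hi Dana! I'm a virtual assistant, not a human. Type 'agent' anytime to reach a person.
```

Placing the disclosure in the greeting, rather than burying it in a terms page, matches Karnes's point: users who know they are talking to software adjust their expectations immediately.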

Related Article: Nobody Cares About Your Cute Chatbot

Use Reinforcement Learning

To build the best AI technology possible, marketers and developers need to train AI to be more ethical, and can do so by being less subjective. “By utilizing reinforcement learning, developers can reward AI when it self-corrects mistakes and when its outcomes align with the desired, more ethical approach to data processing,” said Ron McMurtrie, chief marketing officer at Sage.
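McMurtrie's idea, rewarding the system when its outcome aligns with the desired behavior, can be sketched with a toy reinforcement-learning loop. The action names, reward values and epsilon-greedy setup below are illustrative assumptions, not a description of any vendor's system:

```python
import random

# Toy RL sketch: reward the agent when its action matches the desired,
# more ethical data-handling behavior. All names/values are hypothetical.

ACTIONS = ["share_raw_data", "share_anonymized_data"]
ALPHA = 0.1  # learning rate

def reward(action):
    # The "ethical" outcome earns positive reward; the other is penalized.
    return 1.0 if action == "share_anonymized_data" else -1.0

def train(steps=2000, seed=0):
    rng = random.Random(seed)
    q = {a: 0.0 for a in ACTIONS}  # estimated value per action
    for _ in range(steps):
        # epsilon-greedy: mostly exploit the current estimate, sometimes explore
        if rng.random() < 0.1:
            a = rng.choice(ACTIONS)
        else:
            a = max(q, key=q.get)
        q[a] += ALPHA * (reward(a) - q[a])  # running-average value update
    return q

q = train()
print(max(q, key=q.get))  # the learned policy prefers the rewarded action
```

After training, the action that consistently earned positive reward dominates the value table, which is the mechanism McMurtrie describes: the reward signal, not hand-written rules, steers the system toward the desired behavior.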

Focus on correcting AI bias, he added. Companies integrating and developing AI need to work proactively to ensure that their AI systems reflect the diversity of their users. “These companies,” McMurtrie said, "should seek to build AI using diverse teams, data sets and design to reduce bias and, in turn, social inequality.”

Remember the Needy and the Sick in Machine Learning Development

Mattison of Kaiser Permanente wrapped up his ethics in AI discussion by citing former U.S. Vice President Hubert Humphrey’s moral test of government in 1976: “The ultimate moral test of any government is the way it treats three groups of its citizens. First, those in the dawn of life — our children. Second, those in the shadows of life — our needy, our sick, our handicapped. Third, those in the twilight of life — our elderly.”

If we don’t do the same thing when thinking about machine learning and AI, Mattison said, “we will be doing a disservice on all that we do. So I call on all of us to really pay closer attention to these opportunities to create more social equity.”