Artificial intelligence helps determine which products are advertised to which consumers, who receives a job interview, who qualifies for certain credit products and a host of other decisions, according to Mitigating Bias in Artificial Intelligence: An Equity Fluent Leadership Playbook.

However, the publication adds that "use of AI in predictions and decision-making can reduce human subjectivity, but it can also embed biases, resulting in inaccurate and/or discriminatory predictions and outputs for certain subsets of the population."

Marketers and others rely on AI to help target the best prospects for a company's products and services, and to identify potential employees. But they also need to take steps to eliminate unintentional bias from their AI algorithms, not just because it's the right thing to do, but also because underlying bias can keep marketing messages from reaching good potential customers.

Technology and marketing experts recommend the following four ways to eliminate or at least minimize bias in AI.

Review the AI Training Data

AI has made our business processes smarter and more efficient due to its data-driven results, said Dror Zaifman, director of digital marketing for iCASH. "We make sure that AI bias doesn't exist by understanding our training data. The academic and the commercial datasets are the major cause of bias in AI algorithms. We have a team of dedicated data scientists who cross-train employees in different departments to understand how AI bias works and the best way to combat the problem."

The data scientists ensure that the data gives a full picture of the diversity of the end users, Zaifman added. "The data team carefully designs all the cases and the course of action to avoid any discrepancies. To minimize bias, we need to carefully take into account the background and experience of different individuals. As clients use our model, they provide us with feedback on how the model fits into the real world."

"Ignoring the end-users would have drastic consequences for our organization, as we would be blind to the user experience and how we could optimize its performance," Zaifman said.

Check and Recheck AI's Decisioning

In the past, with manual lead scoring models, it was somewhat easy to inspect the models for scoring elements that could be considered discriminatory. Such elements can be harder to spot in AI models, which require more specialized skills to understand, said Christian Wettre, senior vice president and general manager, Sugar Platform, for SugarCRM.

"A best practice is to enable the AI to be prescriptive but always transparent, to enable business users to review the application of the AI, so that it can always be corroborated by the business," Wettre said."While there is a lot of attention given to the issue of potential bias in AI, and while AI might not be perfect, it actually can eliminate a lot of biases that are introduced by humans — a human built scoring model is subject to the biased beliefs of its developers. Those building the model select the attributes and engagement actions of a lead to score and assign the relative weight of these attributes and actions.

However, many intent indicators that look predictive of conversion on paper may have very little correlation with it in practice, Wettre added. AI decisioning should be open to human checks: when there is transparency in the use of AI, humans and technology work together and hold each other accountable to mitigate discrimination in modeling.
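One way to make scoring "prescriptive but always transparent," as Wettre puts it, is to use a model whose learned weights can be printed and reviewed by the business. The sketch below illustrates that idea; it is not SugarCRM's implementation, and the protected-attribute list and feature names are assumptions.

```python
# A minimal sketch of a human-reviewable lead scorer: the model refuses
# to train on protected attributes, and every weight is printed so
# business users can corroborate the scoring logic.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative list; the real set depends on jurisdiction and use case.
PROTECTED = {"gender", "age", "ethnicity", "zip_code"}

def train_transparent_scorer(X: np.ndarray, y: np.ndarray,
                             feature_names: list[str]) -> LogisticRegression:
    leaked = PROTECTED & set(feature_names)
    if leaked:
        raise ValueError(f"Protected attributes in features: {leaked}")
    model = LogisticRegression(max_iter=1000).fit(X, y)
    # Print weights, largest magnitude first, for business review.
    for name, weight in sorted(zip(feature_names, model.coef_[0]),
                               key=lambda p: -abs(p[1])):
        print(f"{name:>24}: {weight:+.3f}")
    return model
```

A linear model trades some accuracy for inspectability, which is the point: every scoring element stays visible, just as it was in the manual models Wettre describes.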

Get Direct Input From Your Customers

"We do a good bit to eliminate bias in our AI algorithms," said Baruch Labunski, founder of Rank Secure. "You have to look at the limitations of your data and then look at the customer's experiences. We do that by actually talking to customers from time to time to collect a sampling of their personal experiences with AI. That means we personally contact them by email or phone and ask about their experience. We go through the AI experience with our vendor to understand what the customer is experiencing. Once we experience it for ourselves, we can find issues that need correcting. That is how you find bias."

Labunski added: "AI can document answers but doesn't record the nuances. It doesn't understand sarcasm. Looking over communications with AI bots helps us understand what is missing and create a system to overcome that. We do that by having call center representatives document any complaints from customers using our AI system. Then, we have our vendor look at that particular algorithm to fix any issues."
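The complaint-review loop Labunski describes can be as simple as structured logging plus aggregation. The following is a hypothetical sketch; the record fields and the reporting threshold are illustrative, not Rank Secure's system.

```python
# A minimal sketch of logging customer complaints against an AI system
# and surfacing recurring themes for the vendor to investigate.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Complaint:
    customer_id: str
    bot_version: str
    issue: str  # e.g. "misread sarcasm", "wrong product offered"

def recurring_issues(complaints: list[Complaint],
                     min_count: int = 3) -> list[tuple[str, int]]:
    """Return issue themes reported at least `min_count` times,
    most frequent first, keyed by bot version."""
    counts = Counter((c.bot_version, c.issue) for c in complaints)
    return [(f"{ver} / {issue}", n)
            for (ver, issue), n in counts.most_common() if n >= min_count]
```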

Use Constant Monitoring to Prevent AI Bias

Prevision.io uses a five-part framework for ethical decision-making in data and machine learning projects, said Nicolas Gaude, co-founder and chief technology officer. "We organize it to align with the five distinct phases of a data project: initiation, planning, execution, monitoring, and closing. That way we are constantly monitoring that there are not biases present in our AI."

Even though there are precautions in every phase to help prevent bias, review and monitoring of results is critical to ensure that unintended bias doesn't creep in during earlier phases.

Before kicking off the initiation phase, the company considers the law, human rights, general data protection, IP and database rights, anti-discrimination laws, and data sharing policies, regulations and ethics codes/frameworks specific to sectors (e.g., health, banking, insurance, employment, taxation), Gaude said. Then the company considers the limitations of data sources, data manipulation awareness and consent, and the risks of data analysis and aggregation.

The monitoring phase involves data consumption awareness and consent, sharing data and outcomes with others, and openness and transparency with data disclosers, Gaude said. "Our closing phase involves documentation, ongoing implementation, reviews and iterations of ongoing data ethics issues, and considering how data is being disposed of, deleted and/or retained."
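As one concrete form that result monitoring can take (a generic illustration, not Prevision.io's framework), the sketch below compares positive-outcome rates across groups on each batch of model decisions and flags any group falling below 80% of the best-served group, the "four-fifths rule" used in US employment-discrimination analysis. The column names are assumptions.

```python
# A minimal sketch of an ongoing disparate-impact check on model outputs.
import pandas as pd

def disparate_impact_check(decisions: pd.DataFrame,
                           group_col: str = "group",
                           outcome_col: str = "approved") -> pd.Series:
    """Flag groups whose positive-outcome rate is under 80% of the
    best-served group's rate (the four-fifths rule)."""
    rates = decisions.groupby(group_col)[outcome_col].mean()
    ratios = rates / rates.max()
    flagged = ratios[ratios < 0.8]
    if not flagged.empty:
        print(f"[ALERT] Possible disparate impact: {flagged.to_dict()}")
    return ratios

# Run on each fresh batch of decisions, e.g. nightly:
# disparate_impact_check(todays_decisions, "age_band", "qualified")
```

Run routinely, a check like this catches bias that slips past the earlier phases, which is the role Gaude assigns to constant monitoring.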