The Gist

  • Use first-party data to ask permission. Giving customers autonomy over who (and what) can view their data helps avoid privacy lawsuits.

  • Don’t let bias ruin your brand. Inserting brand bias may deter customers from your services. Instead, use humans to ensure proper customer recommendations. 

  • Better CX comes from technological improvements. Flaws in AI are actively inhibiting positive customer experiences, so change must begin with the tech, not the customer.

AI is transforming our lives — sometimes completely under our noses. For marketing and CX professionals, it’s rapidly revolutionizing the way they suggest, recommend and provide the content and products consumers desire. But while it’s great that machine learning is permeating the customer experience, it comes with a myriad of ethical concerns.

Amongst other things, brands must eliminate potential bias when it comes to customer recommendations; product promotions should be tailored around a consumer’s interest, not a brand’s pre-set agenda. Furthermore, they must be transparent with customers about how their online decisions influence AI inputs.

It’s no secret these types of concerns must be addressed and ameliorated, but many organizations don’t know how to set proper boundaries between consumers and AI. We talked with two CX experts about how to navigate these problems and implement real change.

Ensure Privacy Through First-Party Data

Justin Racine, director and lead commerce strategist at Perficient, believes first-party data collection is essential to the success of future AI-driven marketing campaigns. This type of data collection requires brands to ask consumers’ permission to share their personal data in exchange for better content or product recommendations. This way, AI can’t access the information unknowingly — even if the intent is to bolster CX.
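A minimal sketch of what consent-gated personalization could look like in practice. The `CustomerProfile` structure, the `"personalization"` consent flag and the `recommend` fallback are illustrative assumptions, not a reference to any specific platform; the point is that the personalized path is unreachable unless the customer has explicitly opted in.

```python
# Hypothetical sketch: recommendations only use personal data after opt-in.
from dataclasses import dataclass, field

@dataclass
class CustomerProfile:
    customer_id: str
    # Explicit opt-ins the customer has granted, e.g. {"personalization"}.
    consents: set = field(default_factory=set)
    purchase_history: list = field(default_factory=list)

def recommend(profile: CustomerProfile, catalog: list) -> list:
    """Personalized picks only if the customer opted in; otherwise a
    generic, non-personalized list that touches no personal data."""
    if "personalization" not in profile.consents:
        return catalog[:3]  # generic best-sellers
    # Personalized path: skip items the customer already owns.
    owned = set(profile.purchase_history)
    return [item for item in catalog if item not in owned][:3]
```

Keeping the consent check inside the recommender (rather than at the data-collection edge alone) means a later feature can't accidentally personalize for a customer who never opted in.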

“Leaders need to be the source of backing for ethical AI, and legal and compliance departments should educate current employees and possibly bring in new roles to help create boundaries and ethical compliance,” Racine said. “With that [being said], brands must do their homework and build safeguards to protect consumer data at all costs. We have seen how drastically harmful data breaches have been over the last 20 years, and consumers can drop a brand as quickly as they started purchasing from them if this happens.”

Lawsuits are popping up everywhere as a result of AI privacy breaches. According to a report by JDSupra, a number of lawsuits were recently filed against ChatGPT for breaches of user privacy. They noted that “ChatGPT uses the information it collects about users in violation of the provisions of privacy protection laws. Such violations include in relation to the issue of transparency, the time frame for retention of the information, the purposes for information processing, and the identities of other parties to whom the information is forwarded.”

Related Article: Generative AI: Exploring Ethics, Copyright and Regulation

Use Humans to Eliminate Brand Bias

Implementing AI is ultimately about what's best for your brand, but sometimes that comes at the cost of the consumer. Algorithmic bias is an ongoing problem in digital marketing, partly because AI isn’t self-reliant. It needs human intervention to suggest the right products and content — otherwise it may surface products that aren’t relevant to the consumer, deterring them from your brand and resulting in a loss of profit.

“It’s not an exact science yet — but a good general rule of thumb to follow is looking to A/B test some scenarios within this bias realm,” Racine said. “In doing so, always remember to put the consumer and their needs first; brands need to strive for organic native conversions, not manipulation tactics.”
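Racine's A/B testing suggestion can be sketched with a standard two-proportion z-test comparing conversion rates between two recommendation variants. The variant labels and sample numbers below are illustrative assumptions; the statistics are stdlib-only.

```python
# Sketch: did the interest-based variant (B) convert better than the
# brand-weighted variant (A)? Counts here are made-up example data.
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-score for the difference in conversion rate, B minus A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Variant A: brand-weighted recommendations; variant B: interest-only.
z = two_proportion_z(conv_a=120, n_a=5000, conv_b=165, n_b=5000)
# |z| > 1.96 suggests a real difference at roughly 95% confidence.
print(f"z = {z:.2f}, interest-only wins: {z > 1.96}")
```

If the interest-based variant reliably wins, that is evidence the brand-weighted ranking was a manipulation tactic rather than an organic conversion driver, which is exactly the distinction Racine draws.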

Companies must also recognize that customers will have a certain affinity for a particular brand, so if AI presents them with an alternative option, they may be inclined to ignore it. Since AI has yet to master brand loyalty, human intervention is key to marketing products they’ll be likely to purchase.

Racine offers a scenario to demonstrate this.

“Take for example my love for Ralph Lauren T-Shirts. Let’s say I buy these shirts from Nordstrom — sure, I could go back to the site and AI could suggest similar brands for me to buy and might even say, 'The fit is the same or your money back.' All of that is well and good, but my connection with Ralph Lauren is deeper than just the shirt fit because I love what the brand stands for, their aesthetic and their social media content — all of that factors into my loyalty. However, could AI start to learn that I’m loyal and make suggestions for a different brand to take on the same characteristics that RL brings to mimic the experience? Sure. But will it work? Time will tell.”

Make Your Technology Inclusive

Shameem Smillie, CCaaS consultant and founding member of Women in CX, believes chatbots and other conversational AI have a long way to go before satisfying every customer. This is partly due to technological limitations that make chatbots inclined to favor Western men over any other demographic. In fact, PwC reported that natural language processing (NLP), the branch of AI that helps computers understand and interpret human language, demonstrates racial, gender and disability bias.

“From my own experiences, digital personal assistants and voicebots understand men much better than women," Smillie said. "Especially in the earlier days — which is an example of skewed data sets that train AI algorithms to favor Western accents and men. Scientifically, the tones women produce are at a higher frequency than men’s (F0 for men ~120Hz, F0 for women ~200Hz).”

Although conversational AI seems to have a mind of its own, real people are behind the process and production of customer algorithms, and allowing unconscious bias to slip through the cracks makes it easy for AI to absorb and perpetuate it. To make technology more ethical and inclusive, Smillie believes it’s critical for the data scientists and business leaders who develop and instruct AI to identify such potential biases by monitoring data and minimizing outliers.

“At a basic level, AI bias is reduced and prevented by comparing and validating different training data samples for representativeness. Ensure that you consider that what was acceptable yesterday may not be acceptable today. Without this bias management, any AI initiative will ultimately fall apart," Smillie said.
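Smillie's basic check — comparing training data samples for representativeness — can be sketched as a simple distribution comparison against a reference population. The group names, sample counts and the 20% tolerance below are illustrative assumptions, not a prescribed threshold.

```python
# Sketch: flag groups under-represented in a training sample relative
# to a reference population, beyond a relative tolerance.
def representativeness_gaps(sample_counts, population_share, tolerance=0.2):
    """Return groups whose share of the sample falls short of their
    population share by more than `tolerance` (a relative fraction)."""
    total = sum(sample_counts.values())
    flagged = {}
    for group, expected in population_share.items():
        actual = sample_counts.get(group, 0) / total
        if actual < expected * (1 - tolerance):
            flagged[group] = {"expected": expected, "actual": round(actual, 3)}
    return flagged

# A voice-training corpus skewed toward men, as in Smillie's example:
sample = {"male_speakers": 800, "female_speakers": 200}
population = {"male_speakers": 0.5, "female_speakers": 0.5}
print(representativeness_gaps(sample, population))
```

Re-running a check like this on every new training sample, and revisiting the reference shares themselves, reflects Smillie's point that what was acceptable yesterday may not be acceptable today.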

Related Article: Dealing With AI Biases, Part 4: Fixing the Root Cause of AI Biases

Be Proactive and Communicate About Ethical AI

Despite current limitations, businesses and AI can work in tandem to deliver sterling customer experiences and improve the ethos of a company’s brand. But consumers must stay front of mind. To protect them from privacy breaches, it’s imperative to ensure compliance with data regulation laws such as GDPR and CCPA and to prioritize data protection, intellectual property (IP) ownership and cybersecurity.

Smillie also notes the importance of communication; your company or organization should openly discuss its definition of “ethical AI” and have a failsafe set in place to protect customers from bias and privacy breaches.

Much like regulatory annual compliance training, ethical AI should be continuously enhanced (with updated employee training). And most importantly, make sure your company’s mission and purpose align with the type of AI you choose to implement.