For some, the mention of artificial intelligence and machine learning conjures images of Skynet deploying armies of self-aware humanoids to destroy society.
A more serious and reality-based concern for consumers is that AI may further compromise their data privacy online. The findings of a recent study by professional services firm Genpact reflect that concern: 71 percent of more than 5,000 consumers polled in the U.S., the U.K. and Australia said they didn't want companies to use artificial intelligence (AI) that threatens to infringe on their privacy, even if it improves the customer experience.
While the marketing technology industry may not be able to stop the Terminators, we should try to educate consumers about AI and allay their fears. The viability of AI-driven marketing technology depends on our ability to show we can use it responsibly and for their benefit. AI is making previously unmanageable masses of consumer data actionable for marketers, and it is therefore imperative that marketers prioritize a transparent and responsible data strategy.
Show Customers You Care About Their Data
In light of all the highly visible data breaches in the last few years, we must prove to customers we are as concerned about their data as they are. And this has to be more than just lip service. The data of your customers (and of your customers’ customers, if you’re a martech company) must be handled carefully and transparently. Trust has to be earned, and the proactive steps you take to protect data and communicate your policies will help your company build trust with consumers.
Here are a few basic best practices that marketers and marketing technology vendors can adopt to reassure consumers.
Related Article: Why the Benefits of Artificial Intelligence Outweigh the Risks
Make Your Privacy Policy Accessible
Let’s face it: Almost nobody reads privacy policies.
NPR reports that researchers from York University and the University of Connecticut conducted a study in which more than 500 volunteers were asked to sign up for a fictitious new social networking site. The privacy policy for the site stipulated that payment would be satisfied by surrendering their first-born child. Ninety-eight percent of the participants accepted the terms. We’ll put some faith in humanity and assume that means almost nobody read it, and for good reason.
First, it would take an inordinate amount of time to read every privacy policy. Second, they’re generally piles of incomprehensible legalese designed to provide legal cover for companies. During Facebook CEO Mark Zuckerberg’s congressional testimony, Sen. John Kennedy (R-La.) summed up Facebook’s user agreement issues succinctly: “Your user agreement sucks. The purpose of the user agreement is to cover Facebook’s rear end. It is not to inform your users about their rights .... The average American needs to be able to understand.”
Kennedy is correct — your user agreement should make sense to your users. While you’ll still have to work with your lawyers on this one, user agreements can and should be written using language that non-lawyer humans can understand.
When was the last time you, as a marketer, actually read your company’s privacy policy? As soon as you’re done reading this column, go read it. Make it clear what you share and what you don’t. Above all, don’t make it intentionally confusing, and follow your own policies as if they were the letter of the law, because legally, they are.
Related Article: Ready for Understandable Privacy Policies? A Look at GDPR's Impact
Practice Data Etiquette
Data breaches are more than a public relations nightmare. Any hack, data breach or improper use of customer data can cause permanent damage to your company’s reputation. Your chief marketing officer and chief technology officer should be working in concert to make sure that customer data is handled according to industry best practices and that measures to protect it exceed regulatory requirements.
In the case of securing voice data and call recordings, your call intelligence platform should be able to automatically redact sensitive information like Social Security numbers, credit card numbers and bank account numbers. Other information security compliance rules may apply to your business; it’s a complex landscape. A professional cybersecurity assessment will help you figure out what applies to your business.
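To make that concrete, here is a minimal sketch of what automated transcript redaction might look like, assuming plain-text transcripts and simple regular-expression patterns. The pattern list and the redact_transcript helper are hypothetical, illustrative names; production call intelligence platforms rely on far more robust detection, such as checksum validation and machine-learning-based entity recognition.

```python
import re

# Illustrative patterns only: real systems validate matches (e.g. Luhn checks
# for card numbers) and use ML-based entity recognition rather than bare regex.
SENSITIVE_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b\d(?:[ -]?\d){12,15}\b"),
    "BANK_ACCOUNT": re.compile(r"\b\d{8,17}\b"),
}

def redact_transcript(text: str) -> str:
    """Replace anything matching a sensitive pattern with a labeled placeholder."""
    # More specific patterns run first so a formatted card or Social Security
    # number isn't partially consumed by the generic bank account pattern.
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

print(redact_transcript("My card is 4111 1111 1111 1111 and my SSN is 123-45-6789."))
# -> My card is [REDACTED CREDIT_CARD] and my SSN is [REDACTED SSN].
```

The point of the sketch is simply that redaction should happen automatically, before transcripts or recordings ever reach analysts or marketing systems, not that a handful of regexes is sufficient on its own.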
Additionally, make your certifications, independent data security audits and SSL certificates apparent. Write blog posts about them, issue press releases, and display compliance seals on your site. A commitment to proactive security and privacy is a marketing opportunity to build trust, and it is a way to let customers know that their data is in good hands.
Related Article: 10 Tips for Protecting Data in the Workplace
Educate Consumers About AI
As marketers, we all can and should be doing a better job of making sure consumers understand artificial intelligence.
If you’re a marketer in a business that uses AI, dig in and study the basics until you can explain it to your nontech friends without their eyes glazing over. One of the most effective ways of calming people’s fear of AI is explaining what it is, what it isn’t and how your company is using it.
It may be hard to believe, but AI isn’t new, and it’s not a product of Silicon Valley engineers and entrepreneurs.
The field of AI research was born at a workshop at Dartmouth College in 1956. Within a few years, computers were playing checkers well enough to challenge skilled amateurs, proving logical theorems and carrying on rudimentary conversations in English. AI’s founders were optimistic about the future: In 1965, Herbert Simon predicted that “machines will be capable, within 20 years, of doing any work a man can do.” Of course, this was overly optimistic; as the technology failed to live up to those expectations, government funding dried up and an “AI winter” ensued. After a brief revival in the early 1980s, AI fell into disrepute again, and commercial research didn’t resume in earnest until the ’90s.
Will the current of fear that surrounds the technology wash it into another period of dormancy? That’s not impossible. While consumers are heavily dependent on AI for everything from data analysis to voice recognition, they’re not always aware that AI is making certain interactions possible. As technology advances, people tend to dismiss new AI by seeing it as a mere computation, not intelligence. Describing this as the “AI Effect,” AI researcher Rodney Brooks said, “Every time we figure out a piece of it, it stops being magical.”
If more people realized that things like personalized online experiences, voice assistants and autonomous cars depend on AI, and if they understood how that type of AI works, they might appreciate AI’s value a bit more.
Related Article: 5 Drivers of Personalized Experiences: A Walk Through the AI Food Chain
Prove the Benefit of Your AI
Many people worry about the privacy issues stemming from AI — until they learn that they’re getting something out of it. Younger consumers in particular are quite open to sharing data as long as they get something in return.
In a recent study conducted by YouGov on behalf of customer experience company [24]7.ai, 43 percent of the more than 1,000 consumers surveyed said they would exchange personal data with companies to save money through personalized promotions, discounts or deals, and 39 percent said they would do so in return for speedier resolution of problems. Likewise, a study by my company, Invoca, found 64 percent of consumers even expect companies to use their data to direct them to the right person on the phone.
As long as they feel their data is safe and they know how it is being used, more and more consumers will be willing to share it. AI is just the big scary gorilla in the room right now, but once you can show a clear benefit of your AI, the fear will likely subside.
If you tell your customers you’re using AI to eliminate hold times, they will probably decide they love artificial intelligence. If you explain your AI can make their shopping experiences faster and easier, they’ll demand it. Once it’s clear the technology can help provide smarter service both online and off, the fear of AI can quickly turn into demand for more.
If a Breach Happens, Own It — and Fix It Yesterday
When a data loss incident is inevitably tied to an AI-powered technology, consumer perceptions will hinge on how the affected company handles it. Retaining consumer trust requires proactive preparation, immediate transparency and swift action when a breach occurs.
The best of all best practices is not to wait around for a breach to happen: Update security measures regularly and submit to third-party audits to prevent internal bias from blinding you to problems. If a breach does occur, make it public before the media does. Security reporting laws in the U.S. only require companies to inform consumers when identifiable personal information has been disclosed, but showing that you are paying attention by quickly publicizing thwarted hacks and addressing even minor issues will also help instill confidence in consumers.
Large or small, vulnerabilities that cause any data breach need to be fixed quickly and should always lead to increased security awareness in your organization.
Related Article: Don't Be the Next Equifax: Tips to Avoid a Security Breach
The Future of AI Is in Our Hands
The seemingly endless list of ways in which AI can benefit our bottom lines and enhance our lives understandably inspires intense optimism in everyone involved. However, we should not let the drive for fast innovation blind us to our responsibility to put consumers first.
We can learn from Zuckerberg’s regret over moving too quickly and breaking too many things. “For the first decade, we really focused on all the good that connecting people brings. But it’s clear now that we didn’t do enough. We didn’t focus enough on preventing abuse,” he said in a press conference following the Cambridge Analytica debacle. “We didn’t take a broad enough view of what our responsibility is, and that was a huge mistake.”
Using AI for nefarious purposes or taking a flippant approach to data privacy concerns will only undermine the industry. Consumers have a right to know what is being done with their personal data, and if you let them know how you want to use it and provide a valuable experience in return, most people will be happy to share it with you and will welcome an increasingly AI-powered world.