Marco Iansiti and Karim Lakhani write in their book, Competing in the Age of AI, “the learning algorithms at the heart of new digital systems can be misused to tailor, optimize and amplify inaccurate and harmful information, from targeting and shaping misleading ads to creating highly realistic fake social personas that are used to extract personal information from users.”
The question is: What should CIOs and other data leaders do to protect enterprises and their key stakeholders?
Here is what several CIOs and data leaders had to say:
Potential for Ethical and Privacy Issues
CIO Anthony McMahon of Target State Consulting suggests in a recent #CIOChat on Twitter that ethical and privacy issues are not unique to artificial intelligence (AI). “Every decision on how to use data, either in platform or offline, has an ethical consideration,” he said.
Other data leaders claim AI raises ethical and privacy issues all its own. Privacy becomes particularly important when pieces of data are connected in novel ways.
For this reason, data modelers and algorithm designers need to consider privacy and provenance, as well as the impact of each potential use and design. CIO David Seidl of Miami University says, “There's a temptation to leverage the data you have, without thinking about the human cost, impact and outputs.” This is where diverse representation in implementation and oversight is incredibly important.
Data leaders need to carefully consider many questions including:
- What data can be included?
- Who can view the data?
- How can algorithms be designed so they don't make unethical or biased decisions?
Related Article: How Clean Data Supports Consumer Privacy Efforts
Implement a Code of AI Ethics, Governance
It should be clear that the more powerful the technology, the more likely it is to raise ethical and privacy issues. AI is fraught with them because the most interesting scenarios require that AI have deep access to business data, including customer information. The challenge is to understand what the ethical issues are and to have a code of ethics that responds to them as they arise. For example, where does facial recognition software cross the line in retail applications?
To respect data privacy and ensure ethical AI, businesses need to establish overarching AI governance policies. The goal should be to safeguard the firm and its customers.
According to analyst Dion Hinchcliffe of Constellation Research, “the best companies have ethics reviews of AI usage. Vendors need to make sure bad actors are not using the technology to exploit.”
Nevertheless, you can't predict every potential outcome; surprising results will occur. For this reason, it is important to conduct a failure mode analysis for data, machine learning (ML) and their applications.
Related Article: Ethical AI Is Our Responsibility
Get Ahead of Potential Bias in Your Machine Learning
It is difficult to find bias in ML algorithms. Biases creep in because humans create the algorithms in supervised learning and select and create the training datasets. This is what Microsoft learned with Tay, its chatbot that Twitter users taught to post offensive content within hours of its launch.
For this reason, CIOs and data leaders suggest that organizations need to be prepared to pull the plug quickly. Hinchcliffe claims everyone needs to realize that “we're in early days in dealing with AI bias.” We've learned how many there are and are learning how to respond, he added. “In the end,” Hinchcliffe says, “the solution is likely to have generative systems create outside of our biases.”
Alongside testing, it makes sense to run AI in a tightly constrained use case before releasing it into the wild. Meanwhile, controls should be in place to give clients a view into how AI algorithms assess and use data and to determine whether there is bias. It is important that data practitioners determine early:
- Is the dataset biased?
- Is the algorithm biased (on purpose or not)?
- Will a chatbot become toxic after it's released?
This is clearly an ongoing process. It should start with diverse representation on the teams that build the algorithms and conduct oversight. Diverse implementation teams are the best place to start removing bias, but it is important to look for bias at every stage of the development process. That means ensuring bias indicators, such as gender, aren't used in learning but are used when testing for bias.
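The practice described above, keeping an indicator such as gender out of the training features while retaining it to test the model's outputs, can be sketched in a few lines. This is an illustrative sketch, not a production fairness tool: the `disparate_impact` helper is hypothetical, and the 0.8 threshold mentioned in its docstring is the common "four-fifths" rule of thumb from fairness practice, not something the article prescribes.

```python
# Hypothetical sketch: a protected attribute never enters the model's
# features, but a parallel copy of it is kept for bias testing.

def disparate_impact(outcomes, groups):
    """Ratio of the lowest group's favorable-outcome rate to the highest's.

    outcomes: parallel list of 0/1 model decisions (1 = favorable).
    groups:   parallel list of protected-attribute values.
    A common rule of thumb flags ratios below 0.8 for human review.
    """
    rates = {}
    for g in set(groups):
        decisions = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(decisions) / len(decisions)
    return min(rates.values()) / max(rates.values())

# Toy audit: group "a" is approved 75% of the time, group "b" only 25%.
outcomes = [1, 0, 1, 1, 0, 0, 1, 0]
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(f"disparate impact ratio: {disparate_impact(outcomes, groups):.2f}")
# prints "disparate impact ratio: 0.33" -- well below 0.8, so flag for review
```

The point of the design is that the audit needs the attribute the model was forbidden to see; deleting it entirely would make this check impossible.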
Use Diverse Datasets, Test Transparency
Another action is to define upfront how you'll test for bias, meaning it is important to:
- Intentionally use diverse datasets.
- Create a strong ethics-based grounding for the organization.
- Test transparency wherever possible.
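A first, mechanical step toward the "intentionally use diverse datasets" item above is simply measuring how groups are represented in the data before training begins. The sketch below is an assumption-laden illustration: the `representation_report` helper is hypothetical, and the 15% threshold is an arbitrary example value, not a standard.

```python
from collections import Counter

def representation_report(records, attribute, threshold=0.15):
    """Report each group's share of the dataset and flag underrepresentation.

    records:   list of dicts describing data points.
    attribute: key holding the demographic field to check.
    threshold: illustrative cutoff below which a group is flagged.
    Returns {group: (share, flagged)}.
    """
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {g: (n / total, n / total < threshold) for g, n in counts.items()}

# Toy dataset heavily skewed toward one region.
data = [{"region": "na"}] * 8 + [{"region": "emea"}] + [{"region": "apac"}]
report = representation_report(data, "region")
# "emea" and "apac" each hold 10% of the records and get flagged for review
```

A report like this doesn't fix bias, but it makes the skew visible early, which is when the article argues these questions should be asked.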
You need to think about the types of data you are including in your decision-making process. Analyst Dan Kirsch of Techstrong Research claims it is important to “attach an executive to AI and ML projects and give them responsibility to ensure ethical use of data and the AI-derived results. You need to keep in mind human decision-making is highly flawed. Look at what's going on with the NFL coaching decision-making. Is there explicit bias? Probably mostly no, but there is bias. Hopefully ML and AI-based decisions will help to reduce biases. But they will also add new biases.”
Related Article: Marketers: Choose the Right Performance Metrics in Machine Learning Models
Where AI Algorithms Meet Compliance
Changing governmental regulation remains both a trend and a challenge. It seems clear more countries will tighten their privacy laws, which will likely force organizations to ask users whether it is OK to submit their personal data to algorithms and AI. The fact that some organizations don't ask now speaks to the industry's immaturity.
This is true even though GDPR has been around for several years. While it focuses on impacts to EU citizens, smart companies have worked to apply its edicts across the board. One CIO in New Zealand shared that an organization needs to be GDPR-compliant if it holds information on people who live in New Zealand but are EU citizens.
In the US, the lack of coherent data privacy legislation remains problematic, even for organizations operating only in the US. A key answer may be more organizations asking, “Should we?” instead of “Can we?” and, “Does this hurt people?”
Organizations should operate under a policy of delegated authority, where AI is directly accountable to a person and has only the same rights to the data that that person has.

Top compliance challenges with algorithms and private data include:
- Not gaining consent
- No monitoring or testing of compliance
- Leakage or theft of PII
- Violating relevant regulations
- Unintended consequences
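The delegated-authority policy mentioned above, where an AI agent has only the data rights of the person it acts for, can be sketched as a simple access check. This is a hypothetical illustration: the `USER_RIGHTS` table and `agent_can_read` function are invented for the example, not part of any product described in the article.

```python
# Hypothetical sketch of delegated authority: an AI agent holds no data
# rights of its own; it inherits, at most, the rights of its delegating user.

USER_RIGHTS = {  # illustrative access-control table
    "alice": {"orders", "tickets"},
    "bob": {"orders"},
}

def agent_can_read(delegating_user, dataset):
    """An agent acting for `delegating_user` may read `dataset` only if
    that user could read it directly. Unknown users grant nothing."""
    return dataset in USER_RIGHTS.get(delegating_user, set())

# An agent acting for bob cannot see tickets, because bob cannot.
print(agent_can_read("bob", "orders"))   # True
print(agent_can_read("bob", "tickets"))  # False
```

The design keeps accountability with a named person: any data the AI touches can be traced back to a human who was entitled to it.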
Why Consent and Right to Be Forgotten Matter
It is important that organizations consider permission to use the data and the right to be forgotten. They need to protect against data misuse and only use data where permission has been granted. Kirsch, for example, says, “The concept of consent and AI is challenging. If you walk into a store, you might be consenting to facial recognition technology and other ML-based tracking without even knowing it.”
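The two obligations just described, using data only where permission has been granted and honoring the right to be forgotten, can be sketched as a consent gate in front of the training pipeline. The `ConsentRegistry` class and `training_subset` helper below are assumptions made for illustration; real consent management involves far more (purpose granularity, audit trails, downstream model deletion).

```python
# Hypothetical sketch of consent-gated data selection: only records whose
# subjects granted permission for a given purpose reach the pipeline, and a
# "forget" request removes a subject's grants entirely.

class ConsentRegistry:
    def __init__(self):
        self._consent = {}  # subject_id -> set of permitted purposes

    def grant(self, subject_id, purpose):
        self._consent.setdefault(subject_id, set()).add(purpose)

    def forget(self, subject_id):
        # Right to be forgotten: drop every grant for this subject.
        self._consent.pop(subject_id, None)

    def permitted(self, subject_id, purpose):
        return purpose in self._consent.get(subject_id, set())

def training_subset(records, registry, purpose):
    """Keep only records whose subject consented to this purpose."""
    return [r for r in records if registry.permitted(r["subject_id"], purpose)]

registry = ConsentRegistry()
registry.grant("u1", "model_training")
registry.grant("u2", "model_training")
registry.forget("u2")  # u2 exercises the right to be forgotten

records = [{"subject_id": "u1"}, {"subject_id": "u2"}, {"subject_id": "u3"}]
usable = training_subset(records, registry, "model_training")  # only u1
```

Note the default: u3 never granted anything, so u3's record is excluded automatically, which matches the opt-in posture the article advocates.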
For this reason, organizations need to set up corporate-wide data and AI compliance and regulation policies. A good starting point is auditing. Look at the data design, model proposal and intent. Also, look for bias via an AI bias review. Compliance in AI shouldn’t happen after the fact; it should happen with organizational intent.
With this said, auditing is not one-and-done. You need to continually audit projects to make sure they remain in compliance. Compliance shouldn't be a checkbox for enterprises deploying AI/ML. Instead, there should be a process for identifying the data, its conditions of use and its origin, and leaders need to shepherd it effectively through models and their applications.
Related Article: How AI Is Being Used to Protect Customer Privacy
Acknowledge Ethics, Compliance Threats
CIOs and CDOs need to start the conversation about the ethical use of AI with more than the business, legal and technology teams. The first step is acknowledging the threats. It is important that CIOs and CDOs not bury their heads in the sand here. Instead, they should take concrete actions to minimize risk so organizations have an ethical and consistent approach.
Specifically, businesses should establish an AI framework that includes:
- Risk assessment
- Rules and policies
- Performance monitoring
- Data privacy controls
Hinchcliffe says, “Start an AI ethics program carefully, as it will widen more than you expect. Be ready to learn a lot and adapt. Responsible AI and ethical AI are two key practices to begin with.”
Capgemini’s Steve Jones says it is critical to:
- Have an ethical charter
- Have a full lifecycle approach to trusted AI
- Enforce business ownership of AI
Clearly, AI lags data in terms of maturation and compliance. But there are clear business risks, and data leaders say now is the time for AI governance. As with any technology, maturity is critical to its successful application.