PHOTO: Jon Tyson

Artificial intelligence (AI) and machine learning technologies are increasingly incorporated into consumer products and enterprise solutions alike. As AI applications advance into larger-scale and more diverse use cases, it is imperative that ethics guide their development, deployment and application. This is especially important as we apply AI to use cases that affect individual lives and livelihoods, including healthcare, criminal justice, public welfare and education.

To sustain widespread adoption of AI at both the consumer and enterprise level, and to spur continued innovation in the technology, AI systems and applications need to be trustworthy and transparent.

Combating Mistrust Around AI Technologies

Survey after survey has revealed substantial consumer mistrust of AI technologies. One found only 9% of consumers felt very comfortable with businesses utilizing AI to interact with them.

A similar concern is growing around enterprise AI. According to Gartner, within the next five years one-third of large enterprise and government contracts for digital products that incorporate AI will require the use of ethical AI technologies. As AI-enabled technologies are increasingly applied to mission-critical operations, business leaders should scrutinize the ethics and transparency of the AI systems they integrate into their business-critical systems.

Leaders need to ensure AI applications do no harm, support continued innovation, and provide the greatest benefit not only commercially but also for individuals, communities and society as a whole. But what exactly does ethical AI mean? We explore four areas where trustworthy, transparent and fair AI systems are most imperative today.

Related Article: AI and the Year Ahead: What Now?

1. Use Unbiased Data Sets to Train AI Models

Ethical AI applications consider not only the technology itself but, just as importantly, the data that shapes the technology. If the data used to train an AI system is biased or does not fully reflect the diversity of the constituents it will serve, then the resulting model will reflect that bias, which is particularly concerning as ML and AI technologies are applied to critical use cases.

Technology leaders, from product managers to software developers, must ensure that the data sets used to train AI and machine learning technologies properly and fully reflect the audience, constituency or users the technology will serve.
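One concrete way to start is a simple representativeness audit before training: compare the share of each subgroup in the training set against its share in the population the system will serve, and flag large gaps. The sketch below is a minimal illustration with hypothetical group labels ("A", "B", "C") and an assumed 5-point tolerance; real audits would use the organization's actual demographic attributes and fairness criteria.

```python
from collections import Counter

def representation_gap(samples, population_shares):
    """Compare subgroup shares in a training set against target
    population shares. Returns {group: observed_share - expected_share}."""
    counts = Counter(samples)
    total = len(samples)
    return {
        group: counts.get(group, 0) / total - expected
        for group, expected in population_shares.items()
    }

# Hypothetical demographic labels attached to training records.
labels = ["A"] * 70 + ["B"] * 20 + ["C"] * 10
target = {"A": 0.5, "B": 0.3, "C": 0.2}  # assumed population shares

gaps = representation_gap(labels, target)
# Flag any group over- or under-represented by more than 5 points.
flagged = {group: gap for group, gap in gaps.items() if abs(gap) > 0.05}
print(flagged)
```

A check like this catches only representation imbalance, not every form of bias, but it makes the "does the data reflect the constituency?" question measurable and repeatable across releases.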

Related Article: AI Bias: When Algorithms Go Bad

2. Ensure Data Privacy and Security Across all Applications

Organizations across all industries, not just technology leaders, need to keep data privacy and security concerns top of mind. Not only must they comply with industry-specific regulations on customer data use, but they must also tell customers how their data will be used and whether it will be shared, rented or sold to third parties. End users should have the opportunity to provide informed consent before their data is transmitted, shared or used in any manner.

Furthermore, organizations need to let customers know if they will use their data to train machine learning models.
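In practice, that means consent must be recorded per customer and enforced at the point where training data is assembled, not assumed downstream. The following sketch uses a hypothetical `CustomerRecord` type with an explicit opt-in flag; field names and structure are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class CustomerRecord:
    customer_id: str
    data: dict
    consent_to_train: bool  # explicit, informed opt-in recorded at collection

def training_pool(records):
    """Return only records whose owners consented to model training."""
    return [r for r in records if r.consent_to_train]

records = [
    CustomerRecord("c1", {"purchases": 12}, consent_to_train=True),
    CustomerRecord("c2", {"purchases": 3}, consent_to_train=False),
]
print([r.customer_id for r in training_pool(records)])  # ['c1']
```

Gating the pipeline on a stored consent flag also leaves an auditable answer to "whose data trained this model?" if regulators or customers ask.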

Related Article: What Data Will You Feed Your Artificial Intelligence?

3. Exercise Strategic and Limited Use of AI for Critical Decision-Making

Use of AI applications for decision-making purposes should be strategic, careful and limited, and always paired with human intelligence, especially for critical decisions in areas such as public welfare, benefits determination, education and healthcare. A report from New York University's AI Now Institute cited several cases that challenge government use of AI and algorithmic decision-making. The common theme across the cases is the lack of transparency in the use of algorithmic decision-making that directly impacts people's lives.

In one case, a decision-making formula the State of Idaho implemented to automate determination of eligibility for disability services mistakenly caused a sharp drop in the funds awarded to qualified candidates. The state faced a lawsuit that resulted in a court order requiring it to develop higher standards of accountability and fix the flaws in the algorithm to ensure it awarded funds equitably and fairly.

In Europe, the General Data Protection Regulation (GDPR) protects an individual's right not to be subjected to a decision based solely on automation. While US regulations on AI and automated decision-making are still developing, leaders should proactively institute policies that ensure their technology is trained on sufficiently diverse data and that specify how bias will be detected, how the system will be tested and who stands to gain commercially before releasing new AI-enabled applications.
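The "paired with human intelligence" principle can be sketched as a routing rule: an automated score is never the final word on a high-impact decision, and low-confidence cases go to a reviewer as well. The threshold and the `high_impact` flag below are illustrative assumptions; real systems would define both through policy, not code defaults.

```python
def route_decision(score, threshold=0.9, high_impact=True):
    """Route an automated eligibility score. Auto-approve only when the
    model is confident AND the decision is not high-impact; otherwise
    a human reviewer makes the final call."""
    if high_impact or score < threshold:
        return "human_review"
    return "auto_approve"

print(route_decision(0.95, high_impact=True))   # human_review
print(route_decision(0.95, high_impact=False))  # auto_approve
print(route_decision(0.60, high_impact=False))  # human_review
```

Structuring the gate this way also addresses the GDPR concern above: benefits or eligibility decisions are never "based solely on automation" because the high-impact branch always ends at a human.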

Related Article: What Is Explainable AI?

4. Let Customers Know if They're Interacting With AI 

Nowadays, communications from chatbots can be so granular, nuanced and personalized that it is at times challenging to tell whether we are communicating with AI or a human. Natural language processing (NLP) and other AI technologies have advanced to a degree that enables tailored communications that respond to an individual's sentiment, context, meaning and intent.

While this offers great benefits in terms of transforming the customer experience, reducing customer service wait times, streamlining operations and providing customers with more responsive and personalized service, there are also drawbacks. Consumers remain wary of chatbots: 70% of consumers prefer to speak to a human over a chatbot when dealing with customer service, and 69% admit they would be more inclined to be truthful with a human than with an AI-powered system. Organizations must be transparent with consumers when they are engaging with an AI system and should integrate AI technologies into customer service operations in a manner that still leaves opportunity for human interaction.
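Both recommendations, disclose the AI and preserve a path to a person, can live in the chatbot's response layer. This minimal sketch (the disclosure wording and function names are hypothetical) discloses on the first turn and honors an escalation request at any turn.

```python
DISCLOSURE = "You're chatting with an automated assistant."

def reply(bot_answer, first_turn, wants_human=False):
    """Prepend an AI disclosure on the first turn and honor requests
    to escalate to a human agent."""
    if wants_human:
        return "Connecting you with a human agent."
    return (DISCLOSURE + " " + bot_answer) if first_turn else bot_answer

print(reply("How can I help today?", first_turn=True))
```

Keeping the disclosure and the handoff in one place makes both behaviors easy to audit and hard for individual conversation flows to skip.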

Related Article: AI for Customer Experience: Late Adopters Are Reaping the Benefits

Technology for the Greater Good

AI has the power to deliver profound benefits to society and business. The ethical and fair application of AI-enabled systems should be a business and moral imperative for organizations, especially those whose technologies reach a wide market segment or serve critical use cases. Adhering to ethical principles will help AI continue to advance, achieve wide adoption and deliver significant impact in both consumer and enterprise domains, all while fostering brand trust.