Five years ago, Microsoft released its infamous chatbot, Tay, into the world of Twitter. Tay used a machine learning algorithm to learn from interactions on the platform and then echo novel responses back based on that learning. Within a short time, it became obvious that Twitter is not an ideal training ground for unsupervised learning. It turns out that the people who scream the loudest aren't always the best teachers.

Without a filtering mechanism, Tay started parroting back all kinds of racist, bigoted and misogynistic tweets. A question naturally arose: how can we expect AI to learn on its own by engaging with the world, without letting the world corrupt it with its worst demons?
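
To make the missing safeguard concrete, here is a minimal, hypothetical sketch of the kind of filter Tay lacked: a check that screens learned candidate responses before the bot echoes them. The blocklist tokens and function names are illustrative assumptions, not Microsoft's actual implementation; a production filter would use a trained toxicity classifier rather than keyword matching.

```python
# Hypothetical sketch of a response filter; not Tay's real pipeline.
# A production system would use a trained toxicity classifier rather
# than the toy blocklist shown here.

BLOCKLIST = {"slur1", "slur2"}  # placeholder tokens for disallowed language

def is_safe(response: str) -> bool:
    """Reject any candidate response containing a blocklisted term."""
    words = set(response.lower().split())
    return not (words & BLOCKLIST)

def echo_learned_response(candidates: list[str]) -> str:
    """Return the first learned candidate that passes the safety filter."""
    for candidate in candidates:
        if is_safe(candidate):
            return candidate
    return "I'd rather not repeat that."  # safe fallback when every candidate fails

print(echo_learned_response(["slur1 everywhere", "hello friend"]))  # -> "hello friend"
```

Even a crude gate like this changes the failure mode: instead of amplifying the worst of its inputs, the system falls back to a harmless default until a human can review what it has learned.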

AI has helped us accomplish a lot, especially over the last decade. What's more amazing is that everything we've accomplished with AI so far is but a small fraction of its possibilities. All of that unchecked potential, however, calls for a little bit of Uncle Ben's "with great power comes great responsibility." Making sure AI's potential is used for the greater good of everyone is our responsibility. Responsible AI, or Ethical AI, focuses on ensuring current and future AI does exactly that. Members of academia (like MIT and NYU), industry (like Microsoft and Google) and independent organizations (like the Ethical Institute, Algorithmic Justice League and Data Council) are leading this effort.

Ethical Issues in AI

In a 2019 survey on ethics in AI, Capgemini found that 62% of consumers would place more trust in brands whose AI interactions they perceived as ethical. However, 47% of respondents reported encountering at least two instances of AI producing ethically questionable outcomes in the previous two to three years. Executives shared this sentiment, with 90% believing ethical issues had arisen from the use of AI systems. Organizations are not turning a blind eye to the problem: 41% of executives reported discontinuing AI initiatives when ethics were compromised.

Related Article: Let Ethics Guide Your Use of AI

The Major Causes of Ethical AI Breakdowns 

In project planning and execution, we tend to cater to the requirements that drive immediate business outcomes, like ROI, user features and performance. Requirements with hard-to-quantify immediate value, such as testing, security, compliance and privacy, tend to take a backseat in most prioritization exercises, and Responsible AI often joins them there. Many things can lead to ethical incidents with AI in digital work, but we can distill them down to four major causes.

  1. Awareness: Before anyone can account for Responsible AI in their digital work, they have to be aware of it and its impact on society. Considering how new the field is, few decision-makers are.
  2. Budget: Doing things right often means spending more money. Ensuring your AI is responsible means funding the additional effort of creating safeguards. While recent surveys indicate organizations are prioritizing Responsible AI, it is often still not a high enough priority to warrant additional budget.
  3. Time: Even when the budget is there, time might not be. Anyone in the digital world knows that go-to-market deadlines are routinely missed because of unforeseen factors. That means anything that takes more time than absolutely required, like enacting the principles of Responsible AI, gets deferred to future phases.
  4. Governance: Even past the budget and time hurdles, a lack of strong governance can cause Responsible AI principles to go unmet. Responsible AI isn't a one-time exercise; rather, it's an ongoing discipline requiring continuous effort and dedication.

Related Article: Make Responsible AI Part of Your Company's DNA

Principles of Responsible AI

So how do we adhere to Responsible AI? A ton of resources are available on this topic, so I'm not going to reinvent the wheel here. I will, however, summarize and paraphrase the principles everyone should be aware of and consider. This list is by no means exhaustive; there are certainly additional areas of focus worthy of our attention.


  1. Fair and Unbiased: We teach our kids every day to be fair and unbiased toward the people in our society. How do we make sure AI adopts similar guidance? Think of a recruiting app using AI to parse and sort through candidates. Any data set might contain inherent biases that an AI model could reinforce, discriminating against some candidates. It is our job to ensure this type of AI model is designed to recognize and correct for those biases. Beyond that, AI models should be designed to detect bias and errors in their blind spots to make outcomes fairer (a simple check of this kind is sketched after this list). Another example: facial recognition software often mischaracterizes Black women as male, even famous figures like Michelle Obama and Serena Williams. How can we build a future of AI triumph if it is rooted in these kinds of unfair and biased decisions?
  2. Reliable and Safe: AI is becoming more and more a part of our daily lives. One area where it is growing exponentially is self-driving cars, where AI allows the car's computers to evaluate risks and inputs and recommend the next move. That said, the AI model needs built-in safeguards in case the recommended action is not the safest one. Guarantees need to be built into self-driving cars so the AI cannot make safety-critical mistakes. It's one thing when an AI slips up in a chatbot or conversational assistant, but a mistake in a self-driving car could be fatal. Other fields where AI's reliability and safety are key include security, defense weaponry software, airplane navigational equipment and medical devices.
  3. Inclusive: Being fair is important, but society also puts measures in place to be more inclusive. Fairness alone does not guarantee inclusivity, because some people might still be at an unfair disadvantage due to disability, racial injustice, systemic racism, social circumstance and so on. Ensuring that AI models are inclusive is critical to Responsible AI.
  4. Private and Secure: AI used in surveillance is evolving and advancing every day. It is our responsibility to guard these AI models with governance that ensures privacy. While there are myriad use cases where surveillance is lawful and moral, one doesn't have to look far to find instances of abuse that violate basic human privacy.
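
As promised above, here is a minimal sketch of one common fairness check, demographic parity, applied to hypothetical screening decisions from a resume-sorting model like the one described in the first principle. The group labels and outcomes are invented for the example; real audits compare multiple fairness metrics against real data.

```python
# Minimal sketch of a demographic parity check on hypothetical model decisions.
# Group labels and outcomes below are invented; real audits use real data and
# several fairness metrics, not this single number.

from collections import defaultdict

def selection_rates(decisions):
    """Return the fraction of positive outcomes per group.

    decisions: iterable of (group_label, was_selected) pairs.
    """
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, picked in decisions:
        totals[group] += 1
        if picked:
            selected[group] += 1
    return {group: selected[group] / totals[group] for group in totals}

# Hypothetical screening outcomes from a resume-ranking model.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

rates = selection_rates(outcomes)
disparity = max(rates.values()) - min(rates.values())
print(rates)      # {'group_a': 0.75, 'group_b': 0.25}
print(disparity)  # 0.5 -- a gap this large warrants human review
```

A disparity near zero does not prove a model is fair, but a large gap like this one is a cheap, automatable signal that a model's outcomes deserve human scrutiny.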

Related Article: A Manifesto for the Pioneers of the Digital Revolution

Will AI Adhere to Social Contracts?

Human beings are usually bound by what is known as a social contract: an understanding, both implicit and explicit, of how to behave with other people. Implicit in the sense that it is understood without being written down anywhere, such as shaking someone's hand when you greet them; explicit in the sense of written rules and laws. Humans who break those contracts face social or legal ramifications.

These contracts vary from country to country and culture to culture, but remain similar in most ways. In the end, we are all guided by these rules, written or unwritten, whether we follow them or not. How can we expect an AI that is still learning about the world to comprehend these nuances, especially the unwritten rules? Just as parents are responsible for their children's actions while they are minors, we are responsible for any unethical or irresponsible results from our AI models.

Related Article: IBM and Microsoft Sign 'Rome Call for AI Ethics': What Happens Next?

What’s Next?

It is reasonable to assume that most people find Responsible AI important, so what can be done at the individual and societal levels to support it? For one, we need further technological advancement to make Responsible AI easier to implement. For Responsible AI to succeed in a capitalist economy, we also need to find ways for it to show financial returns, attracting investment in research and adoption. In parallel, we need our governments to pass laws that guide us and regulate our AI tools, ensuring ethical and responsible use. We need mainstream awareness of how critical Responsible AI is to our civilization. Finally, we need trailblazers who advance Responsible AI through innovation rather than following the slow march of progress.

At the end of the day, systemic regulations and government mandates around Responsible AI are hard to achieve and unlikely to arrive soon. It is up to each organization and person working with AI to follow ethical guidelines and build guardrails around it, ensuring it behaves in accordance with our society's values. It is our responsibility, no one else's.
