HAL 9000. PHOTO: Cryteria

In popular fiction, artificial intelligence (AI) often goes rogue. Arthur C. Clarke’s soft-spoken HAL 9000 in the movie "2001: A Space Odyssey" turns sinister, killing astronauts on a mission to Jupiter. Remember Skynet, the AI with genocidal goals in James Cameron’s movies "The Terminator" and "Terminator 2: Judgment Day"?

Fiction now seems to be spilling over into real life.

AI has run amok and been benched in multiple business deployments. In one instance, a chatbot had to be shut down after data poisoning led it to spew outrageously lewd and racist comments within 16 hours of its launch. In another, two Wikipedia bots spent years continually overriding each other's corrections before the battle was finally discovered.

Why Ethical AI Is Easier Said Than Done

These are not isolated examples. Thousands of business users of AI are wary of the technology going awry. As automated, intelligent and autonomous systems proliferate, they want to know, "What should we do to make AI usable?" The most common answer to that question is, "Use ethical guidelines and explainability frameworks designed for AI." However, that is more easily said than done.

Here's why: The not-for-profit group AlgorithmWatch created a global inventory of guidelines and frameworks that governments, businesses and societies could follow to create ethical AI. The list runs to over 160 items, but it is difficult to find enforcement mechanisms within any of them. There is more bad news. Last year, one set of researchers examined 22 ethical guidelines, frameworks and principles to see if they affected human decision-making. The disheartening answer they reached? "No, most often not."

At a meta level, the problem is bigger than finding a framework that is appropriate to a business or that has an enforcement mechanism with teeth. Numerous as these frameworks are, they share no common, globally agreed-upon standard for benchmarking and implementing ethical and explainable AI.

Whichever guidelines and frameworks for ethical AI an organization chooses, they all have one element in common: They emphasize using top-quality data for training the AI. Poor, incomplete, skewed and biased data is the root cause of AI getting a bad name. If we reduce data bias and launch the technology on an ethical backbone, we can create a new, interesting and trustworthy future for AI, one where AI does not confuse us, damage businesses or cause social distress.
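What does reducing data bias look like in practice? The sketch below shows one minimal pre-training data audit; the column names, groups and thresholds are hypothetical illustrations, not prescribed standards.

```python
import pandas as pd

# A minimal sketch of a pre-training data audit. The column names,
# groups and thresholds below are hypothetical illustrations.
df = pd.DataFrame({
    "gender":   ["F", "M", "M", "F", "M", "M", "M", "F"],
    "approved": [0, 1, 1, 0, 1, 0, 1, 0],
})

# 1. Completeness: flag columns with missing values.
missing = df.isna().mean()
print("columns with gaps:", list(missing[missing > 0].index))

# 2. Representation: flag any group that is badly under-represented.
share = df["gender"].value_counts(normalize=True)
print("under-represented groups:", list(share[share < 0.3].index))

# 3. Label skew: gap in positive-label rate across groups.
rate = df.groupby("gender")["approved"].mean()
print("positive-rate gap across groups:", round(rate.max() - rate.min(), 2))
```

Even this toy audit surfaces a problem: the positive-label rate differs sharply between groups, which a model trained on this data would likely reproduce.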

The need for a common global language and unified standards is urgent. Without them, no number of guidelines will be a match for the problems AI presents. But smart businesses know that they must forge ahead anyway, making the best use of what is available; fortune favors the brave.

Related Article: We Need Ethical Artificial Intelligence

As AI Investments Grow, So Does the Need for Ethical AI

Businesses are betting big on AI. AI investments are forecast to grow from $27.23 billion in 2019 to $266.92 billion by 2027. As investments increase, the need to give the technology a “moral compass” will grow more urgent. The National Institute of Standards and Technology (NIST) is trying to do this by establishing U.S. federal AI guidelines that improve trust in AI technologies. NIST could also identify metrics to measure and monitor ethical AI, setting the pace for common standards.

The question every business harnessing AI asks, before it gets around to adopting a framework, is this: "I understand the need for unbiased data. I know the perils of algorithmic discrimination. I realize the urgency of replacing the AI black box with a transparent glass box, and of making every decision taken by an AI explainable. How do I get there?"

Every business must begin by investing in legal resources, data sources and data science; analyzing industry- and geo-specific regulatory requirements; conducting risk assessments; understanding social concerns; and tapping into external technological expertise. These investments will result in auditable processes for accountability, documentation of why and how each data set is used, and an inventory of decision-making models. These mechanisms may not always prevent ethical failure, but they will let businesses quickly identify lapses and take corrective action.
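As a rough sketch of what a single entry in such a model inventory might record, consider the following; every field name and value here is a hypothetical illustration, not a reference to any published standard.

```python
# A minimal sketch of a model-inventory record that supports
# auditability. All field names and values are hypothetical.
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class ModelRecord:
    name: str
    owner: str                        # team accountable for the model
    purpose: str                      # why the model exists
    training_data: str                # which data set was used, and why
    known_limitations: List[str] = field(default_factory=list)
    last_bias_review: Optional[date] = None  # None = never reviewed

inventory = [
    ModelRecord(
        name="loan-scoring-v2",
        owner="credit-risk team",
        purpose="rank loan applications for manual review",
        training_data="2019-2021 applications, chosen for label quality",
        known_limitations=["sparse data for applicants under 21"],
        last_bias_review=date(2021, 6, 1),
    ),
]

# Audit hook: flag any model with no bias review on record.
for record in inventory:
    if record.last_bias_review is None:
        print(f"{record.name}: no bias review on record")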

Related Article: AI and Enterprise Search: Who's In Control?

4 Considerations for Building More Responsible AI

Technology has a long way to go before AI can become fully autonomous. In the meantime, people should build AI systems with the following in mind:

  • Create the AI so the level of transparency can be adjusted for differing business areas and circumstances. In practice, the transparency setting determines which algorithms are blocked or unblocked.
  • Use models whose logic is fully understood; complete explainability is required. Before deployment, test the AI for bias or ethical violations in its predictions, classifications and decisions (a minimal sketch of such a test follows this list).
  • There should be a trustworthy mechanism that provides its stamp of approval for the AI, certifying that it meets internal business policies, industry and governmental regulatory requirements, and widely held social norms of the region in which the AI operates.
  • Above all, the single most critical factor in designing an AI is obvious: It should always remain under human supervision.
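
Here is a minimal sketch of the pre-deployment bias test mentioned above, using one common fairness measure, the demographic parity difference. The model outputs, group labels and tolerance are all illustrative assumptions, not a regulatory standard.

```python
# A minimal sketch of a pre-deployment fairness check using the
# demographic parity difference. All data and the tolerance value
# are illustrative assumptions, not a regulatory standard.
def demographic_parity_difference(predictions, groups):
    """Largest gap in positive-prediction rate between any two groups."""
    totals = {}
    for pred, group in zip(predictions, groups):
        positives, count = totals.get(group, (0, 0))
        totals[group] = (positives + pred, count + 1)
    rates = [positives / count for positives, count in totals.values()]
    return max(rates) - min(rates)

# Hypothetical model outputs (1 = favorable) and each case's group.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, groups)
if gap > 0.1:  # illustrative tolerance
    print(f"FAIL: positive-rate gap {gap:.2f} exceeds tolerance")
else:
    print(f"PASS: positive-rate gap {gap:.2f} is within tolerance")
```

A check like this would run as a gate in the deployment pipeline: if the gap exceeds the agreed tolerance, the model goes back for review instead of into production.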

I cannot emphasize the need for human supervision enough. In April 2021, the European Commission made public the actions it was taking to "turn Europe into the global hub for trustworthy AI." The proposal presents the first-ever legal framework on AI and a plan for member states to bring about harmonized rules. Among its most important points is that AI systems should be designed and developed in a manner that guarantees human oversight while they are in use. The regulations will apply to all providers and users of AI in the EU, regardless of where the provider is located.

The European Commission’s proposal will make high-risk AI systems subject to scrutiny before they are placed on the market. Its provisions allow people to know when they are dealing with an AI system and help them interpret its output. It also proposes large monetary fines for violations of its norms.

Doubtless, the proposal will go through substantial revisions before it becomes law. But every business looking to harness the power of AI must keep a close eye on it, because it provides vital insights into addressing the challenges of ethics and transparency posed by AI.

Related Article: Make Responsible AI Part of Your Company's DNA