A storefront with a neon sign above it that says code of ethical behavior - AI ethics concept
PHOTO: Nathan Dumlao/Unsplash

There are many reasons for companies to create ethics committees to oversee their use and development of artificial intelligence (AI), not least the fear of releasing a product that exhibits bias or could be used for discrimination.

Yet setting up an ethics committee is a complex process fraught with questions: who will sit on the committee, whether the members' backgrounds are acceptable, and what teeth the committee will have to enforce its decisions. Google, after all, tried and failed to set up its own AI ethics committee earlier this year. 

That doesn’t mean, though, that a company can’t have ethical oversight of its AI activities. Short of setting up a formal committee, there are several actions a company can take instead. 

Assign at Least One Person to the Task 

This individual would be responsible for oversight in a senior leadership role, said Amber Bouchard, Maven Wave's talent acquisition manager. “With data privacy and other privacy issues, this individual would be tasked with putting up virtual walls.”

She suggested that the position could be a partnered role between human resources and the C-level team to take on the moral implications of tech advances. “This position would be neutral, an AI specialist or senior-level tech position, with the individual detecting bias in algorithms delivered by AI and machine learning,” Bouchard said. One way to think of the role is as a type of quality assurance, she added.

Related Article: Is It Time for Your Organization to Form an AI Ethics Committee?

Put it on the Board’s Agenda 

Companies that have a formal board of directors should put AI ethics on the agenda of board meetings, said David Ciccarelli, co-founder and CEO of Voices.com. Start a discussion about how the company handles data, how it's processed and what the innovation roadmap looks like, he said. Voices.com does a variation of this, he said. “While we're not a leader in AI, we certainly employ best practices for handling big data sets and using the insights to improve our search algorithm and our matching algorithm on our freelancer marketplace and the experience for our customers.” 

Make it a Part of the IT Conversation 

It’s always advisable to have discussions around the intended and potential uses of larger AI projects, or generic ones that can be reused by others, said James Cotton, international director of the data management center of excellence at Information Builders. “Any company should pay close attention to the data sets used to help train these AIs. Using any form of personal information warrants additional care to be taken. Teams should consider where data came from, how it was altered and determine when it is fit for purpose.” These suggestions are important not only from an ethical standpoint but are also just good business practices, Cotton said. “As the adoption of AI and machine learning becomes more pervasive, these technologies can only benefit businesses if the algorithms are fed with and taught by accurate data. Misinformed machines simply make skewed decisions or erroneous forecasts faster and at scale.” 

Related Article: 7 Ways Artificial Intelligence is Reinventing Human Resources

It’s Not Just AI But Also Data That Needs Governance 

Dan Wu, privacy counsel and legal engineer at Immuta, doubles down on the idea that data needs to be included in the conversation about ethics. “Before we can have ethical AI and ethical committees, we need agile and ethical data governance,” he said. “In the conversations about ethical AI, few are talking about how to govern this data well. By focusing on the right strategies, organizational committees and technical stacks, data governance can accelerate — rather than hamper — an organization's ability to quickly develop safe, ethical AI.”