
Do you need to set up an artificial intelligence ethics committee if you are using this technology? Google certainly thought it did — until it changed its mind. Of course, Google is one of the leaders in this space, while most other companies are merely experimenting with AI or using a variation of it in a vendor product. Still, artificial intelligence is quite different from other technologies and software applications given its ability to think and reason like a human. It is no understatement to say there are ethical considerations with its use — even with seemingly benign business operations. Indeed, Deloitte's second annual State of AI in the Enterprise survey found that 32% of executives ranked ethical issues as a top-three risk of AI, but most don't yet have specific approaches in place to address this risk.

Google’s Brief Flirtation With an Ethics Committee

Yet it appears that nothing with AI is easy, including establishing an ethics committee, as Google found out recently. At the end of March, Google announced it had established an AI ethics panel to guide the "responsible development of AI" at the company. The panel was to have eight members and would meet four times over the course of 2019 to consider various concerns about Google's AI endeavors. The panel lasted just a little over a week.

From the beginning it was controversial, with thousands of Google employees calling for the removal of Kay Coles James, head of the Heritage Foundation, because the institution has voiced skepticism about climate change and because of her comments about trans people. Other members' credentials or beliefs were similarly challenged, with one member resigning and another tweeting about James that "Believe it or not, I know worse about one of the other people." Soon enough, Google pulled the plug, declaring it was going back to the drawing board.

With this fiasco as background, it is fair for companies to wonder whether an AI ethics committee or panel is for them after all. There is no resounding consensus on the matter, though, and not surprisingly, opinions vary from "yes you do" to "no you don't" and all points in between.


Yes You Need One

Manoj Saxena, advisor to the London Stock Exchange, first GM of IBM Watson and currently executive chairman of CognitiveScale, is resolute that companies need such a body, especially those that will be adopting AI to build solutions. "Unlike traditional rules-based systems, AI systems are self-learning systems that need to be designed carefully so they reflect the company's core values, comply with industry regulations, provide audit trails on how the AI is learned and finally, act as a means of remediation for AI damages or harm," he said.

Even companies that are just beginning their AI journey should be thinking about this, according to B12 co-founder and CEO Nitesh Banta. "With technology as powerful as AI, this is particularly true. There's so much unknown about the future of AI and it has the potential to both positively and negatively impact all aspects of society." Companies should not only talk about the implications internally but should look for opportunities to learn from others, he added.

Perhaps what confuses companies is the fact that these discussions usually start at the societal level — such as debates over whether the technology should be sold to authoritarian regimes or whether robots will replace human jobs. Simply put, these are not issues at the adopter level, said Doug Barbin, principal and Cybersecurity and Emerging Technologies Practice leader of Schellman & Company.

"Adopters of AI need to consider the sources and uses for AI technologies as they should with any other," Barbin said. "For example, users of ML technology need to understand the quality, quantity, and especially the limitations of source data. As such, the old saying of garbage in, garbage out applies especially when business decisions are made based on the outputs of the ML technology." Some questions to consider, he said, include:

  • Does the ML technology take data from one source like a sales system or does it take from multiple sources?
  • Are there any glaring omissions like customer satisfaction or retention?
  • Are geographic, demographic, market, or other factors accounted for, or do the results tilt positively or negatively toward a specific segment?

“And when dealing in personal data, a whole additional host of issues come into play. In some cases, systematic actions are applied based on the analysis that occurs,” Barbin said.
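Barbin's checklist can be turned into a simple pre-training audit. The sketch below is purely illustrative — `audit_training_records`, the field names and the 70% skew threshold are all hypothetical choices, not anything prescribed in the article — but it shows how an adopter might mechanically flag missing fields (like customer satisfaction or retention) and a single segment dominating the data before an ML model ever sees it.

```python
from collections import Counter

def audit_training_records(records, segment_key, required_fields,
                           skew_threshold=0.7):
    """Flag basic data-quality issues before records reach an ML model.

    Checks for (a) required fields that are missing or empty, and
    (b) one segment (e.g. a region) dominating the data set, which
    could tilt model outputs toward that segment.
    """
    issues = []
    total = len(records)
    # (a) glaring omissions, e.g. satisfaction or retention data
    for field in required_fields:
        missing = sum(1 for r in records if r.get(field) is None)
        if missing:
            issues.append(f"{missing}/{total} records missing '{field}'")
    # (b) does one segment dwarf the others?
    counts = Counter(r.get(segment_key, "unknown") for r in records)
    dominant, n = counts.most_common(1)[0]
    if n / total > skew_threshold:
        issues.append(f"segment '{dominant}' covers {n}/{total} records")
    return issues

# Hypothetical sales records drawn from a single source system
records = [
    {"region": "NA", "satisfaction": 4},
    {"region": "NA", "satisfaction": None},
    {"region": "NA", "satisfaction": 5},
    {"region": "EU", "satisfaction": 3},
    {"region": "NA", "satisfaction": 2},
]
for issue in audit_training_records(records, "region",
                                    ["satisfaction", "retention"]):
    print(issue)
```

Running this on the toy data flags one record missing satisfaction, all five missing retention, and an over-representation of the "NA" region — exactly the kind of review an ethics committee (or any responsible adopter) might require before trusting the model's outputs.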

There is also, increasingly, a liability factor to consider. Neil Lustig, CEO of GAN Integrity, recommends not only that an AI ethics committee be established, but that it include at least one independent non-executive director to ensure shareholders are represented when vetting systems. "In the case where there has been an ethics breach due to decisions made by an AI system, companies need to be able to prove that the board had gone through an established procedure and done its due diligence to vet the AI," he said.

"In the case where companies have a procedure in place for vetting the AI system, even if there has been a breach, they will still be known as an ethical company and the blame will be put on what we like to call the bad actor: the creators of the AI or the employee who circumvented the vetting process for using a new AI system."

No, You Don’t Need One, at Least Not Yet

As Google's employees showed, there is a healthy amount of skepticism about AI ethics panels even among otherwise sympathetic audiences. Common questions include who will sit on the committee and what their backgrounds are. Another question Google never fully answered is how much power such committees will have to veto projects. "Ethics committees are only as good as the bite they have in them," said Brad Westveld, co-founder and partner with executive search firm ON Partners.

And many in the industry believe that it simply isn’t necessary right now. “The industry barely has standards, there is not a clear or dominant leader in the space and its uses are endless,” said Westveld. “In the days of the wireless explosion, we had standards like WiMax, Zigbee and Bluetooth, and we needed to wait for one winner before the space could really get its hands around rules and regulations.”

When all is said and done, AI may become all-pervasive soon, but that isn't the case right now, said Chris Shuptrine, VP of Marketing at Adzerk. "For all the paeans being sung in its praise, we are still at a stage where most organizations are figuring out how to best utilize AI."