"We built AI to make our lives better," said Shannon Vallor, technology ethicist and professor in the Department of Philosophy at Santa Clara University, but using these systems without careful consideration can have "consequences we didn't expect."
Vallor was among the four panelists gathered in San Francisco last month to discuss "The Ethics of Code: Exploring Diversity, Inclusion and the Future of AI Development." Newcastle, England-based Sage Group, the world's third-largest ERP solutions provider, hosted the event.
Other panelists included Kriti Sharma, vice president of Bots and AI at Sage; Amir Shevat, head of developer relations at San Francisco-based Slack; and Deepti Yenireddy, co-founder of Seattle-based MyAlly. Tom Simonite of Wired moderated the discussion.
AI: 'Not Just a Tech Problem, a Social Problem'
"AI is learning from our interactions," noted Sharma. "Our AI assistant Pegg knows so much more than it did a year ago," she added, noting that there is still a skills gap when it comes to users' understanding around how to use it in their daily work and lives. "It's not just a tech problem, it's a social problem."
For now, Sharma's company is creating AI to automate expense reporting and other administrative tasks, which can consume as much as 30 percent of some employees' time.
"That is time wasted," she said, adding that humans should spend their time on things that really matter, like meeting clients. Some 20,000 of Sage's end users rely on Pegg, the company's chatbot, to handle invoices, chase customers for payments and manage employee benefits information. These tools augment humans, make their jobs better and add to productivity, according to Sharma.
Shevat echoed that sentiment, likening AI to a new user interface (UI). He compared the growth of AI to earlier shifts: from desktop software to web applications like Salesforce, and from web apps to mobile apps like Lyft for getting from point A to point B. Now bots can simplify processes like approving vacation requests.
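Shevat's own platform shows how low the barrier to this new UI has become. As a minimal sketch (my illustration, nothing demonstrated at the panel), here is what a vacation-request bot could look like using Slack's open source Bolt framework for Python; the slash command name and the environment variables are assumptions, and a real deployment would wire the handler to an actual HR system:

```python
# Sketch of a Slack bot handling a vacation request with Bolt for Python.
# Assumes an existing Slack app with SLACK_BOT_TOKEN and SLACK_APP_TOKEN
# set in the environment; the "/vacation" command is hypothetical.
import os

from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler

app = App(token=os.environ["SLACK_BOT_TOKEN"])

@app.command("/vacation")  # hypothetical slash command
def handle_vacation(ack, command, say):
    # Acknowledge the command within Slack's response deadline.
    ack()
    # A real workflow would call an HR system here; this just confirms.
    say(f"Vacation request noted for <@{command['user_id']}>: {command['text']}")

if __name__ == "__main__":
    SocketModeHandler(app, os.environ["SLACK_APP_TOKEN"]).start()
```

The point is less the specific API than the pattern Shevat describes: a task that once required a form in a web app becomes a short conversational exchange.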
AI's Ethical Implications
But it's not all carefree, said Shevat. Bots can become capable of doing some unexpected things. He told a story of trying to break one of Google's chatbots with a picture of a cat, even though image recognition was not one of the bot's features. Rather than break, the bot said, "What an awesome cat."
"It (AI) can be helpful, but it can have ethical implications as well," he said.
That is something Vallor thinks a great deal about, especially when it comes to robots. “We project agency onto all kinds of things: pets, robots," she said. She recounted the story of a motorized trash can that wandered around an office: workers assumed it wanted them to feed it trash and speculated about what it was doing to get fed.
“People attribute mental states, desires and beliefs to things that don’t have them," said Vallor. That, she argued, opens the door to vulnerability, manipulation and deceit.
AI is inherently hungry for data, so it wants to keep you engaged, according to Vallor. This can create problems, like people becoming more engaged with AI than with other humans.
Imagine an artificial agent following you around saying, "I am lonely. I'm bored. Could you talk to me about movies you like?" In some situations, the end result could be time taken away from spouses, children and friends. And, of course, more data for marketers who want to manipulate you to get your money.
"We need to think about the way we interact with these systems. We, without thinking, develop emotional and morally-laden relationships with artifacts. This will take-off very quickly and have consequences we didn't expect," said Vallor.
AI Mirrors Our Biases
The panel also noted that these interfaces and systems weren't being designed, developed or trained by a diverse group of experts, which means sentiment and content analysis could be nonexistent for minority groups, including the elderly. After all, if a bot doesn't understand you, it can't take the right action for you, which potentially puts you at a disadvantage.
But that’s only one problem. Personalization, and even empathy, can cause others.
The panel discussed the pros and cons of letting users choose the sex, race, religion and other aspects of their bots. While the choice can seem empathic and helpful, it can also be horrid. Sharma painted an ugly, artificial scenario in which female bots are tasked with ordering lunches, scheduling meetings and keeping calendars while male bots make important decisions. Add race and religion into that scenario and it potentially gets even uglier.
Setting Boundaries for Our Artificial Assistants
At the end of the day, it is up to humans to build and train AI and to set boundaries. “AI doesn’t need to be as intelligent as humans,” said Sharma. It can be taught to complete a task and no more, provided humans do not move the boundaries. What this means is that if you ask an artificial assistant to pay a bill, for example, you don't engage when it makes a special offer for a new credit card.
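To make that boundary concrete, here is a minimal sketch (my illustration, not Sage's implementation) of an assistant that only acts on an explicit whitelist of tasks; the intent names and payload fields are hypothetical:

```python
# Sketch of the "complete a task and no more" boundary Sharma describes:
# the assistant acts only on whitelisted intents, and anything outside
# that boundary -- like an unsolicited credit card offer -- is refused
# rather than negotiated.
ALLOWED_INTENTS = {"pay_bill", "check_balance"}  # hypothetical task list

def handle_request(intent: str, payload: dict) -> str:
    if intent not in ALLOWED_INTENTS:
        # Out-of-scope requests are declined outright.
        return f"Declined: '{intent}' is outside this assistant's scope."
    if intent == "pay_bill":
        return f"Paying {payload['amount']} to {payload['payee']}."
    return "Balance check queued."

print(handle_request("pay_bill", {"amount": "$120", "payee": "Utility Co."}))
print(handle_request("offer_credit_card", {}))
```

The design choice mirrors Sharma's point: the assistant is not made smarter, it is made narrower, and the boundary holds only as long as humans leave the whitelist alone.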
“We as a society need to solve problems, not create new ones,” said Shevat, who added, “If we can train humans to treat software better, then it can train humans to treat humans better.”
It is a nice vision, but we have a long way to go before we get anywhere near it.