A man in a tree, cutting through the branch he is sitting on

Salesforce CEO Marc Benioff won mountains of positive attention this month when he appointed a chief ethical and humane use officer, Paula Goldman. Her mission? "To develop a strategic framework for the ethical and humane use of technology across Salesforce," according to a company press release. The news probably didn't shock anyone, given that Benioff had been on a media tour earlier in the month promoting the idea of the government stepping in and regulating technology companies like Facebook.

"Facebook is the new cigarette. It is not good for you. It's addictive. You don't know who's trying to convince you to use it or misuse it...," Benioff said during an interview with Laurie Segall on CNN.

Some might infer from this that Benioff would not recommend leveraging Facebook’s data to market products. Yet, if you look on Salesforce's website, you will find a video titled, "Acquiring and cultivating customers using Facebook lead ads and Salesforce." The intention behind mentioning this is not to call out Salesforce, but to point out that even marketers like Benioff, who seem to care deeply and passionately about ethical practices, can fall into traps without meaning to. And that is without applied artificial intelligence (AI), machine learning (ML) and bots going rogue.

Benioff said that because we, as consumers, workers and managers, may not be able to distinguish whether we are talking to technology or a human being, we must require that technology be designed to identify itself as technology when it communicates with humans.

Related Article: Exploring the Ethical and Social Implications of AI

Defining the Ethical and Humane Use of Technology is a Tall Order

While Goldman is slated to define what the ethical and humane use of technology means for Salesforce, other businesses, organizations and individuals will need to ponder this as well. It is a tall and important order.

Consider the warning issued by former Google design ethicist Tristan Harris during a TED Talk last year: "Never before in history have such a small number of designers — a handful of young, mostly male engineers, living in the Bay Area, working at a handful of tech companies — had such a large influence on 2 billion people's thoughts and choices." Harris now leads the Center for Humane Technology and is the co-founder of the Time Well Spent movement.

The small number of designers that Harris referred to likely includes employees of Amazon, Apple, Facebook, Google, Microsoft and Twitter, among others. These tech firms would be far less successful if marketers and advertisers were not spending dollars to leverage their data, algorithms, AI and ML. This leads to an interesting question: Do brands care about the ethical practices of their business partners, suppliers and customers? If so, what does that care look like in practice? Here is what the experts have to say:

JT Kostman, PhD, leader, applied AI and advanced technology at consultancy Grant Thornton LLP

The short answer: "Brands that truly care about their brand also care about how their partners conduct themselves. Brands that just want to turn a quick buck don't."

The longer answer: "It wouldn't be fair to paint all brands with the same brush. While some will obviously be willing to compromise ethical concerns for the quick win, those decisions invariably come back to haunt them. Consumers have become too savvy, too discriminating, and have too many choices to tolerate nefarious practices. Brands that are more interested in lifetime value and customer retention, not ephemeral quick-hits, treat their customers with the respect they deserve."

Carsten Thoma, investor, co-founder and past president of hybris (SAP)

"[Brands should look to] maximize the use of data (with permission) while protecting identity to the max. That is the best of both worlds: a technology that can meter, visualize, continuously adapt and commercialize the relationship between any type of entity (human or goods), and offer multiple identity protection. Unfortunately, sometimes the ethical behavior of companies decreases almost linearly with the spread of operating margins."

Larry "LK" Kihlstadius, chairman and president of Vistage Worldwide, Inc., a peer CEO advisory organization with over 22,000 senior executive members

"...just like everything in life, it's a bell curve. The CEOs in my Vistage groups care deeply about B2B marketing ethics as well as B2C. I find you have to ask very deliberate and clear questions on exactly how the data is collected, how rights are given or not given, and what is the long-term utilization of the data."

Ray Wang, principal analyst and co-founder of Constellation Research

"Organizations need to define corporate ethics (looking through the lens of AI). There are five design points to include in a strategy. First, you need transparency in algorithms, attributes and correlations. They should be available to all participants. Next, things need to be explainable: we need to be able to know how contextual decisions are reached. What if the model shows an unconscious bias for, say, someone who has purple hair? Third, decisions need to be reversible. Fourth, systems need to be trainable, because, for example, while 95 percent accuracy for manufacturing might be fine, for healthcare 95 percent is a disaster. Finally, the process must be human-led, not machine-led."
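Wang's "purple hair" example can be made concrete with a simple audit. The sketch below is purely illustrative — it is not a Salesforce or Constellation Research tool, and the `bias_gap` helper, field names and data are all hypothetical. It compares a model's approval rate for records with and without an attribute that should be irrelevant to the decision; a large gap is a signal that the model may have learned an unconscious bias.

```python
# Hypothetical bias audit: compare decision rates across an attribute
# that should not matter (e.g., purple hair). Illustrative only.

def approval_rate(decisions):
    """Fraction of decisions that were approvals (True)."""
    return sum(decisions) / len(decisions)

def bias_gap(records, attribute):
    """Approval-rate difference between records with and without the attribute.

    records: list of dicts such as {"purple_hair": True, "approved": False}
    A gap far from zero suggests the attribute is influencing decisions.
    """
    with_attr = [r["approved"] for r in records if r[attribute]]
    without_attr = [r["approved"] for r in records if not r[attribute]]
    return approval_rate(with_attr) - approval_rate(without_attr)

# Made-up data in which purple-haired applicants fare noticeably worse.
records = [
    {"purple_hair": True, "approved": False},
    {"purple_hair": True, "approved": False},
    {"purple_hair": True, "approved": True},
    {"purple_hair": False, "approved": True},
    {"purple_hair": False, "approved": True},
    {"purple_hair": False, "approved": False},
]

gap = bias_gap(records, "purple_hair")
print(f"Approval-rate gap: {gap:+.2f}")  # prints "Approval-rate gap: -0.33"
```

An audit like this speaks to Wang's first two points — transparency about attributes and correlations, and explainability — and flagging the gap before deployment is what makes decisions reversible and systems trainable in practice.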