As the bulk of our conversations moves inexorably from face-to-face discussions to virtual meetings to online chat forums, we face a slew of new challenges that threaten constructive and fulfilling human interactions. The members of the generation born into this world of digital interactions have been both its beneficiaries and its victims, with cyberbullying and its tragic consequences now coming to the fore.

Facebook, Instagram, Snapchat, Twitter and the other platforms that have hosted cyberbullying are now actively using artificial intelligence (AI) to combat it.

The Annual Cyberbullying Survey 2017, conducted by an anti-bullying organization called Ditch the Label, found that Facebook and Instagram hosted the most instances of cyberbullying. It therefore came as no surprise when Instagram CEO Kevin Systrom announced an initiative to help stamp out cyberbullying. 

Using a Facebook tool called DeepText (which Facebook describes as a “learning-based text understanding engine that can understand with near-human accuracy the textual content of several thousands [of] posts per second”), Instagram is flagging inappropriate content.

In essence, this initiative uses AI text analysis to “understand” textual statements and then either block inappropriate content or, in Facebook’s case, “enhance” the online experience: if you start chatting with your sister about wanting to sell your old sofa, for example, Facebook will link you to Facebook Marketplace.
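DeepText itself is not publicly available, but the basic mechanics of flagging text can be sketched with off-the-shelf tools. The snippet below is a minimal illustration using scikit-learn; the training examples, the labels and the blocking threshold are all invented for demonstration and bear no relation to how Facebook's production systems actually work.

```python
# Minimal sketch of a text-flagging classifier (illustrative only; not DeepText).
# Training examples, labels and the threshold are invented for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, made-up training set: 1 = inappropriate, 0 = acceptable.
texts = [
    "you are worthless and everyone hates you",
    "nobody wants you here, just leave",
    "great photo, thanks for sharing",
    "looking forward to seeing you at the meetup",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score new comments and block the ones the model considers likely inappropriate.
for comment in ["everyone hates you, just leave", "thanks for sharing the photo"]:
    score = model.predict_proba([comment])[0][1]  # probability of the 'inappropriate' class
    print(f"{score:.2f}  {'BLOCK' if score >= 0.5 else 'allow'}  {comment}")
```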

How Can Business Benefit From AI in Social Platforms?

As is the case in everyday life, business conversations inside the enterprise are also moving onto digital platforms such as Yammer, Teams, Slack, Workplace and the like. And just like the consumer platforms, these enterprise tools can benefit from AI-driven text and sentiment analysis. For example, we have been able to develop rich profiles of individuals’ online behaviors from the conversations held on these platforms.

Individuals can use the insights from such profiles for self-improvement, and business leaders can use them to make work environments more conducive to constructive and collaborative behaviors. For example, if my online profile classified me as a “broadcaster” (someone who makes many online statements that attract few, if any, reactions), I would think about how I could change my posting behavior to become more collaborative or engaging. As a leader, if I were to see a disproportionate percentage of broadcasters in my group, I might consider interventions that promote more interaction and conversation, for example by opening conversations with challenges or questions rather than statements.
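To make the “broadcaster” idea concrete, here is a rough sketch of how such a classification could be derived from raw activity data. The data structure, the reaction cutoff and the label names are assumptions made purely for illustration; they are not the actual profiling model described above.

```python
# Illustrative sketch: classify a user's posting style from their activity history.
# Field names, thresholds and labels are invented for demonstration.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    reactions: int  # likes, replies, mentions, etc., combined

def classify_style(posts: list[Post],
                   min_posts: int = 10,
                   max_avg_reactions: float = 1.0) -> str:
    """Label someone a 'broadcaster' when they post a lot but draw little response."""
    if len(posts) < min_posts:
        return "low-activity"
    avg_reactions = sum(p.reactions for p in posts) / len(posts)
    return "broadcaster" if avg_reactions <= max_avg_reactions else "engager"

# Example: a dozen posts that attracted almost no reactions.
history = [Post("status update", 0) for _ in range(11)] + [Post("open question", 3)]
print(classify_style(history))  # -> broadcaster
```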

Related Article: Can Artificial Intelligence Weed Out Unconscious Bias?

Using AI to Become a Better Negotiator

A lot of what we do in business is negotiate. Whether it is with customers, suppliers or internal colleagues, we are always looking for ways to better influence people to see things our way. If you took an introductory course on negotiation, you likely learned about the “red and blue” game, which is designed to help participants explore the concepts and effects of trust and betrayal during negotiations. It is interesting to think about business negotiations in the context of cyberbullying, because businesspeople involved in work negotiations may sometimes feel as though bullying is taking place (thankfully not in our organization). The way to deal with such behavior is the same in both cases: sometimes you apply more assertive (red) behaviors, and at other times more conciliatory (blue) ones.


When we look at an online discussion thread on an important topic, we can detect red or blue positioning by applying AI sentiment analysis to what is actually said. You might review some of your own online discussions with the help of an AI engine to assess whether you could have negotiated more effectively. Or, during a live online discussion, you might reflect on what your “next move” should be based on advice from an AI engine monitoring the sentiment of the conversation.
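As a rough sketch of how a thread could be scored this way, the example below uses NLTK's off-the-shelf VADER sentiment analyzer as a stand-in for whatever AI engine is doing the real analysis. The messages are invented, and the ±0.05 cutoffs are simply VADER's conventional defaults, not anything specific to the red/blue framing.

```python
# Sketch: tag each message in a thread as red (negative), blue (positive) or neutral.
# VADER stands in for a real sentiment engine; the thread is an invented example.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-off lexicon download
sia = SentimentIntensityAnalyzer()

thread = [
    ("Person A", "I think we can find a schedule that works for both teams."),
    ("Person B", "This proposal is a complete waste of my time."),
    ("Person A", "Let's look at the numbers together before we decide."),
]

for speaker, message in thread:
    compound = sia.polarity_scores(message)["compound"]
    if compound >= 0.05:
        tone = "blue (conciliatory)"
    elif compound <= -0.05:
        tone = "red (assertive)"
    else:
        tone = "neutral"
    print(f"{speaker}: {tone}  [{compound:+.2f}]")
```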

Related Article: Exploring the Ethical and Social Implications of AI

An Example of How AI May Help a Negotiation

Sometimes you may not even need the AI engine to offer recommendations: simply seeing the sentiment classifications can give you enough feedback to make decisions on your own. To demonstrate how this might work, analysts at my firm took the transcript of a “negotiation” conducted between two national leaders over an immigration issue and fed it into an AI sentiment analysis engine. We did not expose the actual content of the conversation, only the sentiment: positive statements are colored green, negative ones red and neutral statements gray. We then provided a commentary on what an AI engine might interpret by learning from large suites of similar online discussion patterns, as illustrated in this diagram:

[Diagram: sentiment-coded dialogue between Person A and Person B, illustrating how an AI engine might recognize bullying behavior]

As we can see from the above, Person B engaged in sustained “red” behavior, whereas Person A largely remained positive or neutral. In this case, Person A's tactic eventually turned Person B's behavior into one of acceptance and therefore yielded a successful negotiation. If you are interested in the actual context of this discussion, you can read about it on our website.

Many business innovations come about when a need, and a response to it, are identified through challenges faced by society as a whole, in this case cyberbullying. These solutions can deliver many positive outcomes for business if we choose to pursue them effectively.
