When you cook a special meal, you want the best ingredients. In social media, the best ingredient for analytics is a healthy audience. After all, an audience's activity on a social media profile creates the metrics from which marketers learn how best to connect with people. A safe environment is one where permission-based marketing is accepted, which enhances brand safety as well.
But when people feel unsafe because of trolls, more than metrics is at risk. Social media platforms have begun adjusting their features to discourage abusive trolling, making it easier for influencers to avoid harassment and better protecting the mental health of users overall.
YouTube removed "dislike" counts from videos and livestreams on the platform as a beta experiment, then rolled the change out to all accounts in August. The removal highlights social media's attempts to protect users' mental health while keeping brand safety in check.
Related Article: Mastering Brand Reputation Management in the Social Media Age
How YouTube’s Move to Remove the ‘Dislike’ Count Counts
Viewers could indicate a dislike by clicking the thumbs-down icon, and the dislike count once appeared alongside it. Now only the icon is shown, part of a refreshed video player design. The "like" count and its thumbs-up icon remain. Removing dislike counts from public view is meant to discourage trolls from inflating the number to target individuals.
Trolls have found many ways to abuse people online. Usually this meant rude comments, which can be reported on a given platform. But new aggressive behaviors emerged as platforms added features. On YouTube, trolls hit the dislike button on videos hosted by their targets; repeated hits drive up the dislike count and build a negative perception of a video. Likes have become social currency, a visible sign of support, so piled-on dislikes are meant to humiliate their targets.
Related Article: Is Social Media Ruining Our Lives?
When ‘Likes’ and Lives Are at Stake
So, while social media algorithms treat downvotes as a signal to demote a video or post, platforms are realizing that a flood of downvotes can also be a sign of harassment. In response, they have begun reevaluating how some features can be misused to harass people.
Over the years, various tech insiders have commented on the need to refine social media metrics with mental health in mind. Evan Williams, co-founder of Twitter, noted the need for better metrics back in 2012. In 2015, Mark Zuckerberg, founder and CEO of Facebook (now Meta), floated the idea of removing likes (I explained some of the rationale in my post on sentiment analysis).
But those were mere observations. At the time, the impact of social media on mental health was barely considered, let alone well researched; social media was simply too new. Today, people are questioning the internet behaviors that contribute to poor mental health.
Bad human behavior has not changed alongside this more enlightened understanding of social media's potential harm to mental health. Back in 2017, a Pew Research Center study reported that 4 in 10 surveyed adults had been harassed online, and 66% said they had witnessed harassment, a slight increase from Pew's previous study in 2014. Fast forward to 2020: Pew found that while overall rates were similar, the intensity of harassment had increased, with 41% saying they had been harassed and 25% experiencing more severe forms of harassment.
Related Article: How Influencers Help Build a Better Customer Experience
What Social Media Is Doing to Protect Mental Health
Like YouTube, other social media platforms have run experimental trials of their features. In 2021, Instagram introduced a hidden like count for posts: users could still like a post but could no longer see how many likes it had received.
Back in February, Twitter began experimenting with downvotes, a variation on YouTube's dislike button, letting users of the Twitter app downvote replies to tweets. As with the YouTube and Instagram examples, the downvote counts are not visible to other users, nor are they shown to tweet authors. Twitter uses downvotes to gauge how relevant a reply is. In May, it decided to roll the downvote feature out to website users as well.
More researchers are examining main feeds to better identify mental health traits and risks. Studies indicate that people who spend more time on social media and less time in face-to-face interaction face a higher risk of mental health conditions such as depression. Platforms must now address mental health when launching a feature, balancing the competing interests of retaining an audience and keeping users safe.
More harassment-prevention features are being introduced to better align with social media's societal influence. For example, Adam Mosseri, head of Instagram, announced expanded parental controls on Instagram; parent company Meta later brought similar controls to Facebook. The controls limit who can see young users' friends lists and the pages they follow, putting abuse prevention at the core of teens' social media usage.
Brands with beauty and fashion offerings are also taking action to ensure their marketing does not endorse bad behavior. Ogilvy announced it will not work with influencers who edit their bodies or faces in ads. Two brands, Lush and Bottega Veneta, have gone further and eliminated their social media presence entirely, even though image and video are central to how such brands market themselves.
As a result, the developer ethos of "move fast and break things" that has underpinned software product design, including feature development in social media, is evolving to incorporate insights from psychology and health science into how the customer experience is delivered. Moreover, leaders at the largest platforms are learning they cannot sit on negative findings: data scientist Frances Haugen testified before the US Congress and UK Parliament about decisions inside Facebook concerning the emerging research on social media's mental health impact.
Social media platforms are paying closer attention to the human response to their feature designs. They still have more to do, particularly as regulation is being considered, so marketers must keep abreast of feature changes as a brand safety issue as well. The shift in tone is timely, as the public increasingly recognizes how the technology embedded in the products and services they use affects key decisions and their well-being.