As we said far too often back in college, “It all seemed like a good idea at the time.”
Consider these inspiring mission statements:
"Facebook's mission is to give people the power to build community and bring the world closer together. People use Facebook to stay connected with friends and family, to discover what's going on in the world, and to share and express what matters to them."
"The mission we serve as Twitter, Inc. is to give everyone the power to create and share ideas and information instantly without barriers. Our business and revenue will always follow that mission in ways that improve — and do not detract from — a free and global conversation."
"Our mission [at YouTube/Google] is to give everyone a voice and show them the world. We believe that everyone deserves to have a voice, and that the world is a better place when we listen, share and build community through our stories."
When Ethics and Mission Statements Collide
According to Pew Research, 42 percent of Facebook users have taken a break from the site in the past year. Evan Osnos notes in the New Yorker, "Zuckerberg is now at the center of a full-fledged debate about the moral character of Silicon Valley and the conscience of its leaders." And Facebook is not alone.
The lofty ideas of peace, love and understanding promulgated by Facebook, Twitter, and YouTube/Google have clearly collided with three unanticipated forces:
- Web business models based on ever-more niched and segmented click-based advertising.
- A regulatory infrastructure (Section 230 of the Communications Decency Act of 1996) that says, "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
- Nefarious players on both the left and the right, augmented by nation-state resources, determined to exploit the first two forces.
All of which came together in early September in the Dirksen Senate Office Building for eight hours of testimony from Facebook’s COO Sheryl Sandberg, Twitter’s CEO Jack Dorsey, and an empty chair representing Google.
It's Getting More Complicated
If there ever was an area in which we are experiencing the law of unintended consequences — outcomes that are not the ones foreseen and intended by purposeful action — it’s in the social media arena. The warning signs are all around on both the right and the left that things haven’t quite turned out the way we thought, and the resolution will be much trickier than we think.
Earlier this year, Facebook banned Alex Jones, an unsavory character who was the source of Holocaust, 9/11 and Sandy Hook denial stories. The ban triggered a spate of stories on the right about the major social media platforms' potential power to curb free speech, and about who ultimately should possess that power. Per Osnos's article quoted earlier, "Zuckerberg’s most intractable problem may lie elsewhere — in the struggle over which opinions can appear on Facebook, which cannot, and who gets to decide."
Almost as if to prove the point, after Zina Bash appeared to make an “OK” sign during the Kavanaugh hearings, a tweet from the left, from #Resistance figure Eugene Gu, interpreted the gesture as a signal of support for white supremacy. In a matter of moments it generated 15,000 retweets and millions of YouTube views of a Zapruder-like, frame-by-frame review of the gesture.
Potential Solutions and Their Concomitant Landmines
So where do we go from here? Unfortunately, most of the paths available have a number of landmines. Here are some suggestions that have been made:
- Social media platforms should have the same kind of accountability for items posted on their platforms that media organizations do (i.e., do away with the Section 230 exemption). But given the current monopolistic power of these platforms, what would be the implications for free speech?
- Some sort of government oversight should be required on what is “responsible” and what is not. That’s fine, so long as you can guarantee that crazies won’t get control of this oversight group. (Incidentally, like everyone else, I tend to define “crazies” as anyone with whom I personally disagree.)
- We need “more laws” to “regulate” the social media platforms, the renegade players who abuse them, or both. But if the Facebook/Twitter/Google hearings have taught us anything, it’s that lawmakers' grasp of how the internet and social media actually work is thin enough that new regulation could easily make the problem worse.
- Still others say the core of the problem rests with the financial model underpinning the web — niched, click-based advertising — and that we need to “fix” that. That genie should certainly be easy to put back into the bottle.
Related Article: Marketers, Data Collection and the E-Word: Ethics
Next Steps for the Rest of Us
Meanwhile, organizations seeking to do something as pedestrian as using Facebook or Twitter or YouTube to sell stuff and engage their customers need to figure out how to sort their way through all of this looming controversy. A good starting point is to make sure their own privacy and information security houses are in order.
For example, right now, many state legislatures are proposing “sons and daughters of GDPR” measures in an effort to fill the policy vacuum around information privacy and security. The National Conference of State Legislatures (NCSL) notes, "Privacy issues are a growing concern of Americans, especially as the Internet and technology have made personal information more accessible and easier to collect, access and repurpose or manipulate."
Hundreds of state laws were considered and/or enacted in the past year related to such issues as internet privacy, internet service providers, employer and school access to social media usernames and passwords, cybersecurity, computer crime, and a host of others — the full list can be found on the NCSL website.
We need a better way of understanding and anticipating these kinds of technology issues, and not just from the perspective of technology providers. It’s time for technology USERS to make their needs and voices and concerns heard in a more systematic way.