Information management professionals focus on realizing the promise in our information: leveraging our data to blaze a trail to transformation. But the arrival of accessible platforms, such as social media, makes it easier than ever to inundate the information landscape with misinformation. Policy makers around the world are calling for coordinated, collective responses to stem the flow of dangerous myths. The call is growing, and it portends changes in how we manage content.
The High Cost of Misinformation
Misinformation costs lives. At the time of writing, roughly 2,000 people a week are dying from COVID-19 in the U.S. alone, and 99% of those deaths are among people who did not get vaccinated. Some cannot access vaccines because of economic or logistical circumstances, and a renewed commitment to expanding vaccine access will be crucial to prevent avoidable suffering. But 24% of U.S. adult respondents in a Gallup poll said they do not plan to get vaccinated.
Misinformation also wrecks economies and thwarts responses to disasters and war. It’s not just that the information is wrong, but how deeply and quickly it spreads. A study by MIT scholars found “false news spreads more rapidly on the social network Twitter than real news does — and by a substantial margin.” Misinformation is an unrelenting loudmouth, causing confusion among the public on already-complex issues.
Experts like Stanford University's Blair Bigham, who studies the spread of misinformation, have joined the growing call to control the “misinfodemic,” describing it as a dangerous and even deadly force. A map of misinformation spreading on Twitter looks like the maps used to track cholera or meningitis outbreaks, Bigham wrote.
Readers don’t know whom to trust, and trust takes time to build. The responsibility for controlling the spread of misinformation resides with a number of key players, from social media giants to the consumers themselves:
- Social media platforms: Just a few social media accounts are responsible for an overwhelming majority of misinformation. Governments and public health agencies are telling social media giants to be more proactive and conduct surveillance for misleading posts and filter them. Councils around the world are discussing the ethics of algorithms that drive much of the social media noise.
- Subject matter experts: Scientists, doctors and thought leaders can craft clear messages and advise the public on how to ‘prebunk’ bad information coming its way.
- Consumers: Media experts and social scientists are advising consumers to limit their media exposure, be more discerning about what they consume, and reflect on its potential value or harm.
Related Article: The Risks and Consequences of Information Mismanagement
Implications for Content Management Professionals
Information and records managers are already used to profiling their critical content for trust. We have practices and systems to measure quality, to demonstrate compliance with privacy and copyright laws and to mark files for disposition at the end of their lifecycle. We’re already applying external regulations to our content, just as there are government food safety regulators looking out for the public good in partnership with the food producers themselves.
- Misinformation’s potential for serious harm is real, and the calls for remedies are growing. If content managers, along with other leaders, can do something to help, let’s engage in the discussion to find the right balance between open discourse and censorship.
- The truthfulness of content within our stewardship is a dimension of quality. We’ll need to agree on where the information-vs.-misinformation checkpoints will be and who will govern them. Will the tech giants apply more rigorous and self-disciplined surveillance on information before enterprises capture it? To what extent will companies need to evaluate and tag the truthfulness of the information of which they are custodians?
- For those who argue that identifying and controlling misinformation threatens their freedom of expression: that freedom is already measured by its impact on the public welfare, even on privately owned platforms (hate speech and incitement to violence, for example). If safety inspectors are welcome for our water and groceries, why not oversight for our information safety? It’s part of the ongoing transformation in how we think of information: from "my stuff" to "our collective assets," and from something abstract and benign to something concrete and powerful. The response to misinformation will need to be collaborative and organized.
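Treating truthfulness as a quality dimension, as suggested above, could in principle look much like the quality and disposition metadata content managers already maintain. Here is a minimal sketch; the field names, statuses and records are hypothetical illustrations, not any real product's schema:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class VerificationStatus(Enum):
    """Hypothetical truthfulness states a custodian might track."""
    UNREVIEWED = "unreviewed"  # captured but not yet checked
    VERIFIED = "verified"      # confirmed by a designated checker
    DISPUTED = "disputed"      # conflicting assessments on record
    DEBUNKED = "debunked"      # identified as misinformation

@dataclass
class ContentRecord:
    record_id: str
    source: str
    verification: VerificationStatus = VerificationStatus.UNREVIEWED
    reviewed_by: Optional[str] = None  # who governed this checkpoint

def needs_review(record: ContentRecord) -> bool:
    """Flag records whose truthfulness has not yet been assessed."""
    return record.verification is VerificationStatus.UNREVIEWED

# Illustrative records: one captured from a feed, one already checked.
records = [
    ContentRecord("doc-1", "social-feed"),
    ContentRecord("doc-2", "press-release",
                  VerificationStatus.VERIFIED, "fact-desk"),
]
review_queue = [r.record_id for r in records if needs_review(r)]
```

The design point is that truthfulness becomes one more governed attribute, with a named reviewer at each checkpoint, rather than an informal judgment made at capture time.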
We will watch how social media companies respond to calls to monitor and expose misinformation. Some sources of misinformation are easy for them to spot, such as hate groups. But much of the content comes from us: we are all broadcasters in this digital assembly. Like the data quality maps information professionals draw to trace the points in data’s journey where trust and quality become compromised, the solution involves multiple checkpoints and responsibilities. Will a social media company be responsible for fact-checking my doctor’s credentials?
Related Article: Information Governance Is Boring, But Necessary
A Framework for Managing Misinformation
Clear policies and definitions of who is responsible for checking and acting on misinformation should answer these questions:
- Who is in the best position to identify misinformation? For example, should tech giants hire third-party fact-checkers?
- What are the triggers for a response by social media platforms? For example, when a story goes viral and draws complaints?
Content managers are familiar with these questions and with the importance of agreeing on a framework where each of us manages digital debris in ways that make sense.