Twitter recently announced that it will no longer enforce its ban on "COVID-19 misinformation," a fuzzily defined category that ranged from demonstrably false assertions of fact to arguably or verifiably true statements that were deemed "misleading" or contrary to "guidance from authoritative sources of global and local public health information." The change in policy, which users first noticed about a month after Elon Musk completed his acquisition of the company, is consistent with his avowed commitment to lighter moderation and more freewheeling debate. But according to The Washington Post, "experts" are warning that "the move could have serious consequences in the midst of a still-deadly pandemic."
That fear has always been the justification for restricting what people can say about COVID-19 on social media, and it is by no means groundless. If false claims deter medically vulnerable people from getting vaccinated or encourage the use of ineffective and potentially dangerous treatments, for example, the consequences could indeed be serious. But policies like the one Musk has ditched present two intersecting problems: Misinformation is an inherently nebulous concept, and private efforts to suppress it on any given platform are strongly influenced by government pressure.
Legal restrictions on COVID-related speech that go beyond recognized exceptions such as fraud and defamation would be plainly unconstitutional. But government officials can achieve similar results by publicly and privately demanding that social media companies do more to curtail the spread of "misinformation."
Those demands involve not only more vigorous enforcement of existing rules but also expanded definitions of unacceptable speech. Platforms like Twitter, Facebook, and YouTube have a strong incentive to comply with those "requests," given all the ways that dissatisfied officials can make life difficult for them through castigation, regulation, litigation, and legislation. The upshot, to the extent that companies adopt stricter moderation practices than they otherwise would, is censorship by proxy.
That is exactly what we are seeing, according to a First Amendment lawsuit that Louisiana Attorney General Jeff Landry and Missouri Attorney General Eric Schmitt filed last May. Discovery in that case has revealed emails showing how keen executives at social media companies were to placate federal officials by suppressing speech they viewed as a threat to public health.
Twitter seems to have had an especially cozy relationship with the government's misinformation hunters. "I'm looking forward to setting up regular chats," said an April 8, 2021, message from Twitter to the Centers for Disease Control and Prevention (CDC). "My team has asked for examples of problematic content so we can examine trends."
Twitter responded swiftly to the government's censorship suggestions. "Thanks so much for this," a Twitter official said in an April 16, 2021, email to the CDC. "We actioned (by labeling or removing) the Tweets in violation of our Rules." The message, which was headed "Request for problem accounts," was signed with "warmest" regards.
That same day, Deputy Assistant to the President Rob Flaherty sent colleagues an email about a "Twitter VaccineMisinfo Briefing" on Zoom. Flaherty said Twitter would inform "White House staff" about "the tangible effects seen from recent policy changes, what interventions are currently being implemented in addition to previous policy changes, and ways the White House (and our COVID experts) can partner in product work."
Facebook likewise was eager to fall in line, especially after President Joe Biden accused the platform of "killing people" by allowing the spread of anti-vaccination messages. Such criticism was coupled with praise for companies that did what the Biden administration wanted. "In an advisory to technology platforms," CNN notes, "US Surgeon General Dr. Vivek Murthy cited Twitter's rules as an example of what companies should do to combat misinformation."
That July 2021 advisory, published the day before Biden charged Facebook with homicide, called for a "whole-of-society" effort, possibly including "legal and regulatory measures," to combat the "urgent threat to public health" posed by "health misinformation." Murthy's definition of "misinformation" was alarmingly broad and subjective.
"Defining 'misinformation' is a challenging task, and any definition has limitations," the surgeon general wrote. "One key issue is whether there can be an objective benchmark for whether something qualifies as misinformation. Some researchers argue that for something to be considered misinformation, it has to go against 'scientific consensus.' Others consider misinformation to be information that is contrary to the 'best available evidence.' Both approaches recognize that what counts as misinformation can change over time with new evidence and scientific consensus. This Advisory prefers the 'best available evidence' benchmark since claims can be highly misleading and harmful even if the science on an issue isn't yet settled."
Twitter's now-rescinded policy, which Murthy cited as a model, likewise deferred to the officially recognized consensus. "We have broadened our definition of harm to address content that goes directly against guidance from authoritative sources of global and local public health information," the company said. "We are enforcing this in close coordination with trusted partners, including public health authorities and governments, and continue to use and consult with information from those sources when reviewing content."
Under that test, any expression of dissent from official advice could be deemed misinformation. Twitter explicitly forbade "statements which are intended to influence others to violate recommended COVID-19 related guidance from global or local health authorities to decrease someone's likelihood of exposure to COVID-19." It specifically mentioned advice about masking and social distancing, both of which raise scientifically and politically contentious issues, especially when "guidance" inspires legal mandates.
Is questioning the benefits of general masking misinformation? (Yes, according to Twitter.) What about conceding the effectiveness of properly worn N95s while describing commonly used cloth masks as worthless? (Also misinformation, according to YouTube.)
If contradicting official advice is the criterion, questioning the scientific basis for requiring masks in schools or for maintaining a specific distance from other people likewise could count as misinformation. Even arguing that the costs of lockdowns outweighed their benefits might qualify, since those policies were aimed at enforcing social distancing.
Reasonable, well-informed people can and do disagree about such issues, based on different assessments of the "best available evidence," which Murthy says is the key to distinguishing between misinformation and acceptable speech. And whatever the correct take might be today, it could be different tomorrow. As Murthy concedes, "what counts as misinformation can change over time with new evidence and scientific consensus."
Over the course of the COVID-19 pandemic, the conventional wisdom on subjects such as the utility of cloth face masks, the right distance for social distancing, isolation periods, intubation of patients, and the effectiveness of vaccines in preventing virus transmission has shifted repeatedly in response to emerging evidence. That process would be impossible if every deviation from the "scientific consensus" were deemed intolerable.
Fortunately, neither Twitter nor any other company has the power to enforce such conformity across the country. Thanks to the First Amendment, the government does not have that power either. The Biden administration's ham-handed crusade against COVID-19 "misinformation" nevertheless aims to reduce the diversity of opinions that people can express on major social media platforms, encouraging policies that equate all skeptics and dissenters with crackpots and charlatans. Whatever you might think about other changes Musk has brought to Twitter, his refusal to participate in that scheme is a hopeful sign.