Free Speech

The Biden Administration Is Pushing Social Media Platforms To Expand Their Definition of Intolerable COVID-19 'Misinformation'

Online censorship by proxy undermines the ordinary process for checking claims and counterclaims.

A New York Times story about the "rift" between Facebook and the Biden administration regarding COVID-19 "misinformation" illustrates the fuzziness of that category and the perils of suppressing it at the government's behest. While administration officials often claim they are just encouraging the platform to enforce its own rules, their idea of misinformation is not necessarily the same as Facebook's, and that gap shows that the government is imposing online censorship by proxy, pushing the company to expand its definition of intolerable speech.

"We've engaged with Facebook since the transition on this issue," White House spokesman Mike Gwin tells the Times, "and we've made clear to them when they haven't lived up to our, or their own, standards and have actively elevated content on their platforms that misleads the American people." Since the Biden administration has the power to make life difficult for social media companies by pursuing litigation, writing regulations, and supporting new legislation, Facebook et al. have a strong incentive to follow the government's "standards" rather than its own.

"Facebook told White House officials that it grappled with content that wasn't explicitly false, such as posts that cast doubt about vaccines but don't clearly violate the social network's rules on health misinformation," the Times says. "Facebook allows people to express their experiences with vaccines, such as pain or side effects after receiving a shot, as long as they don't explicitly endorse falsehoods."

D.J. Patil, the chief technology officer for Biden's transition team, had no patience with that distinction. "Seriously?" Patil texted "the Biden team" during one video call. "We have to get past the talking points. People are literally dying." That conviction culminated in Biden's July 16 charge that Facebook et al. are "killing people" by failing to police speech the way he thinks they should.

Surgeon General Vivek Murthy, in his July 15 advisory calling for a "whole-of-society" effort to combat the "urgent threat to public health" posed by "health misinformation," made it clear that the administration expects social media platforms to suppress statements that it deems "misleading," even when they might be true. "Claims can be highly misleading and harmful even if the science on an issue isn't yet settled," he said.

If a Facebook user says we don't yet have data on the long-term side effects of COVID-19 vaccines, for example, that would be true but unhelpful and therefore probably would count as "misleading" in Murthy's book. Likewise if someone emphasizes that the vaccines have not yet been fully approved by the Food and Drug Administration.

Another Times story about "virus misinformation" further illustrates the point. According to Zignal Labs, which provided the data underlying the paper's report that "coronavirus misinformation has spiked online in recent weeks," that category includes claims that "vaccines don't work," that "they contain microchips," and that they "cause miscarriages." Fair enough. But Zignal Labs also counts as "misinformation" the argument that "people should rely on their 'natural immunity' instead of getting vaccinated," which may be bad advice but is not a statement of fact.

According to a report from Media Matters for America, the Times notes, "many of the most popular posts" in Facebook groups "dedicated to antivaccine discussion" did not involve "explicit falsehoods." One example: "an image of a Scooby Doo character [Fred Jones] unmasking a ghost" labeled "delta variant." Fred says, "Let's see what makes you scarier than all the other variants." When he removes the mask, he reveals "the logos of MSNBC and CNN, implying that the cable channels were overstating the severity of the Delta variant."

Such media criticism not only does not qualify as an "explicit falsehood"; it is arguably accurate, even according to the Biden administration. But according to the Times, it is still problematic:

Like the comments on many of the other pages, those beneath the Scooby Doo item did contain unfounded claims. They also included calls to violence.

"China is completely to blame," one comment said. "We're going to have to fight them eventually, so I advocate a preemptive nuclear strike."

The Times seems to be suggesting that some blowhard on Facebook is inciting violence by calling for war with China, which implies that his comments might not be constitutionally protected. The paper also is suggesting that expressions of opinion should be deleted, even when they are not demonstrably wrong, if they might encourage other people to make "unfounded claims." Yet all of this speech, including verifiably false claims about COVID-19, is indisputably protected by the First Amendment. While social media companies like Facebook are not constrained by the First Amendment and have a right to moderate posts however they want, the situation changes when those decisions are shaped by the federal government's demands.

Even when the "misinformation" net is not cast widely enough to encompass media criticism and foreign policy commentary, there are many opportunities for errant determinations of truth. An episode that Reason's Robby Soave recently described shows how even well-intentioned fact checkers can err, promoting misinformation in their zeal to identify it.

PolitiFact rated as "false" an Instagram post by someone who said "my immune system's effectiveness" against COVID-19 is "99.98%." But that statement is consistent with the Centers for Disease Control and Prevention's "current best estimate" of the COVID-19 infection fatality rate (IFR) for children and teenagers, and it is close to the estimate that the CDC offers for adults younger than 50. Even the estimated IFR for 50-to-64-year-olds implies that more than 99 percent of people in that age group who are infected by the COVID-19 virus will survive. The IFR estimate for people 65 or older is much higher, implying a survival rate of 91 percent.

The Instagram user went off the rails when he compared the COVID-19 survival rate to the effectiveness rate for vaccines, which indicates the extent to which they reduce the risk of certain outcomes, such as infection, hospitalization, and death. But PolitiFact did not merely point out the flaw in that comparison; it declared that "the COVID-19 survival rate is not over 99%." According to the CDC's numbers, the statement PolitiFact deemed false appears to be true for all but the oldest age group.
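To spell out the arithmetic being confused here, consider a back-of-the-envelope sketch that uses only the figures already cited above; the effectiveness formula is the standard relative-risk-reduction definition, not anything specific to PolitiFact's analysis:

\[
\text{survival rate} = 1 - \text{IFR}, \qquad \text{e.g., } 1 - 0.09 = 0.91 \text{ (91 percent) for the 65-and-older group,}
\]
\[
\text{vaccine effectiveness} = 1 - \frac{\text{risk of outcome among the vaccinated}}{\text{risk of outcome among the unvaccinated}}.
\]

The first number describes what happens to people who are already infected; the second describes how much vaccination reduces the risk of an outcome in the first place, which is why the two are not comparable.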

When PolitiFact gets something wrong, critics have an opportunity to correct it. But when social media platforms suppress speech because of the government's behind-the-scenes "asks," that suppression undermines the ordinary process for checking claims and counterclaims. It also drives people with mistaken beliefs further into an echo chamber where their statements are less likely to be challenged. The Times notes the proliferation of vaccine misinformation on alternative platforms such as Gab, BitChute, and Telegram, which is a predictable result of the campaign to drive vaccine skeptics from mainstream services such as Facebook, Twitter, and YouTube.

Biden administration officials may think they are saving lives by pushing social media platforms to banish users whose messages might deter vaccination. But those efforts only harden resistance to vaccines by reinforcing the suspicion that anti-vaxxers are telling a truth the government does not want us to know. The alternative—addressing the concerns of vaccine-leery Americans by citing the evidence that contradicts their fears—is definitely harder, and success is by no means guaranteed. But at least it offers an opportunity to persuade people, which is how arguments are supposed to be resolved in a free society.