Free Speech

These Emails Show How the Biden Administration's Crusade Against 'Misinformation' Imposes Censorship by Proxy

Social media companies are eager to appease the government by suppressing disfavored speech.


On July 16, 2021, the day that Joe Biden accused Facebook of "killing people" by failing to suppress misinformation about COVID-19 vaccines, a senior executive at the social media platform's parent company emailed Surgeon General Vivek Murthy in an effort to assuage the president's anger. "Reaching out after what has transpired over the past few days following the publication of the misinformation advisory, and culminating today in the President's remarks about us," the Meta executive wrote. "I know our teams met today to better understand the scope of what the White House expects from us on misinformation going forward."

Murthy had just published an advisory in which he urged a "whole-of-society" effort to combat the "urgent threat to public health" posed by "health misinformation," possibly including "appropriate legal and regulatory measures." Biden's homicide charge came the next day, and Meta was keen to address the president's concerns by cracking down on speech that offended him.

The email, which was recently disclosed during discovery in a federal lawsuit that Louisiana Attorney General Jeff Landry and Missouri Attorney General Eric Schmitt filed in May, vividly illustrates how the Biden administration engages in censorship by proxy, pressuring social media platforms to implement speech restrictions that would be flagrantly unconstitutional if the government tried to impose them directly. Landry and Schmitt, both Republicans, argue that such pressure violates the First Amendment.

"Having threatened and cajoled social-media platforms for years to censor viewpoints and speakers disfavored by the Left," the lawsuit says, "senior government officials in the Executive Branch have moved into a phase of open collusion with social-media companies to suppress disfavored speakers, viewpoints, and content on social media platforms under the Orwellian guise of halting so-called 'disinformation,' 'misinformation,' and 'malinformation.'…As a direct result of these actions, there has been an unprecedented rise in censorship and suppression of free speech—including core political speech—on social-media platforms."

Landry and Schmitt reiterate that point in a "joint statement of discovery disputes" they filed yesterday in the U.S. District Court for the Western District of Louisiana. "Under the First Amendment, the federal Government should have no role in policing private speech or picking winners and losers in the marketplace of ideas," they say. "But that is what federal officials are doing, on a massive scale—a scale whose full scope and impact [are] yet to be determined."

So far, Schmitt reports, documents produced by the government in response to a court order have identified 45 federal officials who "communicate with social media platforms about 'misinformation' and censorship." Schmitt and Landry think many other officials are involved in "a vast 'Censorship Enterprise' across a multitude of federal agencies," and they are seeking additional documents to confirm that suspicion.

In response to inquiries, Landry and Schmitt say, "Facebook and Instagram identified 32 federal officials, including eight current and former White House officials," who have contacted them regarding "misinformation and censorship of social-media content." YouTube "identified 11 federal officials, including five current and former White House officials," while Twitter "identified nine federal officials, including at least one White House official."

Judging from the examples that Schmitt cites, the tenor of these communications has been cordial and collaborative. The social media companies are at pains to show that they share the government's goals, which is precisely the problem. Given the broad powers that the federal government has to make life difficult for these businesses through public criticism, litigation, regulation, and legislation, the Biden administration's "asks" for stricter moderation are tantamount to commands. The administration expects obsequious compliance, and that is what it gets.

Shortly after sending the July 16 email to Murthy, according to Landry and Schmitt's joint statement, the same Meta executive sent the surgeon general a text message. "It's not great to be accused of killing people," he said, adding that he was "keen to find a way to deescalate and work together collaboratively."

And so he did. "Thanks again for taking the time to meet earlier today," the Meta executive says in a July 23, 2021, email to an official at the Department of Health and Human Services.* "I wanted to make sure you saw the steps we took just this past week to adjust policies on what we are removing with respect to misinformation, as well as steps taken to further address the 'disinfo dozen.'" He brags that Meta has removed objectionable pages, groups, and Instagram accounts; taken steps to make several pages and profiles "more difficult to find on our platform"; and "expanded the group of false claims that we remove to keep up with recent trends."

Twitter also was eager to fall in line. "I'm looking forward to setting up regular chats," says an April 8, 2021, message from Twitter to the Centers for Disease Control and Prevention (CDC). "My team has asked for examples of problematic content so we can examine trends. All examples of misinformation are helpful, but in particular, if you have any examples of fraud—such as fraudulent covid cures, fraudulent vaccine cards, etc, that would be very helpful."

Twitter responded swiftly to the government's censorship suggestions. "Thanks so much for this," a Twitter official says in an April 16, 2021, email to the CDC. "We actioned (by labeling or removing) the Tweets in violation of our Rules." The message, which is headed "Request for problem accounts," is signed with "warmest" regards.

The government also got fast service from Instagram. In a July 20, 2021, email, Clarke Humphrey, digital director for the White House COVID-19 Response Team, requests the deletion of an Instagram parody of Anthony Fauci, Biden's top medical adviser. "Any way we can get this pulled down?" Humphrey asks. "It is not actually one of ours." Less than a minute later comes the answer: "Yep, on it!"

Twitter's desperation to please the Biden administration likewise went beyond deleting specific messages. Landry and Schmitt note "internal Twitter communications" indicating that senior White House officials "specifically pressured Twitter to deplatform" anti-vaccine writer Alex Berenson, "which Twitter did." In an April 16, 2021, email about a "Twitter VaccineMisinfo Briefing" on Zoom, Deputy Assistant to the President Rob Flaherty tells colleagues that Twitter will inform "White House staff" about "the tangible effects seen from recent policy changes, what interventions are currently being implemented in addition to previous policy changes, and ways the White House (and our COVID experts) can partner in product work."

Like Twitter, Facebook was thirsty for government guidance. In a July 28, 2021, email to the CDC headed "FB Misinformation Claims_Help Debunking," a Facebook official says, "I have been talking about in addition to our weekly meetings, doing a monthly disinfo/debunking meeting, with maybe claim topics communicated a few days prior so that you can bring in the matching experts and chat casually for 30 minutes or so. Is that something you'd be interested in?" The CDC's response is enthusiastic: "Yes, we would love to do that."

The communications uncovered so far mainly involved anti-vaccine messages, many of which are verifiably false. But Americans have a First Amendment right to express their opinions, no matter how misguided or ill-informed. That does not mean social media platforms are obligated to host those opinions. To the contrary, they have a First Amendment right to exercise editorial discretion. But that's not what is really happening when their decisions are shaped by implicit or explicit threats from the government. Notwithstanding all the friendly words, Facebook et al. have strong incentives to cooperate with a government that otherwise might punish them in various ways.

Ostensibly, the Biden administration is merely asking social media companies to enforce their own rules. But those rules are open to interpretation, and the government is encouraging the companies to read them more broadly than they otherwise might.

Maybe Twitter would have banished Alex Berenson even if White House officials had not intervened, but maybe not. Multiply that question across the myriad moderation decisions that social media platforms make every day, and you have a situation where it is increasingly difficult to tell whether they are exercising independent judgment or taking orders from the government.

"Although a 'private entity is not ordinarily constrained by the First Amendment,'" Supreme Court Justice Clarence Thomas noted in a 2021 concurrence, "it is if the government coerces or induces it to take action the government itself would not be permitted to do, such as censor expression of a lawful viewpoint….The government cannot accomplish through threats of adverse government action what the Constitution prohibits it from doing directly." That is the gist of the argument that Landry and Schmitt are making in their lawsuit.

The danger posed by the Biden administration's creepy crusade against "misinformation" is magnified by its broad definition of that concept, which encompasses speech that the government deems "misleading," even when it is arguably or demonstrably true. "Claims can be highly misleading and harmful even if the science on an issue isn't yet settled," Murthy says, and "what counts as misinformation can change over time with new evidence and scientific consensus."

In other words, the "scientific consensus," however Murthy defines it, can be wrong, as illustrated by the federal government's ever-evolving advice about the utility of face masks in preventing COVID-19 transmission. The CDC initially dismissed the value of general masking, then embraced it as "the most important, powerful public health tool we have." More recently, it has conceded that commonly used cloth masks do little, if anything, to stop coronavirus transmission.

"Twitter's 'COVID-19 misleading information policy,' as of December 2021, noted that Twitter will censor (label or remove) speech claiming that 'face masks…do not work to reduce transmission or to protect against COVID-19,'" Schmitt says. "Other platforms had similar policies. Both Senator Rand Paul and Florida Governor Ron DeSantis were censored by Youtube for questioning the efficacy of masks." Twitter even removed a mask-skeptical tweet by Scott Atlas, a member of the Trump administration's coronavirus task force. But "now," Schmitt says, "a growing body of science shows that masks, especially cloth masks, are ineffective at stopping the spread of COVID-19, and can impose negative impacts on children."

Landry and Schmitt's lawsuit also notes Twitter's blocking of the New York Post's story about Hunter Biden's laptop, which was deemed "disinformation" prior to the 2020 presidential election but turned out to be accurate. Social media companies have made similarly questionable decisions regarding discussion of the COVID-19 "lab leak" theory, which remains contested but has not been disproven.

Even acting on their own, social media platforms are bound to make bad calls. But when the government demands that they all hew to an officially recognized "consensus," the threat to free inquiry and open debate is far graver.

*CORRECTION: The name of the HHS official is blacked out in the email obtained by Landry and Schmitt, so it's not clear whether the recipient was Murthy.