Some Critics of the Ruling Against Biden's Censorship by Proxy Have a Beef With the 1st Amendment Itself

"Disinformation" researchers alarmed by the injunction against government meddling with social media content admire legal regimes that allow broad speech restrictions.

Some critics of last week's preliminary injunction in Missouri v. Biden, which bars federal officials from encouraging social media platforms to suppress constitutionally protected speech, reject the premise that such contacts amount to government-directed censorship. Other critics, especially researchers who focus on "disinformation" and hate speech, pretty much concede that point but see nothing troubling about it. From their perspective, the problem is that complying with the First Amendment means tolerating inaccurate, misleading, and hateful speech that endangers public health, democracy, and social harmony.

The day after Terry Doughty, a judge on the U.S. District Court for the Western District of Louisiana, issued the injunction, The New York Times gave voice to those concerns in a piece headlined "Disinformation Researchers Fret About Fallout From Judge's Order." According to the subhead, those researchers "said a restriction on government interaction with social media companies could impede efforts to curb false claims about vaccines and voter fraud."

That much is true by definition. Doughty's injunction generally prohibits various agencies and officials from "meeting with social-media companies," "specifically flagging content or posts," or otherwise "urging, encouraging, pressuring, or inducing" the "removal, deletion, suppression, or reduction of content containing protected free speech." The injunction also bars the defendants from "threatening, pressuring, or coercing social-media companies" toward that end and from "urging, encouraging, pressuring, or inducing" them to "change their guidelines for removing, deleting, suppressing, or reducing content containing protected free speech."

The injunction includes some potentially sweeping exceptions. Among other things, it does not apply to "postings involving criminal activity or criminal conspiracies"; "national security threats, extortion, or other threats"; posts that "threaten the public safety or security of the United States"; "foreign attempts to influence elections"; posts "intending to mislead voters about voting requirements and procedures"; or "criminal efforts to suppress voting," "provide illegal campaign contributions," or launch "cyber-attacks against election infrastructure."

Some of these categories are commodious enough to encompass constitutionally protected speech by American citizens. In particular, "national security" is a broad, ill-defined exception that might apply, for example, to information derived from classified sources or even to criticism of U.S. surveillance practices. The goal of resisting "foreign attempts to influence elections" can easily result in misidentification of Americans as Russian agents or mischaracterization of accurate reporting as foreign "disinformation."

But insofar as Doughty's order has bite, which it presumably does as it relates to COVID-19 "misinformation" and speech embracing Donald Trump's stolen-election fantasy, those anxious researchers surely are right that it "could impede efforts to curb false claims about vaccines and voter fraud." Notably, these critics take it for granted that preventing the government from demanding removal of disfavored content will have a substantial impact on the speech that platforms allow.

"Most misinformation or disinformation that violates social platforms' policies is flagged by researchers, nonprofits, or people and software at the platforms themselves," the Times notes. But "academics and anti-disinformation organizations often complained that platforms were unresponsive to their concerns." The paper reinforces that point with a quote from Viktorya Vilk, director for digital safety and free expression (!) at PEN America: "Platforms are very good at ignoring civil society organizations and our requests for help or requests for information or escalation of individual cases. They are less comfortable ignoring the government."

The reason social media companies are "less comfortable ignoring the government," of course, is that it exercises coercive power over them and could use that power to punish them for failing to censor speech it considers dangerous. In the 155-page opinion laying out the reasoning behind his injunction, Doughty notes implicit threats against recalcitrant platforms, including antitrust actions, new regulations, and increased civil liability for content posted by users.

Doughty cites myriad communications that show administration officials expected platforms to promptly comply with the government's censorship "requests," which they typically did, and repeatedly complained when companies were less than fully cooperative. He emphasizes how keen Facebook et al. were to assuage President Joe Biden's anger at moderation practices that he said were "killing people."

The major platforms eagerly joined what Surgeon General Vivek Murthy described as a "whole-of-society" effort to combat the "urgent threat to public health" posed by "health misinformation," which he said might include "legal and regulatory measures." It beggars belief to suppose that the threat of such measures played no role in the platforms' responses to the administration's demands.

As the fretful researchers quoted by the Times see it, that is all as it should be. "Several disinformation researchers worried that the ruling could give cover for social media platforms, some of which have already scaled back their efforts to curb misinformation, to be even less vigilant before the 2024 election," the paper reports. Again, that concern assumes that the interactions covered by Doughty's injunction resulted in stricter rules and more aggressive enforcement, meaning less speech than otherwise would have been allowed.

The Times paraphrases Bond Benton, an associate professor of communication at Montclair State University, who worries that Doughty's ruling "carried a message that misinformation qualifies as speech and its removal as the suppression of speech." As usual, the Times glides over disputes about what qualifies as "misinformation," which according to the Biden administration includes truthful content that it considers misleading or unhelpful. But since even a demonstrably false assertion "qualifies as speech" under the First Amendment, the "message" that troubles Benton is an accurate statement of constitutional law. That does not mean platforms cannot decide for themselves what content they are willing to host, but it does mean the government should not try to dictate such decisions.

The concerns expressed by Doughty's critics go beyond health-related and election-related "misinformation," and they go beyond the soundness of this particular ruling. In an interview with the Times, Imran Ahmed, chief executive of the Center for Countering Digital Hate, complained that the U.S. takes a "particularly fangless" approach to dangerous content compared with places like Australia and the European Union. Those comparisons are telling.

Australia's Online Safety Act empowers regulators to order removal of "illegal and restricted content," including images and speech classified as "cyberbullying" and "content that is inappropriate for children, such as high impact violence and nudity." Internet service providers that do not comply with complaint-triggered takedown orders within 24 hours are subject to civil penalties. The government also can order ISPs to block access to "material depicting, promoting, inciting or instructing in abhorrent violent conduct" for up to three months, after which the order can be renewed indefinitely.

Freedom House notes that Australia's law includes "no requirement for the eSafety Commissioner to give reasons for removal notices and provides no opportunity for users to respond to complaints." The organization adds that "civil society groups, tech companies, and other commentators have raised concerns about the law, including its speedy takedown requirements and its potential disproportionate effect on marginalized groups, such as sex workers, sex educators, LGBT+ people, and artists."

Australia's scheme plainly restricts or prohibits speech that would be constitutionally protected in the United States. Likewise the European Union's Digital Services Act, which covers "illegal content," a category that is defined broadly to include anything that runs afoul of a member nation's speech restrictions. E.U. countries such as France and Germany prohibit several types of speech that are covered by the First Amendment, including Holocaust denial, disparagement of minority groups, and promotion of racist ideologies.

These are the models that Ahmed thinks the U.S. should be following. "It's bananas that you can't show a nipple on the Super Bowl but Facebook can still broadcast Nazi propaganda, empower stalkers and harassers, undermine public health and facilitate extremism in the United States," he told the Times. "This court decision further exacerbates that feeling of impunity social media companies operate under, despite the fact that they are the primary vector for hate and disinformation in society."

Critics like Ahmed, in short, do not merely object to Doughty's legal analysis; they have a beef with the First Amendment itself, which allows Americans to express all sorts of potentially objectionable opinions. If you value that freedom, you probably consider it a virtue of the American legal system. But if your priority is eliminating "hate and disinformation," the First Amendment is, at best, an inconvenient obstacle.