How Government Officials Bully Social Media Companies Into Censorship
A new Cato report sheds light on "jawboning," or attempts by state actors "to sway the decisions of private platforms and limit the publication of disfavored speech."
In July 2021, President Joe Biden was asked by a reporter whether he had any message for platforms like Facebook. "They're killing people," he replied. "The only pandemic we have is among the unvaccinated. And they're killing people."
His press secretary at the time, Jen Psaki, and legions of Democrats rushed to his defense, saying Biden was referring to so-called "misinformation" spread on the platform by the "disinformation dozen," the 12 accounts deemed responsible for the vast majority of the platform's vaccine-skeptical content. But Biden, Psaki, and others in the administration have frequently used White House podiums to make bold and inexact claims about the harm posed by social media companies, either implicitly suggesting or more explicitly demanding that these companies change their content moderation practices in line with the administration's preferences.
"Facebook needs to move more quickly to remove harmful violative posts," Psaki said from her official perch, adding that "you shouldn't be banned from one platform and not others for providing misinformation," as if one platform has any control over the decisions made by the other.
This type of government pressure, along with "bullying, threatening, and cajoling" that attempts "to sway the decisions of private platforms and limit the publication of disfavored speech," is known as jawboning, and it is the subject of a new report by Will Duffield, a policy analyst at the Cato Institute. "Left unchecked, [jawboning] threatens to become normalized as an extraconstitutional method of speech regulation," he warns. More:
"Jawboning occurs when a government official threatens to use his or her power—be it the power to prosecute, regulate, or legislate—to compel someone to take actions that the state official cannot. Jawboning is dangerous because it allows government officials to assume powers not granted to them by law."
In summer 2021, for example, "Surgeon General Vivek Murthy issued an advisory on health misinformation, including eight guidelines for platforms," following the Psaki and Biden comments. "On its own, the advisory would have been inoffensive, but statements by other members of the administration suggested sanctions for noncompliant platforms," writes Duffield.
"White House communications director Kate Bedingfield completed the jawboning effort during a Morning Joe interview. Prompted by a question about getting rid of Section 230, she replied, 'we're reviewing that, and certainly they should be held accountable, and I think you've heard the president speak very aggressively about this …' By gesturing at changes to the intermediary liability protections that social media platforms rely on, Bedingfield added a vague threat to the administration's demands. …
"By raising the specter of changes to, or the repeal of, Section 230, the Biden administration made a roundabout threat. Repealing Section 230 would not make vaccine misinformation unlawful, but it would harm Facebook by exposing it to litigation over its users' speech. By demanding the removal of misinformation and threatening repeal, the administration sought to bully Facebook into removing speech that the government couldn't touch."
Duffield has worked to track prominent examples of jawboning (database found here, replete with examples of censorship attempts by politicians of both parties). Though "not every demand is paired with a threat," he writes, "all the demands are made in the course of discussions about potential social media regulation."
"In general, we know that a lot of jawboning happens behind closed doors," Duffield tells Reason. "There were, prior to 2016, a couple of high-profile cases like Wikileaks," he notes, "but the current era of normalized platform jawboning really begins after the 2016 election." He highlights Sen. Dianne Feinstein's (D–Calif.) social media crackdown attempts on purported Russian disinformation back in 2017, as well as the possibility that Twitter's decision to limit the circulation of the Hunter Biden laptop story was due to how the company had come under fire, trotted before Congress, and subsequently strengthened their hacked materials policy. "If they didn't take that down, and it turns out to be a foreign op, and it changes the course of the election, they're going to be right back testifying in front of Congress, hammered with regulation and fines," noted disinformation researcher Clint Watts, according to the Cato report.
Some platforms have been broadly resistant to government officials' demands, whether explicit or implied. The cloud-based instant messaging service Telegram, Duffield notes, is "notorious for ignoring both requests and court orders," but this is probably due to the platform not being based in the U.S.; it thus has less reason to comply with U.S. laws than platforms like Facebook and Twitter. (Telegram, it's worth noting, is beating out competitors like Facebook Messenger and WhatsApp in much of Eastern Europe and the Middle East, including, interestingly, in both Ukraine and Russia.)
But Telegram's case is sadly an exception to the rule. Whether it's congressional staffers pressuring social media employees to make certain content moderation calls behind closed doors or sitting U.S. senators asking tech CEOs to testify before them—and sit pretty for their grandstanding and hectoring—"it is not the job of Congress to oversee, second guess, or direct the decisions of private intermediaries," writes Duffield. "Such oversight presumes a role in speech regulation that the Constitution specifically denies Congress."
It's not just Section 230 and changes to liability protections that members of Congress and Biden administration officials threaten companies with. "While some changes to Section 230 would change how platforms moderate speech, antitrust would harm or dismember the noncompliant firm," says Duffield.
This may well be the disturbing bipartisan future of jawboning.