The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Fifth Circuit Panel Reconsidering Part of Its Missouri v. Biden Decision
[UPDATE: Don't blog when tired or in a hurry! I regret to say the original post erroneously said the Fifth Circuit granted rehearing en banc -- the panel just granted panel rehearing, and I've corrected the post accordingly. My apologies for the error.]
[UPDATE 9/27/23: It appears that the order granting rehearing was a result of a clerical error by the Fifth Circuit Clerk of Court's office; the grant has been withdrawn.]
Here's my post from Sept. 9 on the then-recent panel decision, which the panel is now reconsidering (thanks to Howard Bashman [How Appealing] for the pointer), though who knows whether this will be a major change or only a minor one. Note that the petition that the panel just granted was filed by the challengers (Missouri et al.), and argues that the panel erred in finding no First Amendment violation by the Cybersecurity and Infrastructure Security Agency and the State Department's Global Engagement Center.
[* * *]
In yesterday's decision in Missouri v. Biden, the Fifth Circuit (Judges Edith Clement, Jennifer Elrod, and Don Willett) held that the federal government violated the First Amendment by causing social media platforms to block posts on various topics (including "the COVID-19 lab-leak theory, pandemic lockdowns, vaccine side-effects, election fraud, and the Hunter Biden laptop story").
The court acknowledged that the First Amendment doesn't bar social media platforms from acting on their own to restrict user speech, since the First Amendment applies only to the government and not to private parties (including large corporations). But the court concluded that the First Amendment may be violated "when a private party is coerced or significantly encouraged by the government to such a degree that its 'choice'—which if made by the government would be unconstitutional—'must in law be deemed to be that of the State.' This is known as the close nexus test."
As to what constitutes "significant[] encouragement by the government" to restrict speech, the court held:
For encouragement, we read the law to require that a governmental actor exercise active, meaningful control over the private party's decision in order to constitute a state action. That reveals itself in (1) entanglement in a party's independent decision-making or (2) direct involvement in carrying out the decision itself. In any of those scenarios, the state has such a "close nexus" with the private party that the government actor is practically "responsible" for the decision, because it has necessarily encouraged the private party to act and, in turn, commandeered its independent judgment.
As to what constitutes "coerc[ion]," the court held:
For coercion, we ask if the government compelled the decision by, through threats or otherwise, intimating that some form of punishment will follow a failure to comply…. [T]o help distinguish permissible persuasion from impermissible coercion, we turn to the Second (and Ninth) Circuit's four-factor test. Again, honing in on whether the government "intimat[ed] that some form of punishment" will follow a "failure to accede," we parse the speaker's messages to assess the (1) word choice and tone, including the overall "tenor" of the parties' relationship; (2) the recipient's perception; (3) the presence of authority, which includes whether it is reasonable to fear retaliation; and (4) whether the speaker refers to adverse consequences.
Each factor, though, has important considerations to keep in mind. For word choice and tone, "[a]n interaction will tend to be more threatening if the official refuses to take 'no' for an answer and pesters the recipient until it succumbs." That is so because we consider the overall "tenor" of the parties' relationship. For authority, there is coercion even if the speaker lacks present ability to act so long as it can "reasonably be construed" as a threat worth heeding.
As for perception, it is not necessary that the recipient "admit that it bowed to government pressure," nor is it even "necessary for the recipient to have complied with the official's request"—"a credible threat may violate the First Amendment even if 'the victim ignores it, and the threatener folds his tent.'" Still, a message is more likely to be coercive if there is some indication that the party's decision resulted from the threat. Finally, as for adverse consequences, the government need not speak its threat aloud if, given the circumstances, it is fair to say that the message intimates some form of punishment. If these factors weigh in favor of finding the government's message coercive, the coercion test is met, and the private party's resulting decision is a state action.
(Note that there is a good deal of caselaw on the coercion side, but much less on the significant encouragement side. Courts have suggested in the past that significant encouragement, even when it's not coercive, may implicate the government in the encouraged parties' decision. But I know of few appellate cases that have actually applied this principle to invalidate government action, and those struck me as quite different in the nature of the government action involved. In this respect, this case seems to set an important new precedent, unless it's overturned by the Supreme Court.)
Applying the tests, the court held "that the White House, acting in concert with the Surgeon General's office, likely … coerced the platforms to make their moderation decisions by way of intimidating messages and threats of adverse consequences":
Generally speaking, officials from the White House and the Surgeon General's office had extensive, organized communications with platforms. They met regularly, traded information and reports, and worked together on a wide range of efforts. That working relationship was, at times, sweeping. Still, those facts alone likely are not problematic from a First-Amendment perspective. But, the relationship between the officials and the platforms went beyond that. In their communications with the platforms, the officials went beyond advocating for policies, or making no-strings-attached requests to moderate content….
We start with coercion. On multiple occasions, the officials coerced the platforms into direct action via urgent, uncompromising demands to moderate content. Privately, the officials were not shy in their requests—they asked the platforms to remove posts "ASAP" and accounts "immediately," and to "slow[] down" or "demote[]" content. In doing so, the officials were persistent and angry. When the platforms did not comply, officials followed up by asking why posts were "still up," stating (1) "how does something like [this] happen," (2) "what good is" flagging if it did not result in content moderation, (3) "I don't know why you guys can't figure this out," and (4) "you are hiding the ball," while demanding "assurances" that posts were being taken down.
And, more importantly, the officials threatened—both expressly and implicitly—to retaliate against inaction. Officials threw out the prospect of legal reforms and enforcement actions while subtly insinuating it would be in the platforms' best interests to comply. As one official put it, "removing bad information" is "one of the easy, low-bar things you guys [can] do to make people like me"—that is, White House officials—"think you're taking action."
That alone may be enough for us to find coercion. Like in Bantam Books v. Sullivan (1963), the officials here set about to force the platforms to remove metaphorical books from their shelves. It is uncontested that, between the White House and the Surgeon General's office, government officials asked the platforms to remove undesirable posts and users from their platforms, sent follow-up messages of condemnation when they did not, and publicly called on the platforms to act. When the officials' demands were not met, the platforms received promises of legal regime changes, enforcement actions, and other unspoken threats. That was likely coercive.
That being said, even though coercion may have been readily apparent here, we find it fitting to consult the Second Circuit's four-factor test for distinguishing coercion from persuasion. In asking whether the officials' messages can "reasonably be construed" as threats of adverse consequences, we look to (1) the officials' word choice and tone; (2) the recipient's perception; (3) the presence of authority; and (4) whether the speaker refers to adverse consequences.
First, the officials' demeanor. We find, like the district court, that the officials' communications—reading them in "context, not in isolation"—were on-the-whole intimidating. In private messages, the officials demanded "assurances" from the platforms that they were moderating content in compliance with the officials' requests, and used foreboding, inflammatory, and hyper-critical phraseology when they seemingly did not, like "you are hiding the ball," you are not "trying to solve the problem," and we are "gravely concerned" that you are "one of the top drivers of vaccine hesitancy." In public, they said that the platforms were irresponsible, let "misinformation [] poison" America, were "literally costing … lives," and were "killing people." While officials are entitled to "express their views and rally support for their positions," the "word choice and tone" applied here reveals something more than mere requests….
[M]any of the officials' asks were "phrased virtually as orders," like requests to remove content "ASAP" or "immediately." The threatening "tone" of the officials' commands, as well as of their "overall interaction" with the platforms, is made all the more evident when we consider the persistent nature of their messages. Generally speaking, "[a]n interaction will tend to be more threatening if the official refuses to take 'no' for an answer and pesters the recipient until it succumbs." Urgency can have the same effect. See Backpage.com v. Dart (7th Cir. 2015) (finding the "urgency" of a sheriff's letter, including a follow-up, "imposed another layer of coercion due to its strong suggestion that the companies could not simply ignore" the sheriff). Here, the officials' correspondences were both persistent and urgent. They sent repeated follow-up emails, whether to ask why a post or account was "still up" despite being flagged or to probe deeper into the platforms' internal policies. On the latter point, for example, one official asked at least twelve times for detailed information on Facebook's moderation practices and activities.
Admittedly, many of the officials' communications are not by themselves coercive. But, we do not take a speaker's communications "in isolation." Instead, we look to the "tenor" of the parties' relationship and the conduct of the government in context. Given their treatment of the platforms as a whole, we find the officials' tone and demeanor was coercive, not merely persuasive.
Second, we ask how the platforms perceived the communications. Notably, "a credible threat may violate the First Amendment even if 'the victim ignores it, and the threatener folds his tent.'" Still, it is more likely to be coercive if there is some evidence that the recipient's subsequent conduct is linked to the official's message…. Here, there is plenty of evidence—both direct and circumstantial, considering the platforms' contemporaneous actions—that the platforms were influenced by the officials' demands.
When officials asked for content to be removed, the platforms took it down. And, when they asked for the platforms to be more aggressive, "interven[e]" more often, take quicker actions, and modify their "internal policies," the platforms did—and they sent emails and assurances confirming as much. For example, as was common after public critiques, one platform assured the officials they were "committed to addressing the [] misinformation that you've called on us to address" after the White House issued a public statement.
Another time, one company promised to make an employee "available on a regular basis" so that the platform could "automatically prioritize" the officials' requests after criticism of the platform's response time. Yet another time, a platform said it was going to "adjust [its] policies" to include "specific recommendations for improvement" from the officials, and emailed as much because they "want[ed] to make sure to keep you informed of our work on each" change. Those are just a few of many examples of the platforms changing—and acknowledging as much—their course as a direct result of the officials' messages.
Third, we turn to whether the speaker has "authority over the recipient." Here, that is clearly the case. As an initial matter, the White House wields significant power in this Nation's constitutional landscape. It enforces the laws of our country, and—as the head of the executive branch—directs an army of federal agencies that create, modify, and enforce federal regulations…. At the very least, as agents of the executive branch, the officials' powers track somewhere closer to those of the commission in Bantam Books—they were legislatively given the power to "investigate violations[] and recommend prosecutions."
But, authority over the recipient does not have to be a clearly-defined ability to act under the close nexus test. Instead, a generalized, non-descript means to punish the recipient may suffice depending on the circumstances…. [A] message may be "inherently coercive" if, for example, it was conveyed by a "law enforcement officer" or "penned by an executive official with unilateral power." In other words, a speaker's power may stem from an inherent authority over the recipient. That reasoning is likely applicable here, too, given the officials' executive status.
It is not even necessary that an official have direct power over the recipient. Even if the officials "lack[ed] direct authority" over the platforms, the cloak of authority may still satisfy the authority prong….
True, the government can "appeal[]" to a private party's "interest in avoiding liability" so long as that reference is not meant to intimidate or compel. But here, the officials' demands that the platforms remove content and change their practices were backed by the officials' unilateral power to act or, at the very least, their ability to inflict "some form of punishment" against the platforms. Therefore, the authority factor weighs in favor of finding the officials' messages coercive.
Finally, and "perhaps most important[ly]," we ask whether the speaker "refers to adverse consequences that will follow if the recipient does not accede to the request." Explicit and subtle threats both work—"an official does not need to say 'or else' if a threat is clear from the context." Again, this factor is met.
Here, the officials made express threats and, at the very least, leaned into the inherent authority of the President's office. The officials made inflammatory accusations, such as saying that the platforms were "poison[ing]" the public, and "killing people." The platforms were told they needed to take greater responsibility and action. Then, they followed their statements with threats of "fundamental reforms" like regulatory changes and increased enforcement actions that would ensure the platforms were "held accountable." But, beyond express threats, there was always an "unspoken 'or else.'" After all, as the executive of the Nation, the President wields awesome power. The officials were not shy to allude to that understanding native to every American—when the platforms faltered, the officials warned them that they were "[i]nternally … considering our options on what to do," their "concern[s] [were] shared at the highest (and I mean highest) levels of the [White House]," and the "President has long been concerned about the power of large social media platforms." …
Given all of the above, we are left only with the conclusion that the officials' statements were coercive….
And the court held that the White House and the Surgeon General's office "also significantly encouraged the platforms to moderate content by exercising active, meaningful control over those decisions" by "entangl[ing] themselves in the platforms' decision-making processes, namely their moderation policies"—an independent basis, in the court's view, for treating the government's action as state action, even apart from coercion:
The officials had consistent and consequential interaction with the platforms and constantly monitored their moderation activities. In doing so, they repeatedly communicated their concerns, thoughts, and desires to the platforms. The platforms responded with cooperation—they invited the officials to meetings, roundups, and policy discussions. And, more importantly, they complied with the officials' requests, including making changes to their policies.
The officials began with simple enough asks of the platforms—"can you share more about your framework here" or "do you have data on the actual number" of removed posts? But, the tenor later changed. When the platforms' policies were not performing to the officials' liking, they pressed for more, persistently asking what "interventions" were being taken, "how much content [was] being demoted," and why certain posts were not being removed.
Eventually, the officials pressed for outright change to the platforms' moderation policies. They did so privately and publicly. One official emailed a list of proposed changes and said, "this is circulating around the building and informing thinking." The White House Press Secretary called on the platforms to adopt "proposed changes" that would create a more "robust enforcement strategy." And the Surgeon General published an advisory calling on the platforms to "[e]valuate the effectiveness of [their] internal policies" and implement changes. Beyond that, they relentlessly asked the platforms to remove content, even giving reasons as to why such content should be taken down. They also followed up to ensure compliance and, when met with a response, asked how the internal decision was made.
And, the officials' campaign succeeded. The platforms, in capitulation to state-sponsored pressure, changed their moderation policies. The platforms explicitly recognized that. For example, one platform told the White House it was "making a number of changes"—which aligned with the officials' demands—as it knew its "position on [misinformation] continues to be a particular concern" for the White House. The platform noted that, in line with the officials' requests, it would "make sure that these additional [changes] show results—the stronger demotions in particular should deliver real impact." Similarly, one platform emailed a list of "commitments" after a meeting with the White House which included policy "changes" "focused on reducing the virality" of anti-vaccine content even when it "does not contain actionable misinformation." Relatedly, one platform told the Surgeon General that it was "committed to addressing the [] misinformation that you've called on us to address," including by implementing a set of jointly proposed policy changes from the White House and the Surgeon General.
Consequently, it is apparent that the officials exercised meaningful control—via changes to the platforms' independent processes—over the platforms' moderation decisions. By pushing changes to the platforms' policies through their expansive relationship with and informal oversight over the platforms, the officials imparted a lasting influence on the platforms' moderation decisions without the need for any further input. In doing so, the officials ensured that any moderation decisions were not made in accordance with independent judgments guided by independent standards. Instead, they were encouraged by the officials' imposed standards.
In sum, we find that the White House officials, in conjunction with the Surgeon General's office, coerced and significantly encouraged the platforms to moderate content. As a result, the platforms' actions "must in law be deemed to be that of the State."
The court also found impermissible coercion and significant encouragement as to certain FBI requests:
We start with coercion. Similar to the White House, Surgeon General, and CDC officials, the FBI regularly met with the platforms, shared "strategic information," frequently alerted the social media companies to misinformation spreading on their platforms, and monitored their content moderation policies. But, the FBI went beyond that—they urged the platforms to take down content. Turning to the Second Circuit's four-factor test, we find that those requests were coercive. [Details omitted. -EV] …
We also find that the FBI likely significantly encouraged the platforms to moderate content by entangling themselves in the platforms' decision-making processes. Beyond taking down posts, the platforms also changed their terms of service in concert with recommendations from the FBI. For example, several platforms "adjusted" their moderation policies to capture "hack-and-leak" content after the FBI asked them to do so (and followed up on that request). Consequently, when the platforms subsequently moderated content that violated their newly modified terms of service (e.g., the results of hack-and-leaks), they did not do so via independent standards. Instead, those decisions were made subject to commandeered moderation policies.
In short, when the platforms acted, they did so in response to the FBI's inherent authority and based on internal policies influenced by FBI officials. Taking those facts together, we find the platforms' decisions were significantly encouraged and coerced by the FBI.
As to the CDC, the court held that, "although not plainly coercive, the CDC officials likely significantly encouraged the platforms' moderation decisions, meaning they violated the First Amendment":
We start with coercion. Here, like the other officials, the CDC regularly met with the platforms and frequently flagged content for removal. But, unlike the others, the CDC's requests for removal were not coercive—they did not ask the platforms in an intimidating or threatening manner, do not possess any clear authority over the platforms, and did not allude to any adverse consequences. Consequently, we cannot say the platforms' moderation decisions were coerced by CDC officials.
The same, however, cannot be said for significant encouragement. Ultimately, the CDC was entangled in the platforms' decision-making processes.
The CDC's relationship with the platforms began by defining—in "Be On the Lookout" meetings—what was (and was not) "misinformation" for the platforms. Specifically, CDC officials issued "advisories" to the platforms warning them about misinformation "hot topics" to be wary of. From there, CDC officials instructed the platforms to label disfavored posts with "contextual information," and asked for "amplification" of approved content. That led to CDC officials becoming intimately involved in the various platforms' day-to-day moderation decisions. For example, they communicated about how a platform's "moderation team" reached a certain decision, how it was "approach[ing] adding labels" to particular content, and how it was deploying manpower. Consequently, the CDC garnered an extensive relationship with the platforms.
From that relationship, the CDC, through authoritative guidance, directed changes to the platforms' moderation policies. At first, the platforms asked CDC officials to decide whether certain claims were misinformation. In response, CDC officials told the platforms whether such claims were true or false, and whether information was "misleading" or needed to be addressed via CDC-backed labels. That back-and-forth then led to "[s]omething more."
Specifically, CDC officials directly impacted the platforms' moderation policies. For example, in meetings with the CDC, the platforms actively sought to "get into [] policy stuff" and run their moderation policies by the CDC to determine whether the platforms' standards were "in the right place." Ultimately, the platforms came to heavily rely on the CDC. They adopted rule changes meant to implement the CDC's guidance. As one platform said, they "were able to make [changes to the 'misinfo policies'] based on the conversation [they] had last week with the CDC," and they "immediately updated [their] policies globally" following another meeting. And, those adoptions led the platforms to make moderation decisions based entirely on the CDC's say-so—"[t]here are several claims that we will be able to remove as soon as the CDC debunks them; until then, we are unable to remove them." That dependence, at times, was total. For example, one platform asked the CDC how it should approach certain content and even asked the CDC to double check and proofread its proposed labels.
Viewing these facts, we are left with no choice but to conclude that the CDC significantly encouraged the platforms' moderation decisions. Unlike in Blum v. Yaretsky (1982), the platforms' decisions were not made by independent standards, but instead were marred by modification from CDC officials. Thus, the resulting content moderation, "while not compelled by the state, was so significantly encouraged, both overtly and covertly" by CDC officials that those decisions "must in law be deemed to be that of the state."
But the court concluded that, as to the National Institute of Allergy and Infectious Diseases, the State Department, and the Cybersecurity and Infrastructure Security Agency, "there was not, at this stage, sufficient evidence to find that it was likely these groups coerced or significantly encouraged the platforms":
For the NIAID officials, it is not apparent that they ever communicated with the social-media platforms. Instead, the record shows, at most, that public statements by Director Anthony Fauci and other NIAID officials promoted the government's scientific and policy views and attempted to discredit opposing ones—quintessential examples of government speech that do not run afoul of the First Amendment….
As for the State Department, while it did communicate directly with the platforms, so far there is no evidence these communications went beyond educating the platforms on "tools and techniques" used by foreign actors. There is no indication that State Department officials flagged specific content for censorship, suggested policy changes to the platforms, or engaged in any similar actions that would reasonably bring their conduct within the scope of the First Amendment's prohibitions. After all, their messages do not appear coercive in tone, did not refer to adverse consequences, and were not backed by any apparent authority. And, per this record, those officials were not involved to any meaningful extent with the platforms' moderation decisions or standards.
Finally, although CISA flagged content for social-media platforms as part of its switchboarding operations, based on this record, its conduct falls on the "attempts to convince," not "attempts to coerce," side of the line. There is not sufficient evidence that CISA made threats of adverse consequences—explicit or implicit—to the platforms for refusing to act on the content it flagged. Nor is there any indication CISA had power over the platforms in any capacity, or that their requests were threatening in tone or manner. Similarly, on this record, their requests—although certainly amounting to a non-trivial level of involvement—do not equate to meaningful control. There is no plain evidence that content was actually moderated per CISA's requests or that any such moderation was done subject to non-independent standards….
The court "emphasize[d] the limited reach of [its] decision":
We do not uphold the injunction against all the officials named in the complaint. Indeed, many of those officials were permissibly exercising government speech, "carrying out [their] responsibilities," or merely "engaging in [a] legitimate [] action." That distinction is important because the state-action doctrine is vitally important to our Nation's operation—by distinguishing between the state and the People, it promotes "a robust sphere of individual liberty." … If just any relationship with the government "sufficed to transform a private entity into a state actor, a large swath of private entities in America would suddenly be turned into state actors and be subject to a variety of constitutional constraints on their activities." So, we do not take our decision today lightly.
But, the Supreme Court has rarely been faced with a coordinated campaign of this magnitude orchestrated by federal officials that jeopardized a fundamental aspect of American life. Therefore, the district court was correct in its assessment—"unrelenting pressure" from certain government officials likely "had the intended result of suppressing millions of protected free speech postings by American citizens."
And the court held that the district court injunction was overbroad:
[Parts of the injunction] prohibit the officials from engaging in, essentially, any action "for the purpose of urging, encouraging, pressuring, or inducing" content moderation. But "urging, encouraging, pressuring" or even "inducing" action does not violate the Constitution unless and until such conduct crosses the line into coercion or significant encouragement….
[Certain other] provisions likewise may be unnecessary to ensure Plaintiffs' relief. A government actor generally does not violate the First Amendment by simply "following up with social-media companies" about content-moderation, "requesting content reports from social-media companies" concerning their content-moderation, or asking social media companies to "Be on The Lookout" for certain posts….
These provisions are vague as well. There would be no way for a federal official to know exactly when his or her actions cross the line from permissibly communicating with a social-media company to impermissibly "urging, encouraging, pressuring, or inducing" them "in any way." …
Finally, [one other] prohibition—which bars the officials from "collaborating, coordinating, partnering, switchboarding, and/or jointly working with the Election Integrity Partnership, the Virality Project, the Stanford Internet Observatory, or any like project or group" to engage in the same activities the officials are proscribed from doing on their own—may implicate private, third-party actors that are not parties in this case and that may be entitled to their own First Amendment protections. The provision also fails to identify the specific parties that are subject to the prohibitions, and "exceeds the scope of the parties' presentation." …
That leaves [one remaining provision], which bars the officials from "threatening, pressuring, or coercing social-media companies in any manner to remove, delete, suppress, or reduce posted content of postings containing protected free speech." But, those terms could also capture otherwise legal speech. So, the injunction's language must be further tailored to exclusively target illegal conduct and provide the officials with additional guidance or instruction on what behavior is prohibited….[It] is MODIFIED to state:
Defendants, and their employees and agents, shall take no actions, formal or informal, directly or indirectly, to coerce or significantly encourage social-media companies to remove, delete, suppress, or reduce, including through altering their algorithms, posted social-media content containing protected free speech. That includes, but is not limited to, compelling the platforms to act, such as by intimating that some form of punishment will follow a failure to comply with any request, or supervising, directing, or otherwise meaningfully controlling the social-media companies' decision-making processes.
Under the modified injunction, the enjoined Defendants cannot coerce or significantly encourage a platform's content-moderation decisions. Such conduct includes threats of adverse consequences—even if those threats are not verbalized and never materialize—so long as a reasonable person would construe a government's message as alluding to some form of punishment. That, of course, is informed by context (e.g., persistent pressure, perceived or actual ability to make good on a threat). The government cannot subject the platforms to legal, regulatory, or economic consequences (beyond reputational harms) if they do not comply with a given request. The enjoined Defendants also cannot supervise a platform's content moderation decisions or directly involve themselves in the decision itself. Social-media platforms' content-moderation decisions must be theirs and theirs alone. This approach captures illicit conduct, regardless of its form….
Note that, when a court of appeals strikes down a federal statute, and the federal government then asks the Supreme Court to review the matter, the Court is very likely to say yes. The Court's view is that the judiciary may properly tell Congress that it can't do something—but if that's done, that should be the province of the Supreme Court, and not one of the lower courts. I expect the Justices would take the same view of an injunction that orders the President not to do things; if the Solicitor General seeks review by the Court, the Court is likely to agree to hear the matter.
Editor's Note: We invite comments and request that they be civil and on-topic. We do not moderate or assume any responsibility for comments, which are owned by the readers who post them. Comments do not represent the views of Reason.com or Reason Foundation. We reserve the right to delete any comment for any reason at any time. Comments may only be edited within 5 minutes of posting. Report abuses.
I'm pretty sure it was panel rehearing, not en banc rehearing, no?
I don’t see anything wrong with the government screening social media posts for accuracy and tone.
That is of course if I can screen government communications for accuracy and tone.
Of course one of those is unconstitutional.
It's naive to claim that, when a company is being coerced by (say) the FBI, incessant badgering from another government agency is non-coercive just because that badgering falls short of coercion on its own. When it comes from another arm of the same executive branch, there's an obvious implication that the coercion applies across the whole set of concerted efforts toward the same end.
Most everyone agrees that a private company has the right to control what is said on its platform. Most everyone also agrees that the government cannot censor speech on those private platforms. The issue in dispute is at what point the level of government involvement in the censorship crosses the line from private company action to effectively government action.
The factual record strongly suggests the government's actions effectively crossed that line. Further, much of the censorship was done so that the message matched the CDC's message, which was demonstrably wrong on much of the covid science, especially the masking studies, vaccine effectiveness, and the value of vaccinating the young.
How do you figure?
How do you not figure?
The factual record strongly suggests the government actions effectively crossed that line.
"How do you figure?"
From the evidence presented in court.
I think it doesn't establish that. Cool debate.
Explain your reasoning why the evidence is insufficient.
Your conclusions are not interesting (unless maybe to your spouse) if not supported by analytic reasoning. Which sections of the court's analysis do you disagree with and why?
Your conclusions are not interesting (unless maybe to your spouse) if not supported by analytic reasoning.
This is also true of the 5C opinion. They don't lay a factual predicate; they just say 'sometimes you acted pretty entitled, even if the companies didn't always follow what you asked - COERCION!'
The district court and appeals panel both concluded that there was prima facie coercion by the government. What I take issue with is the appeals panel narrowing the injunction against certain agencies. Given that they're all part of the same executive branch, that seems akin to arguing that only the left hand was coercive in a strict sense, so the right hand should be free to keep acting along the same lines as the left.
Which right to content hosted on a social medium platform?
I have read several social medium platform terms of service/user agreements. They all stated that the content, of which the social medium platform has bailment, belongs not to the social medium platform but to a user that wants his message to be transported to another user of the platform.
Bailment of user content on a backend server of a common carrier of messages is not speech, and SCOTUS has never addressed whether a social medium platform is a common carrier of messages.
Netchoice LLC v. Moody, 546 F. Supp. 3d 1082, 1091 (N.D. Fla. 2021)
I am baffled by an assertion that alleges a service could partially constitute common carriage.
A social medium platform differs from an Internet email service only by having a niftier user interface, and the FCC has long held that an email service is a common carriage service.
A social medium platform comes under the standard definition of a telegraph system: a system that transmits a message electrically by wire or by wireless means.
Unfortunately, no party has ever presented SCOTUS with a question that asked whether a social medium platform is a common carrier of messages.
Title 47 distinguishes between a common carrier and a telecommunications carrier.
The First Amendment severely restricts the circumstances under which governments can tell social media platforms to remove content.
Still gibberish.
Your first bit of accurate legal analysis!
Both of those claims are utterly false.
The LinkedIn user agreement states the following.
LinkedIn seems to have bailment of user content that is stored on a LinkedIn server.
You quoted words that have literally nothing to do with the concept of bailment.
bailment
A 'bailment' is defined as a non-ownership transfer of possession. Under English common law, the right to possess a thing is separate and distinct from owning the thing. Interestingly, as a result of this distinction, in some jurisdictions, an owner of an object can steal their own property. In context, an owner who lends someone else an article, then secretly takes it back, can be stealing.
When a bailment is created, the article is said to have been 'bailed'. One who delivers the article is the bailor. One who receives a 'bailed' article is the bailee.
See e.g., Mack v. Davidson 391 N.Y.S.2d 497 (1977)
In some cases the court will simultaneously grant the petition for rehearing and issue a revised opinion correcting or clarifying the original, without changing the outcome. Basically saying "you're right, we should have addressed the binding precedent God v. Satan, but you still lose." The Fifth Circuit granted rehearing without opinion.
I'm just a layman and maybe I missed something; but it seems to me the court's "logic", while sensible enough as far as it goes, overlooks something important.
If the government wants to correct mistaken/misleading online posts, why don't they just join the online "discussion" and post what they think is wrong and what the government believes is correct? Just like non-government actors who disagree with what was said must do.
Why would the government be privately ("secretly") contacting the host of the allegedly incorrect/misleading posts and trying to get those posts removed? Is that approach not itself evidence of unconstitutional coercion?
No.
Any other trivial questions?
Yes. Any other trivially incorrect answers?
Anyone is of course free to contact a social media service privately and attempt to persuade them to remove a post. But the First Amendment restricts the government's ability to make threats in order to achieve that result.
Why is that considered trivial? We know that the response to bad speech is MORE speech.
Not sure this holds 100% true in a world of sealioning, gish gallops and bot farms.
Nor was that pleasant-sounding nostrum true in the past, or in the present, in cases where so-called bad speech—particularly defamations—inflicted actual damages which, "more speech," remains powerless to remedy.
The left have recently adopted the position that the response to bad speech is silencing it, because maybe somebody out there won't agree with the left about which speech is bad, if they get to hear it.
If the government wants to correct mistaken/misleading online posts, why don’t they just join the online “discussion” and post what they think is wrong and what the government believes is correct? Just like non-government actors who disagree with what was said must do.
Even corrected, misinformation can still misinform.
Why would the government be privately (“secretly”) contacting the host of the allegedly incorrect/misleading posts and trying to get those posts removed? Is that approach not itself evidence of unconstitutional coercion?
Convenience. I'm not sure the government was particularly concerned about keeping its actions secret here. The reason for messaging is just that companies are more responsive to "the government" than random individuals. Some of that is the air of authority even if it isn't backed by any threat of enforcement.
Though an automatic public record would probably have made them more restrained.
And that is bad because...
You think people being lied to, and believing those lies, is a good thing?
Surely not. I mean, if people doubt their faith because they saw a post denying that God exists, the damage can't be undone.
It’s vital to current Republican strategy.
Which batch of lies are you referring to
That HCQ or Ivermectin were effective treatments for covid (neither was), or
The misrepresentations by CDC etc, such as masking was effective, or that children being vaxed provided significant benefits or that the vax remained highly effective, or that vaxing provided better long term immunity than natural infection, etc.
The GOP's switch to full postmodern "nothing is inherently true" is amazing to see.
You believe a lot of things that are wrong. You also think misinformation is good.
Perhaps there is a connection here.
Sarcastr0 – care to point to a single item in that statement that is factually incorrect?
Are you even aware of the multitude of discredited masking studies still posted on the CDC website?
Right wingers howling denials do not equal, "discredited."
Also? Vaxing provided far better long-term immunity than natural infection, because vaxing was so much less likely to kill the patient in the short term.
Stephen Lathrop: "Also? Vaxing provided far better long-term immunity than natural infection,"
Lathrop – that statement is flat out wrong – thoroughly discredited. Quite frankly, it is astonishing that anyone would continue to believe that. Though in your defense, it fits right in with the frequent misinformation provided by the CDC.
Nobody here is disputing that the vaccine is safer than a natural infection, at least for groups at significant risk from Covid. Just whether it’s more effective.
If I’d had the choice between being vaccinated for Covid, or getting it the first time around, I’d have absolutely gone for the shot. I just happened to contract it before the vaccine was locally available for my cohort.
But, having already been subjected to that risk, and having gained the resulting immunity the hard way, it pissed me off royally to be told that I had to get a vaccination that was utterly redundant at that point, because the government was pretending that having had covid didn't confer meaningful immunity.
Brett Bellmore:
"Nobody here is disputing that the vaccine is safer than a natural infection, at least for groups at significant risk from Covid. Just whether it’s more effective."
Brett - that is a very important caveat, ignored by those who believe one size fits all. For the subgroup of the population at risk of adverse consequences from covid, the vaccine definitely provided significant benefits. However, for the majority of the population (80%+), the vaccine provided only marginal benefits or less in terms of lowering the risk of an adverse outcome from covid. As such, there was little to be gained in terms of risk reduction, while at the same time achieving a lower level of long-term immunity from a vaccine that has only short-term effectiveness.
a) Obviously
b) JEP41's claim was that the take-down requests were evidence of ill-intent since a rebuttal in the forum would be just as effective. This is false.
Since the federal government can't encourage certain moderation policies, they should just pass a law dictating those moderation policies. The 5th Circuit is apparently fine with that.
Public accommodation laws already exist.
There is, of course, a clear legal and ethical difference between the government saying, you may not exclude people from lunch counters on the basis of race, and telling them that they must exclude persons from the lunch counter due to being the "wrong" race.
Except the companies weren't actually moderating people based on their politics. They were moderating (albeit imperfectly) based on abuse, legality, and other viewpoint-neutral factors.
And certainly there are issues with a platform being compelled to broadcast certain speech, or losing the ability to protect its communities from abuse.
Except the companies weren’t actually moderating people based on their politics.
Objection. Assumes facts not in evidence.
Well, he's right, if you disregard the active, loud, public, direct threats by politicians to destroy Section 230, turning the platforms into a target-rich lawsuit environment, crushing their business model, and dragging them down to normal business levels, aka subtracting hundreds of billions of dollars in stock valuation.
Aside from that, yeah, it was all of these private organizations' free will.
Yeah, this did not happen. Some populist puffery from a few people is not something any sophisticated company's federal relations team would take as a threat.
One clue is that we have insider discussions about Twitter's decisions, and this did not come up.
You love this story. It is a lie you have told yourself.
Notably the idiots up in arms about the so-called coercion don't have a single word to say in objecting to this behavior.
I'm glad someone else brought this case up. Here is another link on it:
https://www.techdirt.com/2022/09/16/5th-circuit-rewrites-a-century-of-1st-amendment-law-to-argue-internet-companies-have-no-right-to-moderate/
For starters, this is the opposite of the issue of the government coercion concerning Covid.
That said, I thoroughly agree with this decision. Social media is now the town/public square, and any censorship by the owners means denying people their First Amendment rights.
As liberals (and libertarians) are so fond of saying: the constitution is a LIVING document, and must keep up with the times.
'Social media is now the town/public square,'
This seems like a fatal misconception, given that it is not truly 'public' inasmuch as they are all privately owned and subject to the whims and mismanagement of the owners. The way twitter has gone from a miracle communication tool that allowed for almost instant mass communication and mobilisation during emergencies to a possible impediment and danger during emergencies illustrates this.
The Communist GOP wants to nationalize social media as a public good.
Sarcastr0:
"The Communist GOP ..."
Good to know you no longer have any interest in being honest.
I have a question about the 2nd Circuit's 4-part test. Did the case which generated that test involve alleged coercion of a publisher protected by the 1A press freedom clause? If it did not, then I think use of that test in this instance is inappropriate.
Everyone is protected by the first amendment. It grants rights to "the people", not merely "the fourth estate" or some idiosyncratic definition of "publishers" that can only be found in your head.
wnoise, people define themselves as publishers by practicing publishing activities. It is those activities which are protected by 1A press freedom, not particular people as members of some privileged class.
Anyone is free to become a publisher. Relatively few people choose to do so, but everyone who does so enjoys the full protection of the 1A Press Freedom clause for their publishing activities.
But please note, Joe Keyboard commenting on the internet does not thereby turn himself into a publisher entitled to press freedom. It takes more than would-be contributions of news, opinions, or commentary to practice publishing.
Joe Keyboard typically practices activities which put him in the class of authors and contributors. That kind of activity is protected by 1A Speech Freedom, which covers a lot, but which does not extend to giving would-be contributors a power to compel publishers to accept publications the publishers prefer not to publish.
If you think you would prefer to be a publisher instead of a contributor, then go ahead and do it. The Constitution will protect your efforts just as it does the efforts of the New York Times.
To become a publisher, assemble an audience from a public which is free to accept or ignore your offerings. Provide technical means to accomplish reliable dissemination of published content. Arrange for regular provision of content to keep your audience interested. Accept liability for defamation, if some false assertion you happen to publish damages a third party. Do the work necessary to curate your audience, particularly with an eye to providing means to monetize your activities, usually by the sale of advertising to companies which are willing to pay for access to the attention of the audience your activities have built and managed.
If government tries to interfere with any of those activities necessary to accomplish publishing—if you practice them within the broadly privileged scope the Constitution allows—feel free to laugh at the government. You are even free to publish articles mocking government officials for their effrontery. And in the extremely unlikely event that government attempts to censor otherwise lawful content you wish to publish, you can take government to court and you will win. Those are rights you too enjoy under the Constitution, whether or not you choose to use them.
Publishing consists merely of making material available to the public. There are almost always tons of intermediaries in doing so. A publisher need not cut down trees and pulp them to make paper; need not run their own printing presses; need not have their own delivery service for their books or papers; need not own their own cables; need not run their own servers.
The distinctions you're trying to make simply don't exist in law outside your own mind.
Also, wnoise, you missed the point of my objection to the 4-part test. Most activities with which government agencies interest themselves are not expressive activities. They have to do instead with issues from worker safety, to building codes, to public health, or to any of a host of other day-to-day occurrences.
Most such occurrences do not enjoy the special enumerated protection of a clause in the Constitution. Thus, a government agency in a posture to coerce compliance from a business violating rules against non-expressive activities is in a notably different legal posture than a government agency trying to coerce 1A-protected expressive freedoms.
A publisher who receives stern warnings from a government functionary is notably less threatened than the manager of a pharmacy which gets a government demand for better performance on recording sales of narcotic medicines. The 4-part test might prove helpful to alleviate the plight of an unfairly-accused pharmacy manager. But 1A expressive freedoms guarantee that a publisher has neither need nor use for an irrelevant point-by-point evaluation of government conduct which is already explicitly prohibited without regard for the tone, persistence, or other subjective aspects of government interactions.
That's, of course, entirely wrong. The entire point of the test is to evaluate whether there's state action, which is only relevant when we're talking about constitutional rights.
Or talking about separation of powers. Or about federalism. Or about the major questions doctrine. Or talking about limitations on federal agencies to act without explicit empowerment by legislation, etc.
All sorts of constraints on federal agency activities could be circumvented if the agencies were empowered to use arbitrary enforcement by issuing threats. Do you know the answer to the question I first posed, whether the 4-part test came out of a case involving alleged federal threats against a publisher? What was the case?
While I am solidly on the anti-misinformation, or if you will, anti-stupid bandwagon, the reasoning shown in the excerpts from the decision seems solid.
The gov't should show restraint in all its communications.
I think they were not wrong to meet with social media companies and talk about things, expressing their opinion as it were.
But when it crosses over into any kind of threat, or something that can be construed as a threat, well, no, that is not right.
The people doing the communication are, well, people, but they need to understand who they represent, and the power they represent.
Restraint
According to Techdirt the court has now withdrawn this order granting rehearing. Kinda weird.
There has been (far too much) disinformation about Covid and other essential topics published on social media.
Formally outsourcing censorship to the social media companies (with still considerable government meddling) violates the spirit, if not the letter, of the First Amendment.
Nothing prevents government agencies from recommending that the platforms attach comments to posts the government deems disinformation.
Since the government must not force the social media companies to bear the extra costs of doing the due diligence in assessing egregiously false or misleading information, it would have to provide that itself. Which raises the question: are my fellow libertarians on the Conspiracy (all but those of the property-worshipping right) willing to provide the funds necessary for such due diligence, or would they, as usual, leave the field to those private actors who are better financed?
Turns out that the whole thing is a false alarm!
https://talkingpointsmemo.com/news/5th-circuit-withdraws-order-biden-social-media-clerk