When May Law Require Social Media Platforms to Disclose Basis for Moderation Decisions?
From the majority in Moody v. NetChoice, LLC:
The laws, from Florida and Texas, restrict the ability of social-media platforms to control whether and how third-party posts are presented to other users … [including by] requir[ing] a platform to provide an individualized explanation to a user if it removes or alters her posts….
Analyzing whether these requirements are sound, the majority held, "means asking," as to each kind of content moderation decision, "whether the required disclosures unduly burden" the platforms' own expression:
[R]equirements of that kind violate the First Amendment if they unduly burden expressive activity. See Zauderer v. Office of Disciplinary Counsel of Supreme Court of Ohio (1985). So our explanation of why Facebook and YouTube are engaged in expression when they make content-moderation choices in their main feeds should inform the courts' further consideration of that issue.
For more on that "main feeds" question, and on the Court's not deciding the First Amendment questions raised by any of the platforms' other functions, see this post. As to the Zauderer "unduly burden expressive activity" standard, especially as applied outside the original Zauderer context of commercial advertising, see NIFLA v. Becerra (2018).
All this suggests that the individualized-explanation requirements are more likely to be invalid as to decisions about what to include in the "main feeds," and more likely to be valid as to decisions about whether to delete a post outright, or ban a user outright. But even that is not entirely clear. For a thoughtful, detailed treatment of the laws' practical effects (which is what the majority seems to be calling for), see Daphne Keller's Platform Transparency and the First Amendment article.
Justice Thomas, writing alone, argued in favor of greater protection against speech compulsions generally:
I think we should reconsider Zauderer and its progeny. "I am skeptical of the premise on which Zauderer rests—that, in the commercial speech context, the First Amendment interests implicated by disclosure requirements are substantially weaker than those at stake when speech is actually suppressed."
But he also joined Justice Alito's concurrence in the judgment (which Justice Gorsuch also joined), which took a less platform-friendly approach. An excerpt:
NetChoice argues in passing that it cannot tell us how its members moderate content because doing so would embolden "malicious actors" and divulge "proprietary and closely held" information. But these harms are far from inevitable. Various platforms already make similar disclosures—both voluntarily and to comply with the European Union's Digital Services Act—yet the sky has not fallen. And on remand, NetChoice will have the opportunity to contest whether particular disclosures are necessary and whether any relevant materials should be filed under seal. Various NetChoice members already disclose in broad strokes how they use algorithms to curate content….
Just as NetChoice failed to make the showing necessary to demonstrate that the States' content-moderation provisions are facially unconstitutional, NetChoice's facial attacks on the individual-disclosure provisions also fell short. Those provisions require platforms to explain to affected users the basis of each content-censorship decision. Because these regulations provide for the disclosure of "purely factual and uncontroversial information," they must be reviewed under Zauderer's framework, which requires only that such laws be "reasonably related to the State's interest in preventing deception of consumers" and not "unduly burde[n]" speech.
For Zauderer purposes, a law is "unduly burdensome" if it threatens to "chil[l] protected commercial speech." Here, NetChoice claims that these disclosures have that effect and lead platforms to "conclude that the safe course is to … not exercis[e] editorial discretion at all" rather than explain why they remove "millions of posts per day." …
In the lower courts, NetChoice did not even try to show how these disclosure provisions chill each platform's speech. Instead, NetChoice merely identified one subset of one platform's content that would be affected by these laws: billions of nonconforming comments that YouTube removes each year. But if YouTube uses automated processes to flag and remove these comments, it is not clear why having to disclose the bases of those processes would chill YouTube's speech. And even if having to explain each removal decision would unduly burden YouTube's First Amendment rights, the same does not necessarily follow with regard to all of NetChoice's members.
NetChoice's failure to make this broader showing is especially problematic since NetChoice does not dispute the States' assertion that many platforms already provide a notice-and-appeal process for their removal decisions. In fact, some have even advocated for such disclosure requirements. Before its change in ownership, the previous Chief Executive Officer of the platform now known as X went as far as to say that "all companies" should be required to explain censorship decisions and "provide a straightforward process to appeal decisions made by humans or algorithms." Moreover, as mentioned, many platforms are already providing similar disclosures pursuant to the European Union's Digital Services Act. Yet complying with that law does not appear to have unduly burdened each platform's speech in those countries. On remand, the courts might consider whether compliance with EU law chilled the platforms' speech….
I agree with Justice Jackson here. The soundest approach to take would have been to uphold these laws against facial challenge on grounds that a very substantial amount of the conduct proscribed is constitutionally proscribable, and then let the plaintiffs pursue as-applied challenges for specific features and issues.
I also agree with Justice Alito that the Court should not have accepted at face value the social media companies’ claims that they are in the business of “curating content” and that they are functionally no different from a newspaper. Rather, I think this issue should have been subjected to challenge and proof.
Finally, the majority opinion does suggest one possible outcome. It suggests that social media companies may be able to remove posts from news feeds that they themselves create, but can neither remove posts entirely so as to prevent users from finding them if they themselves search for them, nor ban users entirely. That is, a sharp distinction needs to be made between the raw original user-created content, which law can provide belongs to and is the speech of the users, and the derived product that the social media companies make. The social media companies may be able to control the derived product but not the original raw content, so that if people want to find posts the company does not like, the company cannot prevent them from finding them. Users might choose to simply not pay attention to the company's derived product if they don't want it, and do their own searches for raw posts to get the content they want.
Everyone agrees that's true, and it has no bearing on any issue in the case.
They're in the business of paid advertising views. Same as a newspaper or TV show.
Their curation stems from fear of laws, from social ostracism (i.e., advertisers fleeing), and from the desire to keep the powerful of both parties from crushing Section 230 if they don't "voluntarily curate" according to those officials' desires.
Suppose a user spams "MAKE MONEY AT HOME" or "VIAGR@ REALLY WORKS" spam, or just v̴̗̮̠̼̦́̀̅̈̏̄ï̸̢̲̫͙̼̯̬̆̏s̷̙̤̖̜̞̬̈̊͗̿u̸̘̙̎́̒̀̾̏͘á̸͖̋̑͋͛̒̄͘͝l̴͓͉̥̘̠͉͇̉̂͂̐̓l̷̖͍̀̀͊̌̆̆̅͗͊ͅy̷̧̓̾̓̀ ̵̛̘̼͆̄̈́̉̓̎̆̈͝ǒ̵̢̦̯̩̈́̏̈́͌̅̕f̷̺̯̠̮͈̻͌̏̉̍̀̓̉͜͠f̴̧̧̛͉̥͈̥̀̈̏̈͜͝ͅḙ̵̛̟̱͇̞͈̲̥̉̇͂̋͝͝n̶̢̖͇̣̑̈́s̵͚̞̺̺͖͕͂͗̽͌̍ì̸̢̧̯̜͚̘̩̫̺̀v̷̡͎̭̭̟͈̲̘͛̂͆̈́͜͜͝͝e̸̛̳̺̤̽ ̶̱̗̩́̑̈́̅̄͜t̷̢̺̩̝̝͛͑̑͘e̵̡̡̫̭̪̊̏̈́̐̀͠x̷̛̖͙͇̐̎̓̿ţ̸̮̣̩͚̔̎̒. It seems fairly trivial to note that almost all websites would prefer to ban this user because they are a nuisance that makes the site worse for everyone else. Any site that doesn't ban this kind of shit gets sunk by it. So websites should have the ability to ban posters. Perhaps you argue, okay, but these bans would be in some sense viewpoint neutral. I'm not sure advocacy of a particular medical view or views about core entrepreneurship are not viewpoint-impacted, but let's assume some model law that exempts this kind of stuff from protection.
Now suppose someone spams, in the exact same manner, something that is politics adjacent. Say it's "BONG HITS 4 JESUS", or if you don't think that's political enough to be political, say it's just "LOCK HER UP!" spammed repeatedly. Let's say the user in question makes a spamming robot that automatically posts the post thousands of times in response to every single post, whether it's on a blog or on a social media site. It joins every single Facebook group, it posts on every single hashtag, it replies to every single public post. Now let's say this person registers 100,000 accounts instead of 1.
The idea that the platform _must_ host this content -- even if they can turn off display in certain feed-related contexts -- is nuts. Clearly platforms must be allowed to ban people, including people expressing ideas that can at least masquerade as political, core 1A speech.
To me the question then becomes not "should platforms be able to ban speech that is politically impacted?" (a clear yes) but rather "should we carve out any kinds of protections for particular types of speech?". I feel like this is something probably better done through a consent decree rather than a particular constitutional ruling.
Repeated posting of essentially the same content, particularly when it is in unrelated threads? Not protected.
Spam that is commercial? Not protected - however, a content provider MIGHT decide to agree to allow it, IF they PAY for the privilege, as a form of advertising.
Threats? Not protected.
Comments that hurt your fee-fees? Protected. The recipient might, at their discretion, choose to hide that poster's comments, without penalty.
Appeal from bans? Absolutely. These original bans MUST provide a reason, if asked, to the user.
To be fair to the commercial sites, MANY of these "I've been Banned!" users are posting indiscriminately and with hostility, and deliberately place the content in question in threads/forums where they KNOW it will get pushback. A one-time offense that MIGHT have been accidental? Appeal should win, on the proviso that it is not repeated.
Where the media site might have a problem is when it deliberately inserts censors with a specific bias against one type of viewpoint (Christian, Gay, Muslim, Pro-Israeli). That lends weight to the idea that this is not just about spam.
I'm talking about social media sites that are generally open to the public. If a site limits its reach to a specific group/theology/philosophy, it may be exempted from complaints from people not in its wheelhouse. Requiring otherwise would be the equivalent of forcing a private club to take in anyone.