The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Journal of Free Speech Law: "Carriage and Removal Requirements for Internet Platforms: What Taamneh Tells Us," by Daphne Keller
This just-published article is here; the Abstract:
The Supreme Court's 2023 Twitter v. Taamneh ruling arrived at a moment of potentially great change in the laws governing Internet platforms. The ruling, including Justice Thomas's factually dubious descriptions of platforms as passive and agnostic toward user content, will have major influence in two categories of future cases.
- Most obviously, Taamneh will affect cases in which plaintiffs seek to hold platforms liable for unlawful material posted by users.
- Less obviously, Taamneh is sure to affect "must-carry" claims or mandates for platforms to carry content against their will—including major cases on the Supreme Court's 2023–24 docket about laws in Texas and Florida.
This piece explains how Taamneh may be invoked in both kinds of cases, and explores the strengths and limitations of parties' likely arguments—in particular, arguments based on platforms' putative passivity. It also examines important intersections between must-carry claims and claims seeking to hold platforms liable for unlawful user content. It argues that ignoring those connections may cause cases in one area of law to distort outcomes in the other. Taamneh itself may already have done just that.
And an excerpt from the Introduction:
Efforts to regulate platforms in the U.S. are often stymied by a fundamental disagreement. Some voters and leaders—mostly Democratic—want platforms to remove more content, including "lawful but awful" material like medical misinformation or hate speech. Some—mostly Republicans—want platforms to remove less content. These same competing impulses appear in litigation. Some plaintiffs sue to prevent platforms from removing content, and others to hold them liable for failing to do so. Courts, unlike Congress, may soon be forced to resolve the tensions between these competing imperatives. Their decisions will almost certainly be shaped by a 2023 Supreme Court ruling, Twitter v. Taamneh.
Taamneh was one of two closely linked cases about platforms' responsibility for user content. In it, the Court unanimously held that Twitter, Facebook, and YouTube were not liable for harms from ISIS attacks. In the second case, Gonzalez v. Google, the Court declined to decide whether the platforms were also immunized under the law known as Section 230. Litigation on other claims that would effectively hold platforms liable for failing to remove user speech—which I will call "must-remove" cases—continues in lower courts.
Questions about the opposite legal pressure, under so-called must-carry laws that prevent platforms from removing certain user speech, will be considered by the Court this Term in NetChoice v. Paxton and Moody v. NetChoice. Those cases challenge must-carry laws in Texas and Florida. Platforms argue that the laws violate their First Amendment rights, by stripping them of editorial discretion to set speech policies and remove harmful content.
Must-carry and must-remove claims generally come up in separate cases and are treated as unrelated by courts. But they are deeply intertwined. Taamneh, as the Court's first major holding on platforms and speech since the 1990s, will matter to both. This piece examines the connections between must-remove and must-carry claims through the lens of the tort law questions examined in Taamneh. It argues that Taamneh's emphasis on platform "passivity," which permeates the opinion, may have troubling ramifications. Indeed, Texas relied heavily on Taamneh in a brief supporting its must-carry law, arguing that platforms' passivity demonstrates that platforms have no First Amendment interest in setting editorial policies.
Legal disputes like the one in Taamneh, about whether plaintiffs have any cause of action against platforms in the first place, are relatively common in litigation. But academic and public policy discussion tends not to focus on these issues about the merits of plaintiffs' underlying claims. Instead, academics and policymakers often focus on the other two major sources of law governing platforms and speech: immunity statutes and the Constitution. Statutory immunities under laws like Section 230 have been the subject of noisy debates for a number of years, while constitutional questions rose to prominence more recently. It is clear that the First Amendment sets real limits on the laws that govern platforms. But it is less clear exactly what those limits are.
Basic liability standards, immunity standards, and First Amendment standards all combine to shape platform regulation…. A predictable consequence of laws that hold platforms liable or potentially liable for user speech is that platforms will, out of caution, remove both unlawful and lawful material in order to protect themselves. As the Supreme Court has noted in cases about movie theaters and video distributors, speech intermediaries can be chilled more easily than ordinary speakers and publishers. Platforms generally have less motivation to defend users' speech than users themselves do, and less ability to ascertain legally relevant facts. As recognized in a midcentury Supreme Court case about bookstores, Smith v. California, this can be a problem of constitutional dimension. Smith struck down a law that held bookstores strictly liable for obscene books, noting that such unbounded liability would foreseeably lead booksellers to err on the side of caution and "restrict the public's access to forms of the printed word which the State could not constitutionally suppress directly." The resulting "censorship affecting the whole public" would be attributable to state action, and "hardly less virulent for being privately administered."
In Taamneh, the Court steered well clear of examining these First Amendment issues by rejecting plaintiffs' claims under basic tort law. That doesn't always work, though—as at least one amicus brief pointed out, and as I will discuss here. Courts' resolutions of claims like the one in Taamneh can shape both platforms' editorial policies and users' opportunities to speak. As a result, free expression considerations can in turn play a structuring role when courts interpret the elements of such tort claims.
Editor's Note: We invite comments and request that they be civil and on-topic. We do not moderate or assume any responsibility for comments, which are owned by the readers who post them. Comments do not represent the views of Reason.com or Reason Foundation. We reserve the right to delete any comment for any reason at any time. Comments may only be edited within 5 minutes of posting. Report abuses.
As long as section 230 can be used to duck lawsuits, nothing should be removed that isn't accompanied by referral for criminal charges.
Sec 230 was passed to foreclose abusive lawsuits over non-controversial decisions about spam, malware, and off-topic contributions. Malware is the only one of those three even potentially eligible for criminal charges, but even that's not practical since it's almost never attributable.
Your proposed rule throws the baby out with the bathwater. Yes, there are some bad moderation decisions and yes, some of them are hiding behind Sec 230. But the vast majority of moderation decisions protected by Sec 230 are entirely non-controversial.
Keller writes as someone deeply informed about the twists, turns, complexities and bafflements of internet speech issues. But she apparently cannot see that the melange of conflicting imperatives she posits can never be untangled, because it offers no fundamental underlying principle to guide the necessary choices.
As with most critiques of internet speech issues, everything in Keller's essay is treated as ad hoc, unprecedented, and protean. In short, Keller's article is yet another—albeit better-informed-than-usual—instance of internet publishing utopianism.
As always, that kind of advocacy displays its singular field mark—that every remedy proposed will if implemented dismantle the practical means to deliver the result proposed. Keller's emphasis on tensions she catalogues shows that she at least somewhat understands that, but not well enough to steer away from her turbulent style of engagement.
Strikingly, Keller pays least attention to the part of the dilemma that would prove most productive to understand better—the role of internet platforms as 1A protected publishers with rights to publish at pleasure whatever they please, and to decline to publish anything else. That Keller again and again chooses to cast the entire debate in partisan political terms does not help her find her way to that ultimately more-practical consideration.
Here is a singular example, which ought to inform every aspect of this debate, but which will probably fade quickly in memory, and thus encourage a quick return to time- and effort-wasting utopianism. Elon Musk, by republishing this week one example of an anti-semitic trope he should have avoided, took a gigantic business hit among a group of his most-coveted advertising customers. In doing that, Musk actually put in question the future of his previously gigantic, but already shaky publishing business.
You know what will have the absolute worst effect on expressive freedom on the internet? Legal demands which impair press freedom will do that, if they deprive publishers smarter than Musk of ability to control published content at will. Absent that freedom, publishers will struggle to match business, content, and audience curation decisions to meet the needs of the markets they serve. Publishers who fail at that task go out of business, taking with them whatever expressive opportunities that success might have enabled.
Free content management is the principal means publishers have available to continue publishing long term, and thus continue to provide expressive outlet for the folks it pleases Keller to call "users." They would more accurately be termed "contributors." Or in other cases, "audience members."
Keller seems to prioritize expressive freedom as an ideal. She will be wiser to set that ideal aside for a bit, and focus instead on how to help organize the practicalities of a transformed internet publishing industry. That reorganization must involve a mix of new ideas, along with some surprisingly old-fashioned-looking ones.
Among the latter will be public policy to bring back purely private gate-keeping—to keep that task out of the hands of government. In short, there must be near universal private editing of everything, prior to publication, lest rival pressures to put government in the editing role force political decisions which favor some factions and disadvantage others.
A better internet publishing policy will also act to encourage a far more competitive marketplace, comprising literally tens of thousands of private publishers, and getting rid of internet giantism. Those private publishers will enjoy opportunity created by internet economies to take advantage of low barriers to entry.
Crucially, thousands of publishers will represent a plethora of competing opinion communities, and act to exploit as business opportunities every part of the political spectrum. Would-be contributors turned aside by one publisher should have little trouble finding another who is more receptive.
If all else fails, anyone should be free—and it should be reasonably practical with moderate expenditure—to start a new publishing business, and manage it to express the new publisher's own opinions, and those of like-minded others. Or to publish opinions of those who disagree, according to the publisher's pleasure and sense of business advantage.
What I have described is essentially a much-simpler free market means to deal with essentially all of the dilemmas Keller lays out in such profusion. It is a means which offers the further advantage of being an evolutionary descendant of a previously successful publishing regime which for at least a century was broadly regarded as an ornament to civilization. We know it worked. With additional practical advantages supplied by internet efficiencies and economies, it ought to work even better now.
What about parts of the internet stack that aren't themselves publishers of anything, such as server hosting companies and ISPs?
ISPs should be common carriers; they're a semi-monopoly in any given location. You can, in a pinch, host your own content. You can't splice in your own connection to the Internet.
Daphne Keller is probably better informed about these issues than anybody. Definitely worth a read.
I don't agree with her on everything, but her opinions are always both well reasoned and based on facts.
The oral arguments in Twitter v. Taamneh and Gonzalez v. Google were amusing because no one in the courtroom actually understood any element of the operation of a social medium platform like Twitter or Google.
For example, if recommendations with links to third party content appear on a user's display, the recommendations are contained in a document that the social medium platform creates for the user. The document is a product and speech of the social medium platform.
That's one possibility. Another is that Martillo doesn't actually understand any element of the operation of a social media platform like Twitter or Google.
I doubt anyone who logs into Facebook or Twitter thinks these sites are in any way similar to the NY Times, CNN, or WaPo sites. Facebook and Twitter (X) are not just giant "letters to the editor" sections.