California Wants to Hold Social Media Platforms Liable for User Posts Containing Bias- and Political-Hostility-Motivated Threats
California SB 771, which is now on Governor Newsom's desk for signature, would add a new statute that provides the following (some structure added):
(a) A social media platform [that has >$100M in annual revenues] that
- violates [California Civil Code] Section 51.7, 51.9, 52, or 52.1 through its algorithms that relay content to users
- or aids, abets, acts in concert, or conspires in a violation of any of those sections,
- or is a joint tortfeasor in a violation of any of those sections,
shall … be liable to a prevailing plaintiff for a civil penalty for each violation sufficient to deter future violations but not to exceed [$1M for knowing violations, and $500K for reckless violations, potentially doubled if the platform knew, or should have known, that the plaintiff was a minor].
(b) (1) For purposes of this section, deploying an algorithm that relays content to users may be considered to be an act of the platform independent from the message of the content relayed.
(2) A platform shall be deemed to have actual knowledge of the operations of its own algorithms, including how and under what circumstances its algorithms deliver content to some users but not to others.
To explain (with some oversimplification) the statutory cross-references,
- Section 51.7 bans violence and threats of violence based on actual or perceived political affiliation, position in a labor dispute, or race, religion, immigration status, etc.
- Section 51.9 bans sexual harassment in a wide range of business relationships.
- Section 52 imposes liability for violations of 51.7, of 51.9, and of California bans on discrimination in places of public accommodation, and discrimination and boycotts by businesses.
- Section 52.1 bans interfering ("whether or not under color of law") by threat, intimidation, or coercion with the exercise of any constitutional rights (including free speech rights).
The legislature's background findings, from section 1 of SB 771, seem to suggest the legislature is concerned specifically about "targeted threats, violence, and coercive harassment, particularly when directed at historically marginalized groups," "especially … in light of rising incidents of hate-motivated harm, as documented across the state":
- [H]ate crimes involving anti-immigrant slurs increased by 31 percent ….
- [A] 400-percent rise in anti-LGBTQ+ disinformation and harmful rhetoric on major social media platforms.
- [A]nti-Jewish bias events rose by 52.9 percent and anti-Islamic bias events rose by 62 percent in 2023.
- Paid advertisements promoting violence against women, including language calling for beatings and killings, [have been] successfully placed and distributed on major social media platforms.
The legislature adds that "[t]he purpose of this act is not to regulate speech or viewpoint but to clarify that social media platforms, like all other businesses, may not knowingly use their systems to promote, facilitate, or contribute to conduct that violates state civil rights laws."
Now the law of course already bans aiding and abetting criminal or tortious behavior. But, as the Supreme Court concluded with regard to federal law in Twitter, Inc. v. Taamneh (2023), such liability generally requires some special steps on the defendant's part to aid the illegal actions. In particular, the Court rejected an aiding and abetting claim based on Twitter's knowingly hosting ISIS material and its algorithm supposedly promoting it, because Twitter didn't give ISIS any special treatment:
- "ISIS was able to upload content to the platforms and connect with third parties, just like everyone else."
- "[D]efendants' recommendation algorithms matched ISIS-related content to users most likely to be interested in that content—again, just like any other content."
- "All the content on [the] platforms is filtered through these algorithms, which allegedly sort the content by information and inputs provided by users and found in the content itself. As presented here, the algorithms appear agnostic as to the nature of the content, matching any content (including ISIS' content) with any user who is more likely to view that content."
- "[T]here are no allegations that defendants treated ISIS any differently from anyone else."
But the new California law seems to intentionally set forth aiding-and-abetting liability under California law that goes well beyond what Taamneh recognized under federal law. Coupled with the new statute's subsection (b)(2)—"A platform shall be deemed to have actual knowledge of the operations of its own algorithms, including how and under what circumstances its algorithms deliver content to some users but not to others"—the knowledge element required under the existing California tort law of aiding and abetting will often be satisfied.
Say a platform's algorithm delivers content to users that contains threats that are based on political affiliation, race, religion, sexual orientation, etc., just because users have shown an interest in the content (not because of any purposeful desire to promote such threatening content in general). The platform may be liable, on the theory that it is "deemed to have actual knowledge" of what its algorithms do. Likewise if the posts contain threats aimed at interfering with free speech, free exercise of religion, and other rights. And of course if platforms are required (on pain of liability) to take down illegal threats, they will likely also take down other material that they're worried might be seen as threatening by a future plaintiff, judge, and jury.
I'm pretty sure that such liability will be precluded by 47 U.S.C. § 230. Courts have held that, under § 230, online providers are immune from liability for speech posted by their users, even under an aiding and abetting theory, unless they deliberately craft their sites to help promote illegal conduct. Here's an example, from Wozniak v. YouTube, LLC (Cal. App. 2024) (yes, Apple co-founder Steve Wozniak):
Here, plaintiffs have not alleged that defendants undertook any … acts to actively and specifically aid the illegal behavior. Instead, they allege only that YouTube's neutral algorithm results in recommending the scam videos to certain targeted users. For instance, the [Complaint] alleges that "YouTube's state-of-the-art algorithm tailors its recommended videos to its users based on a variety of personal information and data that YOUTUBE and GOOGLE collect about their users, including 'clicks, watch time, likes/dislikes, comments, freshness, and upload frequency.'" There is no allegation that YouTube has done anything more than develop and use a content-neutral algorithm.
Courts have consistently held that such neutral tools do not take an interactive computer service outside the scope of section 230 immunity. In Dyroff v. Ultimate Software Group, Inc. (9th Cir. 2019), for instance, the plaintiff was the family of a man who had died after using fentanyl-laced heroin, which he had acquired following communications on defendant's online messaging board. The plaintiff contended the messaging board created content because it "used features and functions, including algorithms, to analyze user posts … and recommend other user groups." The Ninth Circuit rejected the argument, holding that "[t]hese functions—recommendations and notifications—[were] tools meant to facilitate the communication and content of others," and were "not content in and of themselves."
The online message board employed neutral tools similar to the ones challenged by plaintiffs here, and there is no allegation that the algorithms treat the scam content differently than any other third party content. (Ibid.; see also Gonzalez, supra, 2 F.4th at p. 896 ["a website's use of content-neutral algorithms, without more, does not expose it to liability for content posted by a third party"]; Roommates, supra, 521 F.3d at p. 1171 [website not transformed into content creator by virtue of supplying neutral tools that deliver content in response to user inputs]; cf. Liapes, supra, 95 Cal.App.5th at p. 929 [Facebook's tools were not neutral—rather than merely proliferate and disseminate content as a publisher, they created, shaped, and developed content by requiring users to provide information used to contribute to discriminatory unlawfulness].)
The last-cited case, Liapes, helps illustrate where § 230 immunity might not apply: There, Facebook was held potentially liable for discrimination because "It designed and created an advertising system, including the Audience Selection tool, that allowed insurance companies to target their ads based on certain characteristics, such as gender and age." But absent such specific design decisions that promote forbidden discriminatory advertising, § 230 prohibits liability that's based on simply deploying a "neutral algorithm [that] results in recommending [certain material] to certain targeted users" "based on a variety of personal information and data that" the platform collects about users, "including 'clicks, watch time, likes/dislikes, comments, freshness, and upload frequency.'"
So if I'm right, § 230 will preclude California law from imposing such liability on platforms. But it does appear that the California Legislature wants to impose that liability.
"neutral algorithm"
Are any modern social media algorithms neutral?
-- Plain reverse-chronological feed: newest posts first, no engagement ranking
SELECT * FROM USER_POSTS
WHERE POSTER_ID IN (SELECT FOLLOWER_ID FROM FOLLOWING WHERE ID = :current_user)
ORDER BY POST_TS DESC;
Is about as neutral as you get, and depending on the platform you can get that or something close.
The thing is, most people don't respond to that as readily as the more sophisticated algorithms.
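By way of contrast, here's a minimal sketch of what a "more sophisticated," non-neutral, engagement-weighted feed might look like. The POST_ENGAGEMENT table, its CLICKS and LIKES columns, the POST_ID join key, and the time-decay weighting are all hypothetical, invented purely for illustration (PostgreSQL syntax):

-- Hypothetical engagement-ranked feed. POST_ENGAGEMENT, POST_ID, CLICKS, and
-- LIKES are invented for illustration; real platforms weigh many more signals
-- (watch time, comments, freshness, etc.).
SELECT p.*
FROM USER_POSTS p
JOIN POST_ENGAGEMENT e ON e.POST_ID = p.POST_ID
WHERE p.POSTER_ID IN (SELECT FOLLOWER_ID FROM FOLLOWING WHERE ID = :current_user)
-- Rank by weighted engagement, decayed by the post's age in hours
ORDER BY (e.CLICKS + 2 * e.LIKES)
         / (1 + EXTRACT(EPOCH FROM NOW() - p.POST_TS) / 3600.0) DESC;

Even a toy version like this stops being "agnostic as to the nature of the content" the moment engagement signals correlate with the content itself.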
And you clowns are worried about Trump.
Our minds are so vast that they can worry about more than one thing at a time!
Back in the dark ages, when I was employed, I had a sign in my office: "Do not undertake vast projects with half-vast management."
(for some reason, my next review mumbled on about an attitude problem)
Did they instruct you to prepare some jokes about bonus being two words?
Wait, I thought California was all about free speech this week?
No, no you didn't.
I can't keep up. One day they're screaming about how their free speech is being attacked; the next day they're attacking others' free speech.
\0/
The OP describes a terrible proposal. Like every other proposal to conditionally regulate publication content, it is overt state censorship. No matter the harm being done to the public life of the nation by current internet publishing practices, authorization of state censorship by law is worse.
Also, there is a proven former solution, which will work better if renewed: get rid of Section 230, without condition. That would return publishing to a legal regime of private editing prior to publication, and it would reestablish publishing as a diversified activity, in terms of both business models and content choices, administered by a myriad of private-market decision makers.
Experience with that style of publishing regime prior to the internet proved it resistant to government censorship. Law aside, the practical difficulty of censoring so many private publishers defeated almost every government impulse even to try. Where such efforts were made, the small number of cases made it easy for courts to strike them down and decide instead on behalf of expressive freedom.
That system worked splendidly for many decades. It made American private publishing the envy of the world. It made expressive freedom an ornament of American civilization.
The challenge now is to think constructively about ways to return private management of media to a broadly diversified and mutually competitive marketplace—but this time by taking maximal advantage of the cost savings and democratizing power of the internet.
Reflexive rejection of that challenge by internet fans without the background to think systematically about publishing remains a political obstacle. Nothing constructive can happen while that obstacle stands immovable. Internet utopians understand neither the impracticality nor the expressive limitations of the solutions they demand, and those utopians represent one of the few genuinely multi-partisan political blocs left in the nation.
I remain pessimistic about prospects for near-term improvement. Maybe it will take still more frustration with California-style censorship schemes to change the political climate. Right-wingers especially would do well to imagine a US media future as consolidated, as online, and as unified under a national regime as it presently is under Trump/MAGA, but this time under left-wing media oligarchs, just as bent on national media control as Trump/MAGA is now.
That is the real future of US media—whether left or right—unless dispersed and diversified private control of the public life of the nation is restored. The first step to do that is to get rid of Section 230, unconditionally.
I lived in California for most of my 20s and I have incredible affection for it.
But it is a single-party state and very silly at times. This does seem like the kind of insta-fail thing they'd try.
Anyhow, the sections incorporated by reference seem to pose 1A problems as well. Here are the gists I found:
51.7 - "(b)(1) All persons within the jurisdiction of this state have the right to be free from any violence, or intimidation by threat of violence, committed against their persons or property because of political affiliation, or on account of any characteristic"
- (2) For purposes of this subdivision, “intimidation by threat of violence” includes, but is not limited to, making or threatening to make a claim or report to a peace officer or law enforcement agency that falsely alleges that another person has engaged in unlawful activity or in an activity that requires law enforcement intervention, knowing that the claim or report is false, or with reckless disregard for the truth or falsity of the claim or report....
51.9 - "(a) A person is liable in a cause of action for sexual harassment under this section when the plaintiff proves all of the following elements:
(1) There is a business, service, or professional relationship between the plaintiff and defendant...."
(2) The defendant has made sexual advances, solicitations, sexual requests, demands for sexual compliance by the plaintiff, or engaged in other verbal, visual, or physical conduct of a sexual nature or of a hostile nature based on gender, that were unwelcome and pervasive or severe.
52 - "(a) Whoever denies, aids or incites a denial, or makes any discrimination or distinction contrary to Section 51, 51.5, or 51.6, is liable for each and every offense for the actual damages, and any amount that may be determined by a jury."
52.1 - "(b) If a person or persons, whether or not acting under color of law, interferes by threat, intimidation, or coercion, or attempts to interfere by threat, intimidation, or coercion, with the exercise or enjoyment by any individual or individuals of rights secured by the Constitution or laws of the United States, or of the rights secured by the Constitution or laws of this state, the Attorney General, or any district attorney or city attorney may bring a civil action for injunctive and other appropriate equitable relief in the name of the people of the State of California, in order to protect the peaceable exercise or enjoyment of the right or rights secured."
Well, it doesn't seem to be as poorly conceived as the Florida or Texas laws, but I haven't had a chance to digest it in detail, so it may well be just as bad.
My initial response is that it undermines Sec 230, and not in a good way. Perhaps someone can explain why it's not an undesirable governmental restriction on speech, but I'm not going to twist myself into a pretzel to support this kind of legislation simply because it comes from the left instead of the right.
Absent Section 230, is there a First Amendment argument against this law?
In Moody v. NetChoice the Court strongly implied social media platforms have a First Amendment right to curate and present speech. Of course, there is no right to curate and present speech that is not protected by the First Amendment. But in Eugene's hypothetical, a neutral algorithm curated and presented unprotected speech. Can the platform provider tweak its algorithm to screen out unprotected speech? If not, perhaps the law is overbroad, chilling the curation and presentation of protected speech?
The line between protected and unprotected speech is not one well suited to algorithms. It's too context-dependent.
You've got obscenity (dependent on community standards), defamation (dependent on the actual truth or falsity of the statement, and often on what the speaker subjectively knew), fraud (very intent-dependent), incitement (dependent upon whether the speech is intended to produce imminent lawless action and is likely to produce such action), and speech integral to criminal conduct (dependent upon the entirety of multiple criminal codes).
I mean, some idiots on some random website start talking about feeding people into woodchippers, and even the feds can't seem to make an accurate determination of whether that counts as a true threat. You think some algorithm will know?
Hey! We weren't on a 'random' website;)
I expect the boycott of California to start posthaste, and for Jon Stewart, John Oliver, and South Park to rip into the state and Gavin Newsom (if he signs it).
Just kidding!