N.Y. Appellate Court Rejects Addictive Design Theory in Lawsuit Against Social Media Defendants Over Buffalo Shootings
[UPDATE: A New York lawyer writes that the plaintiffs will be entitled to have the case heard by New York's highest court, if they so wish (as I assume they would): "I wanted to note that this case appears to be an automatic appeal as of right to the NY Court of Appeals due to the two dissenters. CPLR 5601(a). The decision seemed to indicate that the case is fully dismissed, which is the requirement for finality in NY. CPLR 5611."]
An excerpt from Patterson v. Meta Platforms, Inc., decided Friday by a panel of the New York intermediate appellate court, in an opinion by Judge Stephen Lindley joined by Judges John Curran and Nancy Smith:
These consolidated appeals arise from four separate actions commenced in response to the mass shooting on May 14, 2022 at a grocery store in a predominately Black neighborhood in Buffalo. The shooter, a teenager from the Southern Tier of New York, spent months planning the attack and was motivated by the Great Replacement Theory, which posits that white populations in Western countries are being deliberately replaced by non-white immigrants and people of color….
[S]urvivors of the attack and family members of the victims … [sued various parties, including] the so-called "social media defendants," i.e., [the companies responsible for Facebook, Instagram, Snap, Google, YouTube, Discord, Reddit, Twitch, Amazon, and 4chan], all of whom have social media platforms that were used by the shooter at some point before or during the attack…. According to plaintiffs, the social media platforms in question are defectively designed to include content-recommendation algorithms that fed a steady stream of racist and violent content to the shooter, who over time became motivated to kill Black people.
Plaintiffs further allege that the content-recommendation algorithms addicted the shooter to the social media defendants' platforms, resulting in his isolation and radicalization, and that the platforms were designed to stimulate engagement by exploiting the neurological vulnerabilities of users like the shooter and thereby maximize profits…. According to plaintiffs, the addictive features of the social media platforms include "badges," "streaks," "trophies," and "emojis" given to frequent users, thereby fueling engagement. The shooter's addiction to those platforms, the theory goes, ultimately caused him to commit mass murder….
Plaintiffs concede that, despite its abhorrent nature, the racist content consumed by the shooter on the Internet is constitutionally protected speech under the First Amendment, and that the social media defendants cannot be held liable for publishing such content. Plaintiffs further concede that, pursuant to section 230, the social media defendants cannot be held liable merely because the shooter was motivated by racist and violent third-party content published on their platforms. According to plaintiffs, however, the social media defendants are not entitled to protection under section 230 because the complaints seek to hold them liable as product designers, not as publishers of third-party content.
The majority concluded that section 230 immunity protects the defendants against the plaintiffs' claims:
Section 230 provides, in pertinent part, that "[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." … "By its plain language, [section 230] creates a federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service." …
Based on our reading of the complaints, we conclude that plaintiffs seek to hold the social media defendants liable as publishers of third-party content. We further conclude that the content-recommendation algorithms used by some of the social media defendants do not deprive those defendants of their status as publishers of third-party content. It follows that plaintiffs' tort causes of action against the social media defendants are barred by section 230….
If content-recommendation algorithms transform third-party content into first-party content, … then Internet service providers using content-recommendation algorithms (including Facebook, Instagram, YouTube, TikTok, Google, and X) would be subject to liability for every defamatory statement made by third parties on their platforms. That would be contrary to the express purpose of section 230, which was to legislatively overrule Stratton Oakmont, Inc. v Prodigy Servs. Co. (N.Y. trial ct. 1995), where "an Internet service provider was found liable for defamatory statements posted by third parties because it had voluntarily screened and edited some offensive content, and so was considered a 'publisher.'" …
In any event, even if we were to … conclude that the social media defendants engaged in first-party speech by recommending to the shooter racist content posted by third parties, it stands to reason that such speech ("expressive activity" as described by the Third Circuit) is protected by the First Amendment under Moody v. NetChoice, LLC (2024)….
In the broader context, the dissenters accept plaintiffs' assertion that these actions are about the shooter's "addiction" to social media platforms, wholly unrelated to third-party speech or content. We come to a different conclusion. As we read them, the complaints, from beginning to end, explicitly seek to hold the social media defendants liable for the racist and violent content displayed to the shooter on the various social media platforms. Plaintiffs do not allege, and could not plausibly allege, that the shooter would have murdered Black people had he become addicted to anodyne content, such as cooking tutorials or cat videos. {It cannot reasonably be concluded that the allegedly addictive features of the social media platforms (regardless of content) caused the shooter to commit mass murder, especially considering the intervening criminal acts by the shooter, which were … "not foreseeable in the normal course of events" and therefore broke the causal chain.}
Instead, plaintiffs' theory of harm rests on the premise that the platforms of the social media defendants were defectively designed because they failed to filter, prioritize, or label content in a manner that would have prevented the shooter's radicalization. Given that plaintiffs' allegations depend on the content of the material the shooter consumed on the Internet, their tort causes of action against the social media defendants are "inextricably intertwined" with the social media defendants' role as publishers of third-party content…. It was the shooter's addiction to white supremacy content, not to social media in general, that allegedly caused him to become radicalized and violent….
Judges Tracey Bannister and Henry Nowak dissented; an excerpt:
"[W]hy do I always have trouble putting my phone down at night? … It's 2 in the morning … I should be sleeping … I'm a literal addict to my phone[.] I can't stop cons[u]ming." These are the words of a teenager who, on May 14, 2022, drove more than 200 miles to Buffalo to shoot and kill 10 people and injure three more at a grocery store in the heart of a predominantly Black community.
Plaintiffs in these consolidated appeals allege that the shooter did so only after years of exposure to the online platforms of the so-called "social media defendants"—… platforms that, according to plaintiffs, were defectively designed. Plaintiffs allege that defendants intentionally designed their platforms to be addictive, failed to provide basic safeguards for those most susceptible to addiction—minors—and failed to warn the public of the risk of addiction. According to plaintiffs, defendants' platforms did precisely what they were designed to do—they targeted and addicted minor users to maximize their engagement. Plaintiffs allege that the shooter became more isolated and reclusive as a result of his social media use and addiction, and that his addiction, combined with his age and gender, left him particularly susceptible to radicalization and violence—culminating in the tragedy in Buffalo….
[W]e reject the foundation upon which the majority's opinion is built—that plaintiffs' causes of action necessarily seek to hold defendants responsible for radicalizing the shooter given their status "as the publisher[s] or speaker[s] of any information provided by another information content provider," i.e., that plaintiffs only seek to hold defendants liable for the third-party content the shooter viewed. If that were the only allegation raised by plaintiffs, we would agree with the majority. But it is not.
The operative complaints … also allege that defendants' platforms are "products" subject to strict products liability that are addictive—not based upon the third-party content they show but because of the inherent nature of their design. Specifically, plaintiffs allege that defendants' platforms: "prey upon young users' desire for validation and need for social comparison," "lack effective mechanisms … to restrict minors' usage of the product," have "inadequate parental controls" and age verification tools that facilitate unfettered usage of the products, and "intentionally place[ ] obstacles to discourage cessation" of the applications. Plaintiffs allege that the various platforms "send push notifications and messages throughout the night, prompting children to re-engage with the apps when they should be sleeping." They further allege that certain products "autoplay" video without requiring the user to affirmatively click on the next video, while others permit the user to "infinite[ly]" scroll, creating a constant stream of media that is difficult to close or leave.
Plaintiffs assert that defendants had a duty to warn the public at large and, in particular, minor users of their platforms and their parents, of the addictive nature of the platforms. They thus claim that defendants could have utilized reasonable alternate designs, including: eliminating "autoplay" features or creating a "beginning and end to a user's '[f]eed'" to prevent a user from being able to "infinite[ly]" scroll; providing options for users to self-limit time used on a platform; providing effective parental controls; utilizing session time notifications or otherwise removing push notifications that lure the user to re-engage with the application; and "[r]emoving barriers to the deactivation and deletion of accounts." These allegations do not seek to hold defendants liable for any third-party content; rather, they seek to hold defendants liable for failing to provide basic safeguards to reasonably limit the addictive features of their social media platforms, particularly with respect to minor users….
The conduct at issue in this case is far from any editorial or publishing decision; defendants utilize functions, such as machine learning algorithms, to push specific content on specific individuals based upon what is most apt to keep those specific users on the platform. Some receive cooking videos or videos of puppies, while others receive white nationalist vitriol, each group entirely ignorant of the content foisted upon the other. Such conduct does not "maintain the robust nature of Internet communication" or "preserve the vibrant and competitive free market that presently exists for the Internet" contemplated by the protections of immunity but, rather, only serves to further silo, divide and isolate end users by force-feeding them specific, curated content designed to maximize engagement.
The majority concludes, based upon Moody, that even if plaintiffs seek to hold defendants liable for their own first-party content, such conduct is protected by the First Amendment. We disagree…. Government-imposed content moderation laws that specifically prohibit social media companies from exercising their right to engage in content moderation are a far cry from private citizens seeking to hold private actors responsible for their defective products in tort.
Such a vast expansion of First Amendment jurisprudence cannot be overstated. Taken to its furthest extent, the majority essentially concludes that every defendant would be immune from all state law tort claims involving speech or expressive activity. If the majority is correct, there could never be state tort liability for failing to warn of the potential risks associated with a product, for insisting upon a warning would be state-compelled speech in violation of the First Amendment. Nor could there ever be liability for failing to obtain a patient's informed consent in a medical malpractice action—for the defendant physician's explanation of the procedure, its alternatives, and the reasonably foreseeable risks and benefits of each proposed course of action—necessarily implicates the defendant physician's First Amendment rights. That simply cannot be the case.
My sense is that the majority has it right: The plaintiffs' theory can't be just that social media has addictive features that cause harm apart from its content (e.g., because it tempts people away from sleeping). Rather, it's that social media has features that help promote harmful but constitutionally protected speech, whether it's "white nationalist vitriol" or more broadly speech that "silo[s], divide[s] and isolate[s] end users." Moreover, holding a company liable for providing "content designed to maximize engagement" is indeed holding it liable as a publisher of that third-party content, even when the content is "specific" and "curated." But in any event, this struck me as an interesting and important case, both because it's an appellate precedent (albeit from an intermediate state appellate court) and because of the 3-2 split among the judges.
UPDATE: See also this detailed post from Prof. Eric Goldman (Technology & Marketing Law Blog).