'Subway Surfing' Death Suit Against TikTok, Meta Further Chips Away at Section 230
Norma Nazario blames her son's death on social media algorithms.
"This case illustrates how the Section 230 precedent is fading, as courts keep chipping away at its edges to reach counterintuitive conclusions that should be clearly covered by Section 230," writes law professor and First Amendment expert Eric Goldman on his Technology and Marketing Law Blog.
The case in question—Nazario v. Bytedance Ltd.—involves a tragedy turned into a cudgel against tech companies and free speech.
It was brought by Norma Nazario, a woman whose son died while "subway surfing"—that is, climbing on top of a moving subway train. She argues that her son, 15-year-old Zackery, and his girlfriend only did such a reckless thing because the boy "had become addicted to" TikTok and Instagram, and these apps had encouraged him to hop atop a subway car by showing him subway surfing videos.
Nazario is suing TikTok, its parent company (ByteDance), Instagram parent company Meta, the Metropolitan Transportation Authority, and the New York City Transit Authority in a New York state court, with claims ranging from product liability and negligence to intentional infliction of emotional distress, unjust enrichment, and wrongful death. The social media defendants filed a motion to dismiss the case, which the court recently granted in part and denied in part.
Looking for Someone To Blame
Cases like these are now, sadly, common, and always somewhat difficult to discuss. I feel deep sympathy for Nazario and any parent who loses a child. And it's understandable that such parents might be eager for someone to blame.
But teenagers doing dangerous, reckless things is not some new, internet-created phenomenon. And the fact that a particular dangerous or reckless act might be showcased on social media doesn't mean the platforms caused a person's death or should be held liable for it. We don't blame bookstores, movie theaters, or streaming platforms if someone dies doing something they read about in a book or witnessed in a movie or TV show.
Alas, the involvement of tech companies and social media often overrides people's normal sense of how things should work.
We can generally recognize that if someone harms themselves doing a dangerous stunt they saw in a movie, the movie theater or streaming service where they saw that movie should not be punished, even if it promoted the movie to the person harmed. But throw around words like "algorithms" and some people—even judges—will act as if this changes everything.
Enter Section 230
Typically, online platforms—including TikTok and Instagram—are protected from much liability for content created by their users.
Section 230 of the Communications Decency Act says that interactive computer services and their users are legally responsible for their own speech, in the form of content that they create in whole or part, but not responsible for the speech of third parties. Sounds simple, right?
But trying to define—and whittle away at—this simple distinction has become a hallmark of lawsuits and legislation aimed at technology companies. Lawyers, activists, and the people they represent constantly argue that even when tech companies do not create offending or dangerous content, the companies should lose Section 230 protection for some reason involving product design, functionality, or the exercise of traditional editorial functions (such as content moderation).
The social media companies in this case argue that they are indeed protected by Section 230, since the subway surfing content viewed by Zackery Nazario was not created by TikTok or Meta but by third-party users of these platforms.
Nazario's suit, in turn, argues that Section 230 doesn't matter or doesn't apply here because this is not about TikTok's and Meta's roles as platforms for third-party speech. It's about their role as product manufacturers who have designed an unsafe product and used "algorithms [which] directed [Zackery]—unsolicited—to increasingly extreme and dangerous content."
Nazario's suit also argues that the tech platforms are co-creators of the subway surfing videos her son watched, since they provided users with tools to edit or modify their videos. And, as co-creators, they would not be protected by Section 230.
TikTok, Instagram Not Co-Creators, But…
The court did not entirely buy Nazario's arguments. It rejected the idea that TikTok and Instagram are co-creators of subway surfing videos just because they "make features available to users to personalize their content and make it more engaging."
It is TikTok and Instagram users, not the companies, that select "what features to add to their posts, if any" and "the social media defendants did not make any editorial decisions in the subway surfing content; the user, alone, personalizes their own posts," the court held. "Therefore, the social media defendants have not 'materially contributed' to the development of the content such that they may be considered co-creators."
So far, so good.
But the court was sympathetic to Nazario's argument that using algorithms changes things, despite "extensive precedent rejecting this workaround," as Goldman put it.
Here's what the court said:
Plaintiff's claims, therefore, are not based on the social media defendants' mere display of popular or user-solicited third-party content, but on their alleged active choice to inundate Zackery with content he did not seek involving dangerous "challenges." Plaintiff alleges that this content was purposefully fed to Zackery because of his age, as such content is popular with younger audiences and keeps them on the social media defendants' applications for longer, and not because of any user inputs that indicated he was interested in seeing such content. Thus, based on the allegations in the complaint, which must be accepted as true on a motion to dismiss, it is plausible that the social media defendants' role exceeded that of neutral assistance in promoting content, and constituted active identification of users who would be most impacted by the content.
Losing the Plot?
It's important to note that the court is not agreeing with Nazario's assertion that Meta and TikTok actively push dangerous content to teens to keep them on their platforms longer, nor that they pushed this content to Zackery without any "inputs that indicated he was interested in seeing such content." At this stage in the proceedings, the court isn't being asked to determine the merit of such claims, merely whether they're plausible. If they are, the court suggests, that could render a Section 230 defense moot.
But "the court has lost the jurisprudential plot here," writes Goldman:
So long as the content is third-party content, it doesn't matter whether the service "passively" displayed it or "actively" highlighted it–either choice is an editorial decision fully protected by Section 230. Thus, the court's purported distinction between 'neutral assistance' and 'active identification' is a false dichotomy. All content prioritization is, by design, intended to help content reach the audience that is most interested in it. That is the irreducible nature of editorial discretion, and no amount of synonym-substitution masks that fact.
To get around this, the court restyles the argument as being about product design and failure to warn: "plaintiff asserts that the social media defendants should not be permitted to actively target young users of its applications with dangerous 'challenges' before the user gives any indication that they are specifically interested in such content and without warning." As always, I ask: what is the product, and warn about what? If the answer to both questions is "third-party content," Section 230 should apply.
The court could still decide that Section 230 applies. But it first wants "discovery to illuminate how Zackery was directed to the subway surfing content."
Avoiding this kind of invasive and extensive process is one of the reasons Section 230 is so important. After all, much of the content protected by Section 230 is also protected by the First Amendment. But Section 230 gives courts—and defendants—a shortcut, so they're not stuck arguing each case on protracted First Amendment grounds.
Unfortunately, plaintiffs have been seeing some success in getting around Section 230 with nods to product design and algorithms.
"If plaintiffs can survive motions to dismiss just by picking the right words, then Section 230 already loses much of its value," suggests Goldman. "These pleadaround techniques especially seem to work in state trial courts, who are used to giving plaintiffs the benefit of discovery."
More Sex & Tech News
Abortion pill bans get the OK: Yes, states can ban abortion pills, a federal appeals court has ruled. The U.S. Food and Drug Administration's approval of the abortion pill mifepristone doesn't preempt state bans, the U.S. Court of Appeals for the 4th Circuit held in a July 15 ruling. The case concerned West Virginia's abortion ban, which makes abortion illegal at all stages of pregnancy and in almost all circumstances. The law—enacted in September 2022—means abortion undertaken with a pill (known as medication abortion) is as illegal as surgical abortion. "The question before us is whether certain federal standards regulating the distribution of the abortion drug mifepristone preempt the West Virginia law as it applies to medication abortions," wrote Judge J. Harvie Wilkinson III in the court's opinion. "The district court determined there was no preemption, and we now do the same."
Adult game crackdown on Steam: "Valve's famously permissive rules for what games are and are not allowed on Steam got a little less permissive this week, seemingly in response to outside pressure" from payment processors and banks, reports Ars Technica. New content guidelines suggest that "certain kinds of adult only content" are prohibited if they "may violate the rules and standards set forth by Steam's payment processors and related card networks and banks, or Internet network providers." The new rules come on the heels of the company removing "dozens of Steam games whose titles make reference to incest, along with a handful of sex games referencing 'slave' or 'prison' imagery," notes Ars Technica. (For more on how payment processors and credit card companies have been driving crackdowns on adult content online, check out my May 2022 Reason cover story "The New Campaign for a Sex-Free Internet.")
White House to target "woke AI"? Missouri's Republican attorney general isn't the only one intent on targeting artificial intelligence that doesn't conform to a conservative worldview. "White House officials are preparing an executive order targeting tech companies with what they see as 'woke' artificial-intelligence models," The Wall Street Journal reports.
Trapped in AI's uncanny valley: A creative writing professor describes being unsettled by ChatGPT's mirroring of her own voice:
In talking to me about poetry, ChatGPT adopted a tone I found oddly soothing. When I asked what was making me feel that way, it explained that it was mirroring me: my syntax, my vocabulary, even the "interior weather" of my poems. ("Interior weather" is a phrase I use a lot.) It was producing a fun-house double of me — a performance of human inquiry. I was soothed because I was talking to myself — only it was a version of myself that experienced no anxiety, pressure or self-doubt. The crisis this produces is hard to name, but it was unnerving.
[…] At some point, knowing that the tool was there began to interfere with my own thinking. If I asked it to research contemporary poetry for a class, it offered to write a syllabus. ("What's your vibe — are you hoping for a semester-long syllabus or just new poets to discover for yourself?") If I said yes — to see what it would come up with — the result was different from what I'd do, yet its version lodged unhelpfully in my mind. What happens when technology makes that process all too available?