First Amendment

Netflix Says Algorithm Is Protected by First Amendment in 13 Reasons Why Suicide Lawsuit

Is a required content warning or algorithm change a violation of the First Amendment?


Netflix is being sued by the grieving father of a teenager who alleges that the Netflix show 13 Reasons Why inspired his daughter's suicide. Netflix counters that the suit infringes on its freedom of speech, arguing that its algorithmic content recommendations are protected by the First Amendment.

The streaming service has filed a motion to strike the lawsuit by invoking California's anti-SLAPP statute, which permits the early dismissal of complaints that target protected speech. The lawsuit, Herndon v. Netflix, was filed in California's Santa Clara County Superior Court earlier this year.

The plaintiffs in the case are John Herndon, his sons, and the estate of Herndon's daughter, Isabella, whose suicide was allegedly precipitated by watching 13 Reasons Why. They allege that vulnerable viewers were not adequately shielded from or warned about the show's highly graphic and suggestive content. The lawsuit points to "Netflix's failure to adequately warn of its Show's, i.e. its product's, dangerous features."

Netflix argues that such accusations endanger expressive rights. "Creators obliged to shield certain viewers from expressive works depicting suicide would inevitably censor themselves to avoid the threat of liability [which] would dampen the vigor and limit the variety of public debate," Netflix states in its legal filing.

Ari Cohn, free speech counsel at TechFreedom, says that legal precedent seems to be on Netflix's side. "For ages, plaintiffs have attempted to sue over harms allegedly caused by movies, books, newspaper or magazine articles, television shows, video games, and other media—but they nearly always fail," he explains. "In case after case, courts have held that the ideas, words, and depictions within a product are not subject to products liability law."

Netflix also argues that content-based restrictions would expose an enormous amount of creative works to the risk of censorship, pointing to the trope of teen suicide that stretches from Romeo and Juliet to Dead Poets Society to 13 Reasons Why.

"The point that Netflix is making is a good one, and it isn't limited to suicides," says Eric Goldman, a law professor at Santa Clara University and co-director of the High Tech Law Institute. "In fact, one could imagine any number of different vulnerabilities that members of an audience have," he says. "Keep going down the list, and you could imagine having thousands of warnings. This has no natural limit to it."

The lawsuit against Netflix also argues that the algorithm the company uses to recommend shows and movies is at fault for encouraging suicide by foisting 13 Reasons Why upon susceptible teens.

The company should be held liable, the lawsuit maintains, because of "Netflix's use of its trove of individualized data about its users to specifically target vulnerable children and manipulate them into watching content that was deeply harmful for them—despite dire warning about the likely and foreseeable consequences to such children."

Netflix argues that its algorithm is no different from any other editorial choice and is therefore protected by the First Amendment. Highlighting specific shows for Netflix users is no different from "the guidebook writer's judgment about which attractions to mention and how to display them, and Matt Drudge's judgments about which stories to link and how prominently to feature them," the company argues.

Netflix's "recommendations fall within the well-recognized right to exercise 'editorial control and judgement,'" the company states. "The fact that the recommendations 'may be produced algorithmically' makes no difference to the analysis. After all, the algorithms themselves were written by human beings."

Free speech lawyer Cohn agrees. "While the Supreme Court has not squarely decided the matter, trial and appellate courts have held that computer code is 'speech' for First Amendment purposes, because the code is simply another language chosen to express ideas," he says. "In Netflix's case, its algorithms are plainly expressive: They serve to communicate to viewers that they might enjoy certain content based on their activity and preferences. Such expression is protected if automated just as it would be if done manually."

13 Reasons Why has been a big hit for Netflix since it premiered in 2017. The four seasons of the series depict the lead-up to and fallout from a high school student's suicide, told through a box of cassette tapes that chronicle her reasons for taking her life.

Cited in the Herndon family's lawsuit against Netflix is a 2019 study from the Journal of the American Academy of Child and Adolescent Psychiatry titled "Association Between the Release of Netflix's 13 Reasons Why and Suicide Rates in the United States." The study claims to have identified an unprecedented 28.9 percent jump in suicides among American teens in the month following the show's premiere. After the study made headlines, Reason Senior Editor Robby Soave questioned its findings. "The study is bunk," Soave reported. "It does not even begin to demonstrate that 13 Reasons Why is the cause of the phenomenon the researchers are documenting…. Researchers have no proof that the teenagers who committed suicide over the observed time period had watched the show, or that they heard about the show, or that their deaths had anything to do with the show—this is all purely theoretical."

Court hearings in Herndon v. Netflix are scheduled for November 16. Interested parties on both sides of the debate over technology and free speech will be watching closely.

"We live in an era of fear of technology," notes law professor Goldman. "This [lawsuit] is just another manifestation of this very long arc of the 'computers as killers' narrative."