Next month, the Supreme Court will hear Gonzalez v. Google. This lawsuit seeks to strip Google of its protections from civil liability for third-party content under Section 230 of the Communications Decency Act.
The plaintiff is Reynaldo Gonzalez, whose daughter was murdered in a 2015 terrorist attack. Gonzalez argues that YouTube, a Google subsidiary, should face liability because its algorithms recommended terrorist content posted on the platform, content that Gonzalez says aided the Islamic State. The case asks whether Section 230 applies to content recommended by YouTube's algorithms. Defending itself, the law, and the utility of algorithms online, Google on Thursday submitted a brief to the Supreme Court.
Gonzalez argues that YouTube's recommendation algorithms make YouTube, to some degree, a publisher of third-party content and, thus, unprotected by Section 230. In its brief, Google rejoins that YouTube's recommendations fall squarely within the text and intent of the provision.
"If Section 230 does not apply to how YouTube organizes third-party videos, petitioners and the government have no coherent theory that would save search recommendations and other basic software tools that organize an otherwise unnavigable flood of websites, videos, comments, messages, product listings, files, and other information," Google wrote in its brief.
Virtually any online service that hosts third-party content relies on algorithms to sort and filter that content, Google noted. Without algorithmic sorting, its own search engine would be an "unordered, spam-filled" mess. Further, if recommending content vaporizes a platform's Section 230 protections, the company argued, such platforms as Amazon, Lexis (which publishes law review articles), and TripAdvisor (which hosts business reviews) would be legally liable for all their user content as well.
Both Democrats and Republicans would like to functionally repeal Section 230 in order to bring Big Tech under the thumb of Congress. In an op-ed in The Wall Street Journal, President Joe Biden on Wednesday advocated "fundamentally reform[ing] Section 230…which protects tech companies from legal responsibility for content posted on their sites."
However, contrary to the implications of such aggrieved politicians as Biden and Sen. Josh Hawley (R–Mo.), by protecting platforms from liability, Section 230 also protects freedom of speech. Without Section 230, Google, Facebook, and numerous other platforms would have to severely restrict what they allow users to publish. A world in which Facebook could be held liable for user posts is one in which Facebook would not want users posting very much at all. Everything from restaurant reviews to criticism of government officials could be seen by platforms as too legally risky to host.
"Unmeritorious quick removals are common in online copyright law, because the [user-generated content] copyright safe harbor is less favorable to defendants than Section 230 is," wrote Eric Goldman, a law professor at Santa Clara University. "In contrast, Internet services routinely stand up to noncopyright legal threats, legal demands, and cease-and-desist letters targeting UGC—because Section 230 provides them legal certainty at a relatively low cost."
Opponents of Section 230 misunderstand the workings and utility of algorithms and drastically underestimate the work the law does to encourage free expression and debate. Without Section 230 protections, many platforms would be forced to either permit all content (to avoid any appearance of behaving like a publisher) or enforce draconian content-moderation policies to ensure no objectionable content could be used against them in court. In short, some platforms would become hateful, filthy hellscapes, and others might be too boring to retain users.
Google's brief crystallized the stakes best: "This Court should decline to adopt novel and untested theories that risk transforming today's internet into a forced choice between overly curated mainstream sites or fringe sites flooded with objectionable content."