Social Media

Supreme Court Swats Down Attempts To Hold Twitter, Google Financially Liable for Terrorism

The narrow rulings concluded the platforms aren’t responsible for bad people using their communication services.


Today, the Supreme Court ruled in favor of Twitter and Google in two separate cases that attempted to hold the sites financially liable under federal law for terrorists who used their platforms (and algorithms) to recruit members and then launch deadly attacks.

At the heart of the two cases, Twitter v. Taamneh and Gonzalez v. Google, was the question of whether the two websites had essentially "aided and abetted" Islamic State group terrorists by failing to adequately moderate the content on their platforms. Each case involved Islamic State group terrorists launching deadly attacks (one in France and one in Turkey), with relatives of the victims seeking to hold social media platforms partly financially responsible for the platforms' use as recruiting tools. (Full disclosure: Reason Foundation, the nonprofit that publishes Reason, submitted an amicus brief in support of Google in Gonzalez v. Google.)

While these cases were widely expected to test the limits of Section 230 of the Communications Decency Act, the federal law that generally gives online platforms immunity against liability for content posted by third parties, that's not how they shook out. Instead, the justices ruled more narrowly that the plaintiffs had failed to state a claim upon which the courts could grant relief under the relevant law here, Section 2333 of the federal Anti-Terrorism Act. The unanimous ruling in Twitter v. Taamneh, written by Justice Clarence Thomas, determined that Twitter did not purposefully associate itself with the Islamic State group and that the plaintiffs did not establish the sort of "aiding and abetting" required under the Anti-Terrorism Act:

In this case, the failure to allege that the platforms here do more than transmit information by billions of people—most of whom use the platforms for interactions that once took place via mail, on the phone, or in public areas—is insufficient to state a claim that defendants knowingly gave substantial assistance and thereby aided and abetted ISIS' acts. A contrary conclusion would effectively hold any sort of communications provider liable for any sort of wrongdoing merely for knowing that the wrongdoers were using its services and failing to stop them. That would run roughshod over the typical limits on tort liability and unmoor aiding and abetting from culpability.

Gonzalez v. Google was similarly disposed of in a short per curiam decision noting that the same failure to state a claim applies. There were no dissents. The court declined to consider any Section 230 questions in Gonzalez because it didn't need to—having concluded that the platforms didn't actually "assist" terrorists in the first place, the Section 230 protections never came into play.

Justice Ketanji Brown Jackson wrote a short concurrence in the Twitter decision, seemingly to underscore how narrow the ruling is, resolving two specific cases on their specific facts: "Other cases presenting different allegations and different records may lead to different conclusions."

And so, while this is an important win for online free speech—online censorship would likely have increased dramatically if the court had ruled against the platforms—it's a very narrow one. It does, however, illustrate that these justices grasp that online moderation is not an easy task.