Supreme Court To Hear 2 Cases About Social Media Moderation and Liability for Terrorism
Does Section 230 shield YouTube from lawsuits about recommendations? Can Twitter be forced to pay damages over the terrorists it hasn’t banned?
The Supreme Court is back in session this morning and has agreed to hear nine new cases, two of which concern the extent to which online platforms can be held liable for terrorist recruitment efforts.
One of the cases will directly address the scope of the protections of Section 230 of the Communications Decency Act of 1996. In Gonzalez v. Google, Reynaldo Gonzalez sued the company under Section 2333 of the federal Anti-Terrorism Act, claiming that YouTube's algorithms helped the Islamic State group radicalize and recruit terrorists through videos and that this led to the death of his daughter, Nohemi, in an ISIS attack on a Parisian bistro in 2015. Gonzalez argues that Google (which owns YouTube) could be held liable for damages under the act.
Under Section 230, Google is generally not legally liable for content that third parties post on its platforms. Lower courts have so far ruled against Gonzalez. But the question presented in Gonzalez is whether Section 230 still protects Google when YouTube's algorithms make "targeted recommendations of information provided by another information content provider." In other words, when YouTube recommends videos to users who haven't actively searched for them, can Google be held liable for the content of those videos?
In the second case, Twitter v. Taamneh, the Court will consider, under the same section of the Anti-Terrorism Act, whether Twitter can be found to be aiding and abetting terrorists (and, as in the first case, be held liable for damages in civil court) because terrorists use its service, even though Twitter forbids such use and actively removes terrorists' accounts when it finds them. The plaintiffs are relatives of Nawras Alassaf, who was killed in a 2017 attack by an ISIS member at a nightclub in Istanbul, Turkey. According to Twitter's petition, the terrorist responsible for the attack wasn't even using its service. The plaintiffs insist that because other terrorists have been found using Twitter, the company can nevertheless be held liable for not taking enough "proactive" action to keep terrorists off the platform.
On its surface, Twitter v. Taamneh is not a Section 230 case, though Twitter did try, unsuccessfully, to invoke the law's protections.
The U.S. 9th Circuit Court of Appeals ruled on both the Google and Twitter cases in a single decision in January. The court held that Google was shielded by Section 230 but that Twitter could be held financially liable for terrorists' use of its platform, even though the actual attacker wasn't using Twitter. That decision linked the two cases together, which may explain why the Court took up both of them.
Some media coverage of the Supreme Court's decision today to take the cases notes that Justice Clarence Thomas has been vocal in arguing that the Court will eventually have to examine the power Big Tech companies have over who has access to their platforms and whether Section 230 places any limits on what content or users companies may remove.
These don't seem to be the types of cases Thomas had in mind, though the rulings may still be significant. Thomas and many other conservatives worry about the power Twitter, Facebook, and other social media companies have to "deplatform" users based on their viewpoints. But here the two companies face lawsuits because they apparently didn't censor enough; both cases are about whom Google and Twitter didn't deplatform.
Section 230 helps protect online speech. If Google loses, the probable outcome is a significant reduction in video recommendations on YouTube, making it harder for people to find videos related to what they're watching and harder for content creators to reach viewers. If Twitter loses, the likely result is even more aggressive account bans at the slightest suggestion of anything violent or inappropriate.