Supreme Court Swats Down Attempts To Hold Twitter, Google Financially Liable for Terrorism
The narrow rulings concluded the platforms aren’t responsible for bad people using their communication services.

Today, the Supreme Court ruled in favor of Twitter and Google in two separate cases that attempted to hold the sites financially liable under federal law for terrorists who used their platforms (and algorithms) to recruit members and then launch deadly attacks.
At the heart of the two cases, Twitter v. Taamneh and Gonzalez v. Google, was a question of whether the two websites had essentially "aided and abetted" Islamic State group terrorists by failing to adequately moderate the content on their platforms. Each case involved Islamic State group terrorists launching deadly attacks (one in France and one in Turkey) and relatives attempting to lay part of the financial responsibility on social media platforms for their use as recruiting tools. (Full disclosure: Reason Foundation, the nonprofit that publishes Reason, submitted an amicus brief in support of Google in Gonzalez v. Google.)
While these cases might have prompted a debate about the limits of Section 230 of the Communications Decency Act, the federal law that generally gives online platforms immunity against liability for content posted by third parties, that's not how they shook out. Instead, the justices more narrowly ruled that the plaintiffs had failed to state a claim upon which the courts could grant relief under the relevant law here, Section 2333 of the federal Anti-Terrorism Act. The unanimous ruling in Twitter v. Taamneh, written by Justice Clarence Thomas, determined that Twitter did not purposefully associate itself with the Islamic State group and that the plaintiffs did not prove the sort of "aiding and abetting" necessary under the Anti-Terrorism Act:
In this case, the failure to allege that the platforms here do more than transmit information by billions of people—most of whom use the platforms for interactions that once took place via mail, on the phone, or in public areas—is insufficient to state a claim that defendants knowingly gave substantial assistance and thereby aided and abetted ISIS' acts. A contrary conclusion would effectively hold any sort of communications provider liable for any sort of wrongdoing merely for knowing that the wrongdoers were using its services and failing to stop them. That would run roughshod over the typical limits on tort liability and unmoor aiding and abetting from culpability.
Gonzalez v. Google was similarly disposed of in a short per curiam decision noting that the same failure to state a claim applies. There were no dissents. The court is declining to consider any sort of Section 230 concerns in Gonzalez because it doesn't need to—it concluded that the platforms didn't actually "assist" terrorists in the first place, so the Section 230 protections aren't relevant.
Justice Ketanji Brown Jackson wrote a short concurrence in the Twitter decision, seemingly to point out that this ruling is very narrow, focusing on two very specific cases with very specific facts: "Other cases presenting different allegations and different records may lead to different conclusions."
And so, while this is an important win for online free speech—online censorship would likely have increased dramatically if the court had ruled against the platforms—it's a very narrow one. It does, however, illustrate that these justices grasp that online moderation is not an easy task.
There should be no duty to moderate anything aside from kiddie porn and true criminal threats.
Are there other types of content that Internet platforms should be required to remove?
There should be neither a duty to moderate, nor a duty to not-moderate.
Section 230 remains ‘entirely correct’ insofar as it should be near-impossible to sue an open-to-the-public website or internet service for the actions of its users. Especially when the users are non-paying & there is no contractual relationship between them and the company being sued.
Kudos! Agreed! In spades!
And, except for the type of thing Ejercito lists (i.e. kiddie porn), the general expectation should be that a private, advertising-supported social media corporation will moderate its content in a fashion that pleases its stockholders and advertisers.
As a separate issue, we should have laws that constrain government employees from meddling in private social media sites’ content. The focus should be on constraining the government, though, not on constraining or blaming the private company.
(Which isn't to say nobody should ever criticize a private company or its management, as people have criticized Elon Musk for his handling of online free speech. Go ahead and criticize, just don't make laws requiring him to run Twitter the way you want it run. You didn't build that, and you don't own it.)
The tougher problem to solve is constraining elected officials from pressuring social media sites. It’s hard to do so without violating the politicians’ right of free speech.
I am wary of laws that purport to constrain the legal ability of social media companies to moderate content.
A law advertised as a way to make sure the "free speech wing of the free speech party" lives up to its name may very well be used to require an online discussion forum on a Holocaust education web site to host content denying the Holocaust or defaming the Judenvolk.
Not following. I am referring to legislation to generically constrain Federal employees and contractors from meddling in social media content moderation.
The Republicans had a bill out there, but I haven’t heard much about it lately.
Those days are long gone. These providers freely choose to suppress voices they disagree with and elevate those they agree with; that their agreement aligns with ISIS and pedophiles says something about them.
https://twitter.com/kylenabecker/status/1659396027500331008?t=5krdJPCnQG05rJlQCOF7oQ&s=19
BREAKING.
New Senate bill proposes new federal agency to police digital media.
It would have the authority to fine users for "misinformation" and "hate speech."
The Commission would have “a broad mandate to promote the public interest, with specific directives to protect consumers, promote competition, and assure the fairness and safety of algorithms on digital platforms, among other areas. To fulfill its mandate, the Commission would have the authority to promulgate rules, impose civil penalties, hold hearings, conduct investigations, and support research. It could also designate ‘systemically important digital platforms’ subject to additional oversight, regulation, and merger review.”
As @ReclaimtheNetHQ noted on Twitter, the bill would “empower a new federal agency to create a council that establishes ‘enforceable behavioral codes’ on social media platforms and AI. The council will include ‘disinformation’ experts.”
“The bill also has age verification requirements,” Reclaim the Net added.
“This is unconstitutional, also evil and stupid,” Constitutional attorney @pnjaban bluntly remarked.
The bill currently lacks specific safeguards to protect free speech and ensure that regulations implemented by the commission do not unduly infringe upon individuals’ constitutional rights. Instead, it relies upon government-appointed “experts” who would doubtless act to police state-approved narratives and policies. Without robust protections for free expression, there is a risk of chilling effects on online discourse, as well as stifling innovation and creativity.
I am becoming increasingly dismayed by the trend of using the civil tort system to attack people and entities in furtherance of social agendas that do not even pretend to allege damages by the defendants. Had Google or Twitter actually “aided and abetted” terrorists in the commission of crimes, they should have been investigated, charged and tried for those crimes. It’s almost impossible to understand how Google or Twitter could be held liable for damages for something that was not illegal, and for which they had no duty to plaintiffs to prevent! At the very least the judge in these cases should have tossed them out of court with prejudice and, possibly, fined the plaintiffs for wasting the court’s time. It’s also very hard to sympathize with the grieving families when they sue deep-pockets for millions of dollars “just to make a point.” I'd like to see them sue the actual terrorists; at least that would be entertaining.
They ruled on standing, not on the merits...
No, they ruled on the merits in these cases. A 'standing' ruling would put forward that the plaintiffs did not incur an actual injury & thus were not entitled to sue.
A finding of 'failure to prove' indicates that while the plaintiffs' injury is genuine, they failed to show that the entity they were suing engaged in sufficient actions to violate the law as-claimed.
/sigh
Justices did not make a determination on Section 230 in Gonzalez v. Google, a case surrounding allegations that YouTube was liable for recommending videos promoting violent militant Islam.
The court was able to sidestep the thorny legal question through a related case involving similar allegations against Twitter in Taamneh v. Twitter. In that case, the court ruled unanimously that such allegations against Big Tech companies could not be brought in the first place under a federal law called the Anti-Terrorism Act.
https://www.washingtonexaminer.com/policy/courts/supreme-court-leaves-section-230-in-place-in-google-decision
Literally a standing issue.
The District Court found against the plaintiff for failure to state a claim, the 9th Circuit found against the plaintiff on §230 grounds and on direct liability grounds, and the Supreme Court held that between the Twitter decision and the 9th Circuit decision, the plaintiff could fuck off.
Had the decision been that the plaintiff lacked standing, the SC would have said so. It didn't. A defence under §230 does not mean the plaintiff lacks standing, only that the defendant has a defence to the claim.
You continue to not understand clear language shrike.
So, the Supreme Court is now MAGA/ISIS.
Ultra-ISIS.
The District Court had dismissed this case for failure to state a claim, but the nutty 9th (61% reversal rate), which seems to be staffed by former Reason writers, reversed, and the S. Ct. had to step in and swat down the 9th again. Justice Thomas did warn media though, '....To be sure, we cannot rule out the possibility that some set of allegations involving aid to a known terrorist group would justify holding a secondary defendant liable for all of the group’s actions or perhaps some definable subset of terrorist acts. ...' and then proceeded to eviscerate the 9th Circuit's 'logic'. This case probably wouldn't make the cut as a 'final' test question for 1st year law students or those studying for the Bar Exam.
Nice how you slipped in a little jab at Reason, just to signal clique membership.
Hey, Scott-
This is all nice and well and good ... anybody gonna do something on FISA Court abuse and the Durham Report?
How about the evils of female genital mutilation?
Prominent Foe of Female Genital Mutilation Wins Prestigious Templeton Prize
Well, unlikely, since it conflicts with pushing child genital mutilation.
Worse yet, she's a Muslim. They get high priority on the victim totem.
They posted an article yesterday on the Durham report. Well worth the read overall (not ENB's links one, CJ's standalone article).
It's a shame that the big question raised by Gonzalez wasn't resolved: namely, whether algorithms that promote or recommend content to users are covered or not covered by Section 230.
My personal opinion, even though I am a big fan of Section 230, is that by any honest reading of 230 it does not cover such algorithms. I don't like it, but I just don't see where Section 230 covers that area.
What was the vote? Unanimous? Was there a majority and minority opinion? If so, which justices were in the minority, and who wrote the minority opinion? Reason tells us almost nothing. For a political reason? Useless Reason.
Gonzalez was unanimous, or at least there was no dissent. Not being a lawyer I had to look up what “per curiam” means in: “Gonzalez v. Google was similarly disposed of in a short per curiam decision…”
For the Twitter case, which the blog post linked to, Thomas wrote the opinion of the court and there were no dissents. Shackford implied this when he wrote, “was similarly disposed of”.
Maybe next time look up words you don’t know before accusing Reason of bad writing.
All of these social media lawsuits rely on an underlying assumption that readers are too stupid to sort out truth from propaganda. That is just plain offensive to my fragile libertarian mind. But I fear that they may be right.
Seems the plaintiffs could bring suit against paper and pencil makers also.