The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
The "Just Like Everyone Else" Test: Providing Off-the-Shelf Services Isn't Tortious Aiding & Abetting
Today's Twitter, Inc. v. Taamneh involved (to oversimplify slightly) a lawsuit against Twitter based on Twitter's alleged role in helping ISIS by providing it publishing services, and by algorithmically recommending some of ISIS's videos. The lawsuit was brought under the federal Antiterrorism Act, but the Act applied fairly traditional aiding-and-abetting principles, borrowed from the criminal law and tort law. (The tort law and criminal law principles aren't always identical, but they seemed to be treated similarly in this case.)
No liability, the Court held, chiefly because Twitter (and others) merely provided an off-the-shelf service, which treated ISIS no better than any other user. Here's an excerpt, with the references to this arms-length treatment emphasized:
To start, recall the basic ways that defendants as a group allegedly helped ISIS. First, ISIS was active on defendants' social-media platforms, which are generally available to the internet-using public with little to no front-end screening by defendants. In other words, ISIS was able to upload content to the platforms and connect with third parties, just like everyone else.
Second, defendants' recommendation algorithms matched ISIS-related content to users most likely to be interested in that content—again, just like any other content. And, third, defendants allegedly knew that ISIS was uploading this content to such effect, but took insufficient steps to ensure that ISIS supporters and ISIS-related content were removed from their platforms. Notably, plaintiffs never allege that ISIS used defendants' platforms to plan or coordinate the Reina attack; in fact, they do not allege that Masharipov himself ever used Facebook, YouTube, or Twitter.
None of those allegations suggest that defendants culpably "associate[d themselves] with" the Reina attack, "participate[d] in it as something that [they] wishe[d] to bring about," or sought "by [their] action to make it succeed." In part, that is because the only affirmative "conduct" defendants allegedly undertook was creating their platforms and setting up their algorithms to display content relevant to user inputs and user history. Plaintiffs never allege that, after defendants established their platforms, they gave ISIS any special treatment or words of encouragement.
Nor is there reason to think that defendants selected or took any action at all with respect to ISIS' content (except, perhaps, blocking some of it). {Plaintiffs concede that defendants attempted to remove at least some ISIS-sponsored accounts and content after they were brought to their attention.} Indeed, there is not even reason to think that defendants carefully screened any content before allowing users to upload it onto their platforms. If anything, the opposite is true: By plaintiffs' own allegations, these platforms appear to transmit most content without inspecting it.
The mere creation of those platforms, however, is not culpable. To be sure, it might be that bad actors like ISIS are able to use platforms like defendants' for illegal—and sometimes terrible—ends. But the same could be said of cell phones, email, or the internet generally. Yet, we generally do not think that internet or cell service providers incur culpability merely for providing their services to the public writ large. Nor do we think that such providers would normally be described as aiding and abetting, for example, illegal drug deals brokered over cell phones—even if the provider's conference-call or video-call features made the sale easier.
To be sure, plaintiffs assert that defendants' "recommendation" algorithms go beyond passive aid and constitute active, substantial assistance. We disagree. By plaintiffs' own telling, their claim is based on defendants' "provision of the infrastructure which provides material support to ISIS." Viewed properly, defendants' "recommendation" algorithms are merely part of that infrastructure. All the content on their platforms is filtered through these algorithms, which allegedly sort the content by information and inputs provided by users and found in the content itself. As presented here, the algorithms appear agnostic as to the nature of the content, matching any content (including ISIS' content) with any user who is more likely to view that content. The fact that these algorithms matched some ISIS content with some users thus does not convert defendants' passive assistance into active abetting. Once the platform and sorting-tool algorithms were up and running, defendants at most allegedly stood back and watched; they are not alleged to have taken any further action with respect to ISIS.
At bottom, then, the claim here rests less on affirmative misconduct and more on an alleged failure to stop ISIS from using these platforms. But, as noted above, both tort and criminal law have long been leery of imposing aiding-and-abetting liability for mere passive nonfeasance. To show that defendants' failure to stop ISIS from using these platforms is somehow culpable with respect to the Reina attack, a strong showing of assistance and scienter would thus be required. Plaintiffs have not made that showing.
First, the relationship between defendants and the Reina attack is highly attenuated. As noted above, defendants' platforms are global in scale and allow hundreds of millions (or billions) of people to upload vast quantities of information on a daily basis. Yet, there are no allegations that defendants treated ISIS any differently from anyone else. Rather, defendants' relationship with ISIS and its supporters appears to have been the same as their relationship with their billion-plus other users: arm's length, passive, and largely indifferent. And their relationship with the Reina attack is even further removed, given the lack of allegations connecting the Reina attack with ISIS' use of these platforms.
Second, because of the distance between defendants' acts (or failures to act) and the Reina attack, plaintiffs would need some other very good reason to think that defendants were consciously trying to help or otherwise "participate in" the Reina attack. But they have offered no such reason, let alone a good one. Again, plaintiffs point to no act of encouraging, soliciting, or advising the commission of the Reina attack that would normally support an aiding-and-abetting claim. Rather, they essentially portray defendants as bystanders, watching passively as ISIS carried out its nefarious schemes. Such allegations do not state a claim for culpable assistance or participation in the Reina attack.
Because plaintiffs' complaint rests so heavily on defendants' failure to act, their claims might have more purchase if they could identify some independent duty in tort that would have required defendants to remove ISIS' content. But plaintiffs identify no duty that would require defendants or other communication-providing services to terminate customers after discovering that the customers were using the service for illicit ends. {Plaintiffs have not presented any case holding such a company liable for merely failing to block such criminals despite knowing that they used the company's services. Rather, when legislatures have wanted to impose a duty to remove content on these types of entities, they have apparently done so by statute.} To be sure, there may be situations where some such duty exists, and we need not resolve the issue today. Even if there were such a duty here, it would not transform defendants' distant inaction into knowing and substantial assistance that could establish aiding and abetting the Reina attack….
To be sure, we cannot rule out the possibility that some set of allegations involving aid to a known terrorist group would justify holding a secondary defendant liable for all of the group's actions or perhaps some definable subset of terrorist acts. There may be, for example, situations where the provider of routine services does so in an unusual way or provides such dangerous wares that selling those goods to a terrorist group could constitute aiding and abetting a foreseeable terror attack. Cf. Direct Sales Co. v. United States (1943) (registered morphine distributor could be liable as a coconspirator of an illicit operation to which it mailed morphine far in excess of normal amounts). Or, if a platform consciously and selectively chose to promote content provided by a particular terrorist group, perhaps it could be said to have culpably assisted the terrorist group. Cf. Passaic Daily News v. Blair (N.J. 1973) (publishing employment advertisements that discriminate on the basis of sex could aid and abet the discrimination). [The newspaper in that case had itself created separate "male," "female," and "male-female" "help wanted" columns. -EV]
In those cases, the defendants would arguably have offered aid that is more direct, active, and substantial than what we review here; in such cases, plaintiffs might be able to establish liability with a lesser showing of scienter. But we need not consider every iteration on this theme. In this case, it is enough that there is no allegation that the platforms here do more than transmit information by billions of people, most of whom use the platforms for interactions that once took place via mail, on the phone, or in public areas. The fact that some bad actors took advantage of these platforms is insufficient to state a claim that defendants knowingly gave substantial assistance and thereby aided and abetted those wrongdoers' acts. And that is particularly true because a contrary holding would effectively hold any sort of communication provider liable for any sort of wrongdoing merely for knowing that the wrongdoers were using its services and failing to stop them. That conclusion would run roughshod over the typical limits on tort liability and take aiding and abetting far beyond its essential culpability moorings….
The Ninth Circuit thus erred in focusing (as it did) primarily on the value of defendants' platforms to ISIS, rather than whether defendants culpably associated themselves with ISIS' actions. For example, when applying the second factor [from an earlier precedent] (the amount and kind of assistance), the Ninth Circuit should have considered that defendants' platforms and content-sorting algorithms were generally available to the internet-using public. That focus reveals that ISIS' ability to benefit from these platforms was merely incidental to defendants' services and general business models; it was not attributable to any culpable conduct of defendants directed toward ISIS. And, when considering the fourth and fifth factors (the defendants' relationship to ISIS and the defendants' state of mind), the Ninth Circuit should have given much greater weight to defendants' arm's-length relationship with ISIS—which was essentially no different from their relationship with their millions or billions of other users—and their undisputed lack of intent to support ISIS.
Taken as a whole, the Ninth Circuit's analytic approach thus elided the fundamental question of aiding-and-abetting liability: Did defendants consciously, voluntarily, and culpably participate in or support the relevant wrongdoing? As we have explained above, the answer in this case is no. Plaintiffs allege only that defendants supplied generally available virtual platforms that ISIS made use of, and that defendants failed to stop ISIS despite knowing it was using those platforms. Given the lack of nexus between that assistance and the Reina attack, the lack of any defendant intending to assist ISIS, and the lack of any sort of affirmative and culpable misconduct that would aid ISIS, plaintiffs' claims fall far short of plausibly alleging that defendants aided and abetted the Reina attack….
The generic language of the statute, which covers anyone "who aids and abets, by knowingly providing substantial assistance, or who conspires with the person who committed such an act of international terrorism," could have been read more broadly (since the provider of off-the-shelf services may know that the services are substantially helping a criminal, alongside all the other noncriminal users). But the Court made clear that it shouldn't be read that broadly; that seems quite correct to me.
I was pleased to see the general rule about services not being liable for users' conduct stated without any suggestion that common carrier status or Section 230 mattered.
Funny how deplatforming Republicans for disagreeing with the left is fine, relentlessly pursued and free speech but promoting ISIS and pedophiles is totally outside their control. Revealed preferences trumped up arguments of legal fiction.
Deplatforming ISIS is legally fine absent a contract forbidding the platform from doing so.
As a matter of ethics, there is quite a difference between a Holocaust education web site hosting an online discussion forum and the "free speech wing of the free speech party".
You mean the knaves, charlatans, rascals, rogues, scoundrels, and other assorted liars who were trying to convince the country that the 2020 election was stolen? Because IIRC those were the lion’s share of the people who got deplatformed.
How does their treatment compare to the treatment of the "knaves, charlatans and other assorted liars" who pushed the notion that Trump colluded with the Russians®™ to steal the 2016 election?
"BUT WHAT ABOOOOOOUUUUUUUTTTTTTT!!!!!!"
They were treated differently because their situations were not the same, and if you absolutely, positively must what about, can you please at least try to find something that's actually on point?
Suppose Trump had "colluded" with the Russians. That wouldn't have made it a stolen election; that would have made it an election in which the voters were deceived but nevertheless actually did vote for Trump. An election won by lies, and an election won by stuffed ballot boxes, are not the same thing. So right off the bat you're comparing apples to baseballs. (I guess they're both round.)
Also, there was no storming of the Capitol by people who thought Trump colluded with the Russians. That was entirely the doing of people who thought the 2020 election was stolen. So the two aren't alike in terms of the bad consequences they produced. You're now comparing apples to the ball that drops in Times Square on New Year's Eve; I guess they're both round.
I'd guess your brain is round and about the size of a pea.
The knaves, charlatans and other assorted liars like you who pushed the notion that Trump colluded with the Russians to steal the 2016 election didn't have to protest anything since that bogus narrative was widely promoted by virtually the entirety of the media.
This is of course the exact opposite of the way that apparent election fraud in 2020 was treated.
I never said or believed Trump colluded with the Russians, so cut the "like you" crap. And the reason the media didn't take Trump's 2020 stolen-election narrative seriously is because the only people who took that narrative seriously were people with pea-sized brains. It's now come out that Trump himself didn't believe it.
There was, of course, no "apparent election fraud in 2020." Nobody has ever found any. In private, all the people telling gullible loons like you that there was fraud were admitting that there wasn't any.
I noticed the same thing.
A ruling holding that those who offer off-the-shelf goods and services are liable in tort for misconduct by customers would, in effect, require sellers to follow their own personal biases.
See, for example, a law holding gun sellers liable for crimes committed with the firearms that they sell, even if the purchaser passes all required state and federal background checks. Such a law would forbid sellers from selling guns if, by their own judgment, the purchaser is likely to use it in a crime, and if they have to use their own judgment, that judgment would be affected by personal bias.
While the law may sometimes tolerate proprietors of public establishments acting in accordance with their personal biases, it should not require them to do so.
And what if the purchaser says he plans on shooting up a school tomorrow? Still of the same opinion?
I think you're having a bit of trouble understanding that didn't happen in this case?
“Plaintiffs also allege that defendants have known that ISIS has used their platforms for years.”
“Next, plaintiffs have satisfied Halberstam’s first two elements by alleging both that ISIS committed a wrong and that defendants knew they were playing some sort of role in ISIS’ enterprise.”
Nope. The Complaint’s allegations, which the Court accepted as true, are that Twitter knew its platform was being used by ISIS to foment terrorism or at least recruit terrorists.
Selective quotation is selective! You’d fail a 1L class with that analysis. From the opinion and ~23 seconds of searching:
and
Dude, the issue I am addressing is knowledge. The Court said knowledge plus aid is not enough. That's the whole point of the discussion. Your quote only reinforces this point.
And yet you keep trying to posit hypos that assume direct knowledge of explicit acts by identifiable individual humans, and treat them as exactly the same as this case. You keep asking "but what about this oversimplified hypo that is not this case?", as if you can't tell the difference.
You can read the Complaint as part of the appendix, here:
https://www.supremecourt.gov/DocketPDF/21/21-1496/247606/20221129134106982_21-1496%20ja.pdf
At Appendix pp. 88-91, there are many allegations that it was widely reported in major news outlets for about 5 years that ISIS uses Twitter, and in fact it's a major tool for ISIS. And there were Congressional hearings and statements by then-Secretary of State Hillary Clinton. It's too long to copy here, but here is one allegation:
On August 21, 2014, after ISIS tweeted out the graphic video showing the beheading of American James Foley, the Wall Street Journal warned that Twitter could no longer afford to be the “Wild West” of social media.
The notion that Twitter's management was unaware of ISIS's use of Twitter is laughable. Certainly, on a motion to dismiss, one must assume they were (as the Court indeed did).
To be clear, in your hypothetical, the shopkeeper wasn't just generally aware that the purchaser had a criminal history, he was told of specific and immediate criminal plans involving the product being sold at that moment.
The principle you seem to be suggesting here is that any organization generally known to have a criminal history would be subject to a mandatory boycott by all businesses. I think that goes quite a bit past current law, and could be seriously abused.
The point of my hypothetical was to show that even in such an extreme case, under this new decision, there is no aiding and abetting liability.
And the facts here are more than what you write. It's not just that there is a criminal organization, it's that there is widespread publicity that the criminal organization has been using this business's services to support its criminal activities. That's a step beyond.
It shows nothing of the sort, though, and the reason you had to add direct personal knowledge of a specific threatened act is that you know that.
The reason the defendants got off is that they run a largely automated service, and provide that service WITHOUT the sort of personal interaction you posit. You added in your hypothetical the exact element whose absence immunized them!
No. Read the case. The court might have said what you said, but it didn't. It said the aider and abettor has to do some act to associate himself/itself with the criminal.
So, you're confirming you know it didn't happen in this case?
What are you talking about? The statute says knowingly aiding and abetting. Plaintiffs plausibly alleged that Twitter knew its platform was being used by ISIS. The Court accepted that (as it should have).
Was there a letter from the head of ISIS to the Chairman of Twitter saying, "Dear Jack Dorsey, thanks for your platform, it's been such a help in my efforts to establish a world-wide caliphate, and kill as many infidels as I can along the way?" No, there was no such letter.
Well, thanks for finally conceding that the key element of your hypothetical, that the customer personally informed a human salesman of his criminal intent, wasn't actually present.
I can't speak for Michael but my answer is "yes".
You might be able to justify a mandatory-reporter law for threats to commit violent crimes but no further. Retailers are not trained police and have neither the skills nor the obligations to evaluate whether the alleged threats are credible or whether they rise to a level that justifies discrimination against the speaker. That's a job for the police.
Furthermore, your hypothetical purchaser has already confessed to being ready to commit murder. It would not be unreasonable to worry that the purchaser might become enraged and commit violence because of your refusal.
We apparently can't even compel our paid police to put themselves in harm's way. There is no legal or moral basis to compel a retailer to take greater risks than society requires of our police.
Remember the movie Borat when he wanted a gun to kill Jews?
How does SCOTUS know that these folk were treated "just like everyone else"?
There seems to be quite a lot of evidence that Twitter treats people differently according to what they have to say - how does one construct a model of how "everyone else" is treated from that? Other than that everyone is treated arbitrarily?
They had to go by the record established at the trial court level.
For the most part, they just went by the facts as alleged by the plaintiffs. The plaintiffs repeatedly said that ISIS was treated just like everyone else. In fact, the core of their complaint was that they thought ISIS should have been treated differently and wasn't.
But a bank that did that would be in trouble.
These cases were decided on motions to dismiss; there was no record established at the trial court level. It was only the allegations by the plaintiffs that the court was going by.
Seems like too narrow a reading of aiding and abetting. I always thought, if you know you will be helping a crime, that's enough.
Scenario. Fred operates a hardware store. He has an inventory of 20 crowbars he sells. A customer comes in, says he wants a crowbar so he can rob some houses tonight. Fred sells him a crowbar off the shelf, same as he would anyone else. Aiding and abetting liability?
I always thought yes. Now, it seems, no.
I think generally if the customer directly communicates to a human being that the purpose is criminal, you're on the hook regardless of what you sell. But did that happen here? Apparently not. They just provided a lawful, virtually entirely automated service, and ISIS used it. Might as well go after the mall for letting a terrorist ride the human conveyor belt to an attack.
AFAIK Twitter didn't know that it was aiding any crimes. Who told it that?
In the same scenario, I would have said the answer should be "no" - and now it more clearly is "no".
I've had concerns since that law was passed that "aiding and abetting" was too vague, overbroad and ripe for abuse. I'd rather see the law repealed but getting it narrowed is a step in the right direction.
A lot of law needs to be narrowed. For instance, the way in conspiracy law that, once the FBI's paid informant says you're party to a conspiracy, perfectly lawful acts can become 'predicate acts' if they can be construed to advance that conspiracy.
1. Under federal law, you cannot be convicted of conspiracy for "agreeing" to commit a crime with a government agent.
2. Most federal conspiracy laws don't require evidence of any overt acts.
Other than that, great point!
Ah, I see, relying on a technicality: There has to be at least one other member of the supposed conspiracy who isn't working for the government.
When (if) that ever happens, bring it up to the S.Ct. You may get a different result, because your hypo is a different case than this one.
FFS, it's 9-0 with Thomas writing the unanimous opinion. Take a step back from knee-jerk "ISIS bad, but Google/Twitter also bad, I hate'em both so just come up with $hit that suits your biases" pseudo-lawyering.
But what if the customer goes through the automated checkout lane and the sale is never actually seen by any human employee? Does Fred have a duty to flag all crowbar sales, just in case it's this guy?
Ding ding ding, we have a winner!
Well played, DavyC.
Much better analogy than Bored Lawyer's — but it still gives too much credit to the plaintiffs. The crooks weren't buying crowbars that they then used to break into people's houses with. The crooks were buying laundry detergent, which they could use to clean themselves up and make themselves more presentable, so that they'd be less likely to be stopped by the police while on their way to the hardware store to buy crowbars.
Okay, my analogy is getting silly, but the point is that the plaintiffs' allegations here are just that ISIS generally benefitted as an organization from their use of YouTube/Twitter/etc. — not that they used YouTube/Twitter to commit any actual crime, let alone the specific crimes that victimized the plaintiffs in these lawsuits.
As the Court noted, there's no evidence that the actual perpetrators of these crimes even saw any ISIS videos, let alone that the videos inspired them to commit their attacks.
Think of a power company. The law says that they have to provide power to anyone in their territory that wants it. They aren't allowed to exclude criminals or terrorists, even assuming they knew about it.
The point is that power is an aid, but there's not enough nexus between the aid and the bad act. Ditto for Youtube's case.
Yep. "The power company knows it's supplying electrons to Fred" does not equal "The power company knows Fred has an illegal cannabis grow room in his basement".
In this analogy, Plaintiffs want the power company to be liable for their kid smoking the demon weed, because they didn't scan everyone's power usage and detect/shut down Fred's illegal grow room. Alleging that the power company knew Fred was a customer - just like everyone else - is not enough.
Yet the power company can report suspicious use patterns.
Of course the power company could. And it's not liable in tort if it doesn't. Really, that's not a hard concept to grok.
"No liability, the Court held, chiefly because Twitter (and others) merely provided an off-the-shelf service, which treated ISIS no better than any other user. "
I look forward to the lawsuits against social media companies for supporting Antifa terrorism.
Because those companies do NOT treat Antifa "like they treat everyone else".
In 2016 the social media companies were more or less honest actors, letting everyone speak.
They've ended that, and therefore should be liable for all the speech they do allow
Note: There's no such thing as "hate speech", there's just speech that you hate.
And if "hate" is to be banned, then "hte" directed at TERFs, "Trumpists", evangelical Christians, etc must ALL be banned, too
Otherwise you're just a publisher, choosing what you will publish, and what you won't
And publishers are liable for what they publish
Not online, they're not.
Section 230 has a carveout for all federal criminal laws. The law that was being sued under allows civil damages for violations of federal criminal laws, as do many such laws. So in that context, Section 230 is not a help.
That's not what the 9th Circuit thinks (and the S.Ct. didn't reach the issue in the per curiam order in the Google case):
(p. 32, and citing additional cases in 1st and 2nd Circuits)
https://www.govinfo.gov/content/pkg/USCOURTS-ca9-18-17192/pdf/USCOURTS-ca9-18-17192-0.pdf
Of course, this was one of the cases that right wingers wanted to use to attack § 230; the Court ended up never even addressing the statute because it found that the plaintiffs didn't even state claims against which these websites needed immunity.
A locksmith merely provides off-the-shelf services to anyone who pays.
1. Does a locksmith have no responsibility to ensure that those requesting lock-opening services have a legitimate right to enter?
2. If a state wants to impose such a responsibility on locksmiths, must it say so directly rather than using a general aiding and abetting statute?