The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Rep. Devin Nunes's $250M Lawsuit Against Twitter Will Go Nowhere
The defamation (and negligence) claims against Twitter are blocked by 47 U.S.C. § 230.
You can read the Complaint, and see a story about it here. Nunes is accusing political consultant Liz Mair and some unknown commenters of libeling him—a fact-intensive question on which I have no opinion—but is also suing Twitter for "negligence," which in this context seems to mean negligently failing to stop people from using Twitter to libel him:
As the private operator of a public square, Twitter owed Nunes a duty to exercise ordinary and reasonable care in the operation of its platform, so as not to cause harm to Nunes. Twitter breached its duty of reasonable care. Twitter used its platform and allowed its platform to be used by others as a means to defame Nunes. Twitter failed to take action to enforce its Terms and Rules in the face of known abusive behavior and failed to reasonably monitor and police the platform to ensure that rampant abuse and defamation was not occurring.
This failure to prevent him from being defamed, he says, caused $250 million of actual damages to him.
But any such state negligence law claim is preempted by 47 U.S.C. § 230, the federal statute that immunizes online service providers from liability for things that their users post, however defamatory those things might be. That's true whether the claim is brought as a defamation claim or as a negligence claim; service providers don't have a duty "to reasonably monitor and police the platform" (which is why, for instance, I don't have a duty to reasonably monitor and police the comments here).
Nunes argues that Twitter is discriminating in various ways against conservative speakers; but that is irrelevant to a § 230 defense. The statute was passed precisely to make clear that online service providers are immune from liability for others' speech even when they make editing choices about which speech to allow:
Congress enacted § 230 to remove the disincentives to self-regulation created by the Stratton Oakmont, Inc. v. Prodigy Servs. Co. decision. Under that court's holding, computer service providers who regulated the dissemination of offensive material on their services risked subjecting themselves to liability, because such regulation cast the service provider in the role of a publisher. Fearing that the specter of liability would therefore deter service providers from blocking and screening offensive material, Congress enacted § 230's broad immunity "to remove disincentives for the development and utilization of blocking and filtering technologies that empower parents to restrict their children's access to objectionable or inappropriate online material." 47 U.S.C. § 230(b)(4). In line with this purpose, § 230 forbids the imposition of publisher liability on a service provider for the exercise of its editorial and self-regulatory functions.
And later cases have made clear that § 230's preemption of "publisher liability" extends to supposed negligent failure to police:
The decisions construing 47 U.S.C. § 230 have declined invitations to exempt the "negligent publishing" of offensive or unlawful content from the protections afforded by 47 U.S.C. § 230. For example, in Dart v. Craigslist, Inc., 665 F.Supp.2d 961, 967-68 (N.D.Ill.2009), the plaintiff, who served as the Sheriff of Cook County, sued Craigslist on the basis of allegations that the website's adult section constituted a public nuisance. After noting that "Sheriff Dart's complaint could be construed to allege 'negligent publishing,'" the district court rejected any contention that negligence sufficed to overcome the immunity granted by 47 U.S.C. § 230, noting that "[a] claim against an online service provider for negligently publishing harmful information created by its users treats the defendant as the 'publisher' of that information." As a result, the reported decisions construing 47 U.S.C. § 230 have treated the relevant statutory language as creating a broad exemption from liability even when the substantive facts underlying a plaintiff's claim are compelling. See, e.g., M.A., 809 F.Supp.2d 1041 (holding that immunity was available pursuant to 47 U.S.C. § 230 despite the fact that a minor was subjected to sex trafficking as the result of ads placed on defendant's website) and Barnes, 570 F.3d at 1098 (holding that immunity was available pursuant to 47 U.S.C. § 230 based upon a website's failure to remove defamatory postings despite the fact that the "case stems from a dangerous, cruel, and highly indecent use of the internet for the apparent purpose of revenge").
Section 230 does not extend, of course, to people's dissemination of their own speech (which is why the case against Mair and the other individual defendants isn't preempted). And it doesn't extend to platforms' creation and development of tortious speech (for instance, if a platform expressly invites users to post commercial ads that indicate discriminatory preferences, by asking them to fill in special fields designed expressly to indicate such preferences). But Twitter simply provides a way for people to post whatever they want; and, again, its choice to exclude some material based on political viewpoint or anything else doesn't make them a creator or developer of the material that they do allow.
I've occasionally heard arguments that Twitter ought to be regulated as a sort of public utility or common carrier, so that speech on Twitter is protected against restriction by Twitter. Whether Congress could constitutionally impose such a restriction on Twitter or other such platforms is an interesting question. But Congress hasn't done so; quite the contrary: It has provided Twitter and similar services with specific immunity even when they regulate speech on their services.
Nunes's negligence/defamation claim against Twitter, then, is a sure loser; more shortly on Nunes's "insulting words" claim.
Looking that up, I find the relevant language:
"(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
(2) Civil liability
No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1)."
I assume that Nunes' lawsuit is premised on the claim that Twitter's actions aren't in good faith.
This doesn't appear to matter insofar as the claim is that Twitter failed to remove content provided by others, as the protection there isn't premised on good faith. But he might have a handle to go after them in some manner on the basis of bad faith removals.
The deeper issue, of course, is that something has to be done about all this horrible defamation of such a good man as Devin Nunes. On this deeper level, beyond the technicalities, perhaps there is solace to be taken from the ruling of the trial court in our nation's leading criminal "parody" case, that "neither good faith nor truth is a defense." See the documentation at:
https://raphaelgolbtrial.wordpress.com/
And compare the Second Circuit's ruling that it's okay to deceitfully send out emails in the name of another, as long as one does this with the intent to convey an "idea," or as long as the emails are "puerile" enough; but that it's not okay to do this with the intent to cause even truthful "damage to a reputation." Clearly the assault against Nunes is aimed at damaging his reputation rather than at conveying an "idea," and clearly the "tweets" in question are not sufficiently "puerile" to merit constitutional protection. What we really need to do is find a good legal pretext to criminalize this stuff, just as was done in the above-linked case, so hopefully this lawsuit is just the beginning.
I'm not in any way claiming that Nunes is a good guy.
I would, however, stand by the claim that Twitter is not acting in good faith with many of their deplatforming decisions.
Devin Nunes is a man of great honor who has been repeatedly insulted and libeled on Twitter. As everyone knows, even if he had committed plagiarism or some other similar sort of misconduct, it would be a crime to portray him as "confessing" to doing so. There is absolutely no excuse for defaming him with such mockery.
What about pointing out plagiarism and "he refuses to confess"?
Hopefully one day we'll be able to criminalize that too, although it has less of a provocative impact and is thus more easily ignored. True, it still does tend to stir up unwanted controversy, which is not a point in its favor; but at least it does so without twisting words and crossing the line into rank criminality as defined in current, constitutionally valid law.
Devin Nunes is a man of great honor who has been repeatedly insulted and libeled on Twitter.
Nunes is an ignorant stupid man who shouldn't be surprised his ideas and positions are insulted and mocked openly and publicly.
Hope you have a good criminal defense attorney, the feebs are on their way.
Such a shocking statement! If this decent supporter of our national leader, a distinguished American, a man almost as good as Eugene himself, deserved any form of condemnation, it could have been presented in an appropriate manner. The tawdry methods used to attack him demonstrate that the attacks are untrue.
That is one perspective.
Another is that Rep. Nunes is a partisan dullard who studied cow-milking in school despite a silver-spooned background; is an aggressively substandard public official; and a serial spouter of dogmatic nonsense. That cow has this guy pegged.
One might be able to get away with spouting such a dastardly "perspective" in the third person, but if someone came onto this forum calling himself "Dev Nunes," and stated some such words as these:
"I must beg you all to bear with me in view of these inappropriate accusations; it is true that I should not have colluded in concealing certain documents from my colleagues, but please understand that my political career was at stake,"
that perpetrator would certainly eventually find himself in chains, and rightfully so, given that his statement would clearly be aimed at damaging a reputation rather than conveying an "idea." Surely it makes no sense in this day and age to argue that highly popular academic officials and Vatican representatives (or priests, or sports stars or singers...) cannot legally be subjected to such "parody," but that the rabble can subject public officials to unlimited degrees of torment.
So true. Twitter is clearly not acting in accordance with a good faith. Like Obama, Twitter is obviously a secret Moslem.
P.s. note that many of the offending tweets even claim to be writings of Devin's mother. This is an act of deceit that draws readers to read the defamatory contents of the tweets. While, as a technical matter, the actual name of Devin's mother doesn't appear to have been used to open the accounts, surely the relevant criminal statutes can be expanded just a little bit to reach this sort of lie?
What about the twitter handle @epsteinsmother?
Man, you sure know how to ruin a party.
Nothing in the petition about good or bad faith. I think the strategy is to get around 47 USC 230(c)(1) by treating Twitter as the defaming party, i.e., as an information content provider rather than an interactive computer service. You can see their tortured reasoning in footnote 1, and the introduction. Twitter is a content provider, you see, because it (1) censors people; (2) selectively bans conservatives; (3) hosts defamers; (4) ignores lawful complaints about offensive content; and (5) intentionally refuses to enforce its own rules.
More importantly, the allegation noted by Professor Volokh was negligent conduct. In order to get to bad faith, Nunes is going to have to allege more than negligent conduct.
In any event, I think you're misreading the entire statutory scheme. 47 USC 230(c)(2)(A) was intended to protect Twitter from precisely the sort of thing you're alleging--selective moderation of content. And 47 USC 230(c)(1) makes them a non-publisher--i.e., no defamation liability--regardless of (c)(2)(A).
Reading it, it appears to have been intended to protect Twitter from incomplete or imperfect moderation of content.
Twitter is not treated as the publisher of content others provide, and can't be held liable for a good faith decision (those words are right there in the statute) to remove content that's offensive.
The goal here was to avoid discouraging online forums from removing pornography, libel, violent threats, and things of that nature. But the protection was extended only to good faith efforts to do so.
A strong case can be made that Twitter routinely engages in bad faith moderation, deliberately leaving up offensive content that's been flagged if the source is left-wing, and deliberately removing non-offensive content that's right-wing under the pretext that it's offensive.
But, based on the language of the statute, I think Twitter is still safe when refraining from removing things, even things of the sort that it has promised to remove. Where they run into danger is wrongful removals that weren't done in good faith.
"Where they run into danger is wrongful removals that weren't done in good faith."
But Nunes still has to allege bad faith. Wouldn't you agree with me that merely negligent conduct doesn't amount to bad faith?
More fundamentally, how can Twitter have a duty to not negligently police defamation by its users, if Twitter doesn't have a duty to police defamation by its users... at all? Even assuming that Twitter, in bad faith, removes some defamers and not others, it still isn't a publisher (because of (c)(1)), and so can't be liable for defamation for others' content. But it can be held liable (in negligence) for others' content if it removes some defamatory material but not others?
Even more fundamentally, what did you have in mind for them being liable for removing some content but not others? If it removed Nunes's content, what's his cause of action against Twitter? Did he have a contract with them, entitling him to post on Twitter?
"Even assuming that Twitter, in bad faith, removes some defamers and not others,"
The problem would be that they remove non-defamers on the pretext that they're defamers. That's the bad faith in question.
I suppose if somebody's business is impacted by their wrongfully being booted off Twitter, they might have a case to make, and the statute wouldn't protect Twitter because it only protects good faith removal.
Set aside the statute. You don't need immunity if there isn't a valid cause of action. If Twitter maliciously targeted my business and kicked us off Twitter, what would I sue them for? Is it breach of contract? Wouldn't that implicate the Twitter limitation of liability, that users (like Nunes) agreed to?
Look, not that I'm terribly sympathetic to Nunes' action here, but you could argue that kicking somebody off your platform for being an [Insert random awful thing] is accusing them of being an [insert random awful thing], and if you knew they weren't, actionable as defamation.
And, yeah, Twitter, just like a lot of platforms, includes a denial of basically all possible forms of liability in their TOS. Sometimes that sort of thing is enforceable, sometimes it isn't.
Why would Twitter's decision to kick anyone off their platform possibly be defamation? It isn't a published statement about the person, other than they are kicked off the platform. If Twitter publishes "We kicked him off because he is an X" and X is false, they're just a normal alleged defamer, like the other defendant, and 47 USC 230 isn't implicated. But that's not what this lawsuit alleges.
Jesus, Brett, for once in your goddamned life would you read the fucking cases?
There's 20 years of caselaw on Section 230. There are differences in interpretations on the margins and some issues that need to be clarified (I brought a cert petition on one such issue last year in Hassell v. Yelp, which was denied by the Court), but all of the things that you are misreading into the statute have been clarified in the cases. All you have to do is read them:
1. Section 230, while motivated to protect Internet sites that did imperfect filtering from being sued for their imperfect filtering, also protects sites that do no filtering at all.
2. Section 230 applies to negligence actions.
3. There are two immunity provisions in Section 230, and only one of them contains the discussion of good faith filtering anyway.
4. The cases are almost unanimous that you can't sue Twitter in any manner that treats it as a publisher.
You would know this stuff if you would not think that you are a legal expert who can just read the text of the statute and magically know all the answers, and would instead seek out the works of actual legal experts (such as the many law review articles and treatises on Section 230, or the major cases).
I think that given the text of the statute it would be difficult to draw the lines you are attempting to draw. Given that the immunity extends to material that the provider or user "considers to be ... objectionable," the test isn't whether you or I, or the court, or society at large find some particular content objectionable, but whether the provider does. So, if they believe that conservative viewpoints are objectionable and remove tweets on that basis, it's hard to see how that could qualify as bad faith.
It's a valid question to ask whether this is a good situation given that the Internet is today's public square. I'm not sure where I land on that question; it is reasonable to conclude that all viewpoints do have equal access to the Internet, just not to individual platforms. But whatever the answer to that question is, the law today seems to allow Twitter to be as viewpoint discriminatory as they choose.
I'd say that "in good faith" language opens the door to a third party determination of whether they were reasonable to regard particular content as objectionable, rather than just leaving it entirely up to the site to make that determination in a totally arbitrary manner.
Setting aside all the other problems with what you said, "reasonable" and "good faith" are different legal concepts. Someone can act unreasonably but still in good faith.
Correct -- good faith is subjective, whereas reasonability is objective.
It's worth noting that Twitter itself is a bit inconsistent about whether it's an information content provider or an interactive computer service. Twitter basically wants all the benefits of each legal framework, and none of their downsides.
Nunes' complaint may be at cross-purposes with itself since he demands censorship of what "defames" him AND condemns censorship of conservatives. But I see method in the madness since "good faith" requirements could be vitiated by censorship politically skewed and since Twitter only admits bad faith by using immunity to hide identities of individual defamers.
I'd suggest that Nunes allege more: In my opinion, Twitter doesn't actually own tweets posted on its site; indeed, if anything, the immunities they enjoy admit otherwise. Twitter owns an underlying advertising platform given value by Twitter's allowance of the free communications channels created and owned by its users. I, myself, have considered suing Twitter (since its resident fascists have thrown me off twice).
Since Twitter has become a communications channel not only for private individuals but also for politicians and government agencies, that DOES raise the question whether Twitter has clothed its business with a public interest -- an interest made more obvious when one realizes the underlying communications channels DON'T belong to Twitter but to those creating them. I think Nunes' suit has legs if it develops this way, and in any event, being thrown out may not be that terrible. He IS a Congressman and DOES help write laws. Having the law used against him abusively may be what Nunes needs to get it changed to something else. And, if that's his objective, then I want to know where I can sign up to help him.
Try http://www.iamasocialist.com
It does not. He is asking for equal enforcement of Twitter's own rules. There are accounts on twitter which continue to post derogatory comments about Devin himself.
It is absolutely undeniable that Twitter is censoring with a distinct political bias.
Doing so makes them a publisher because they are now editors of political opinion.
As such they should no longer be considered a tech company.
We need more of these high profile court cases. They are long overdue
Here's a hint: things you read on twitter are not law.
Absolutely! And when Brave Devin finishes bringing down the evil empire that is Twitter he can devote himself to finally determining what happened to that quart of strawberries. That is, if someone is willing to lend him a couple balls.
He should sue for fraud instead. Twitter makes public claims that they will protect users from lies, slander, bullying, and harassment. It explicitly states in its terms of service that such content is not allowed.
Discovery would show plenty of evidence that Twitter is routinely made aware of such content posted against conservatives in general, and Nunes in specific, and chose not to act, in direct violation of its own service agreement with customers.
Twitter terms of service.
I don't see anything about protecting users from lies, slander, bullying, or harassment. I haven't looked very carefully. I did notice that Twitter limited its liability for any conduct or content of any third party on the service that is defamatory or offensive. In light of that, do you think there's still a basis for a fraud claim?
Better yet--make libel a federal crime and have them all arrested, no matter how many thousands are involved. Yesterday we had a discussion of Eugene's view (which I have wholeheartedly endorsed) that there's really nothing wrong with criminalizing libel as long as it's done in the right way; but the matter evoked little interest.
I have some bad news. Your ideas are going to continue to evoke little interest because you're a crazy person.
I would have at ye with my spear for that uncouth remark, which seems to bear on Eugene just as much as it does on me. Eugene's ideas and discursive techniques in defending the constitutionality of criminal libel are what they are, and it's a pity to see them evoking such a small amount of interest in this forum. In particular, he deserves the highest praise for his proficiency in evading certain unpalatable issues, such as the many unfortunate rulings of international human rights courts to the effect that "jail is never an appropriate punishment for libel," or the recent decriminalization of libel in England on grounds of "free expression" and the like (such rubbish!), and so on and so forth.
You're good. I like you.
Thank you, I like you too. But above all, I like Eugene. I regard him as a fine example of an American academic skilled in "hit-and-run" argumentation. Definitely a skill that should be cultivated as much as possible in today's academic environment.
P.s. perhaps, since you doubt my mental stability, you think I'm making this up? You might want to begin with this:
http://tinyurl.com/criminal-libel-standards
and I'd be happy to supply further links to the rich body of material that Eugene has so capably avoided discussing in his various comments on the constitutionality of criminal libel. Far from me to criticize the principled, conservative stance that Eugene represents in this regard, or to suggest that there is anything wrong with delicately failing to mention such material in academic discussions of such matters.
See, e.g., p. 4:
"The three special international mandates for promoting freedom of expression ? the UN
Special Rapporteur, the OSCE Representative on Freedom of the Media and the OAS
Special Rapporteur on Freedom of Expression ? have met each year since 1999 and each
year they have issued a joint Declaration addressing various freedom of expression
issues. In their joint Declarations of November 1999, November 2000 and again in
December 2002, they called on States to repeal their criminal defamation laws. The 2002
statement read:
Criminal defamation is not a justifiable restriction on freedom of expression; all
criminal defamation laws should be abolished and replaced, where necessary, with
appropriate civil defamation laws."
Such rubbish from these "international human rights" people! (And there's of course a good deal of more recent material.) Eugene deserves nothing but praise for failing to mention this nonsense when making his various arguments that criminal libel passes constitutional muster.
I find it very unlikely that such a claim would ultimately succeed.
That said, I want as many potential lawsuits against social media companies as possible to go forward, because I think the process would be quite instructive for society as a whole as we decide what to do about these issues.
I think it would be quite useful to get these executives, in court, on public record, under penalty of perjury, to try and justify some of their rather questionable content moderation decisions, and to try to reconcile their written policies (which imply politically neutral content moderation) with their observed results (which are clearly and obviously biased).
Well can you identify the portions of Twitter's terms of service that you think Twitter is violating?
Twitter terms of service.
The first thing that leaps out is, "We reserve the right to remove Content that violates the User Agreement, including for example, copyright or trademark violations, impersonation, unlawful conduct, or harassment."
Note that they didn't reserve the right to remove content that doesn't violate the user agreement. Further down, they do say,
"We may suspend or terminate your account or cease providing you with all or part of the Services at any time for any or no reason..."
Pretty standard TOS: "We can do anything we damned well please, and you agree you have no recourse." That doesn't necessarily stand up in court.
"Pretty standard TOS: "We can do anything we damned well please, and you agree you have no recourse." That doesn't necessarily stand up in court."
I'd be interested to know what you think "doesn't necessarily stand up in court" means in this context. Do you think a court can order Twitter to reinstate a twitter user? Assuming a court dismantled the limitation of liability on public policy grounds, what damages would Nunes still have? As David points out, he doesn't pay to use Twitter. Twitter isn't under any legal obligation to have Nunes on it at all, any more than you're required to host Nunes's speech. So, assume whatever you need to assume about a court throwing out portions of the agreement on public policy grounds, what is it you think Twitter is liable to Nunes for?
Fraudulent inducement? The express terms of the agreement (regardless of whether they're enforceable) would defeat reasonable reliance. But assume that away, he has no benefit of the bargain damages for fraudulent inducement. What are his out-of-pocket expenses for regular fraud? What are his breach of contract damages?
I assure you parties are free the country over to set forth the terms of their services agreements, including one party's right to terminate the relationship at will.
"Note that they didn't reserve the right to remove content that doesn't violate the user agreement. Further down, they do say,"
So, in other words, they did "reserve the right to remove content that doesn't violate the user agreement." But anyway, what language in the terms of service did they breach? Your best interpretation is that a clause in which Twitter reserves a right to remove content, entitles the user to enforce an imaginary promise by Twitter in which it would host the content no matter what--despite language directly to the contrary. But even if you didn't have the disclaimer, what is there for Nunes to enforce in the Terms of Service? Where in the "Your Rights and Grant of Rights in the Content" section does Twitter promise to not remove a user for any reason at all?
"Your best interpretation is that a clause in which Twitter reserves a right to remove content, entitles the user to enforce an imaginary promise by Twitter in which it would host the content no matter what--despite language directly to the contrary."
They claim the right to remove content that violates the user agreement, and to remove users for any reason whatsoever. Content and users are different things; so long as you remain a user, their terms of service assert that content will be removed only for violations of the TOS.
First, they do reserve the right to remove content separately. See Section 4 ("We may also remove . . . any Content on the Services, suspend or terminate users, and reclaim usernames without liability to you.").
But set that aside. You're in court. You're in front of the judge. He asks you "which of the terms did Twitter breach?" What do you say?
I don't care what the TOS say; Nunes should get treble damages for this egregious fraud. Three times what he paid Twitter for his account.
If a newspaper slanders you, does the fact that you don't subscribe to it mean you cannot collect damages?
No. But why are you changing the topic to slander when the issue I was addressing was fraud?
Now, if you want to discuss slander: did Twitter slander him? Or did random Twitter users slander him?
(This is a set of trick questions: the answer to the first is obviously no, since Twitter didn't say anything about him. But the answer to the second is also obviously no, since all the identified statements are nonactionable opinion or hyperbole.)
One could argue that the damages are losses of in-kind contributions. Twitter's business model is such that although you do not pay them in cash for access to an account, you contribute your labor to them in a way that they profit from. They provide the platform, you provide the content (which you must invest time and effort in). Your content is used to promote their platform, and to attract other users. Your content and the other users you attract are then monetized, by them, to attract advertisers.
They have offered you a thing (a platform) in exchange for another thing (your content). If the thing they are providing you is materially different than the thing they claimed you were getting (i.e. they promised a neutral platform, but are providing a heavily censored one) then they are guilty of fraud.
Now, I would agree that damages are hard to quantify. If one were to bring a fraud suit against social media companies, an advertiser would be much more suited to do so than a simple user. Or even a company that specifically uses FB/Twitter to promote their brand and attract customers.
I fully concede the specifics of this exact case are unfavorable, and that Nunes is unlikely to recover much of anything. Would you concede that in the general sense, Twitter should not be allowed to provide a product that is materially different from the product they promise?
"...Twitter should not be allowed to provide a product that is materially different from the product they promise?"
How is Twitter materially different from the promised product? Where were you promised this product you don't think you are getting?
Yes! One can argue that and let's just hope that the argument succeeds as well as the arguments made by the bravest and smartest and greatest scientific minds of our day in their fantastic effort to expose the global earth believers (aka globetards) and reveal the truth of the flat earth!
Not to be pedantic, but when you're defamed in print by a newspaper to which you don't subscribe, it isn't slander. The correct term is "RICO-Libel-Slander."
Libel be damned; he should be suing for conspiracy to violate his constitutional rights.
RICO them.
Oh, wait; DOJ is still full of flunkies of Emperor Hussein.
Which constitutional rights?
It is the 1.5th Amendment: "Social Media Companies are government actors for purposes of the First Amendment when Conservatives get butthurt."
>Which constitutional rights?
In fairness, what we really need is an actual "network neutrality" bill.
It amuses me how many people work themselves into a lather about completely-imaginary content-based decision making by ISPs, yet completely ignore actual content-based decision making by Twitter, GoDaddy, PayPal, Patreon, Facebook, Google, banks (Operation Chokepoint), etc.
I'm not a net neutrality warrior, but try to imagine some differences between ISPs, and places that merely host content. If Twitter makes content-based decisions about what I can read on their site, I lose access to those things on Twitter. But Twitter can't stop me from accessing content on other people's content-hosting services.
Yes, thank the lord that there are sane and cogent purveyors of truth that the libtards have not yet managed to suppress or jail.
Longtobefree, I recommend you read this slightly tongue-in-cheek analysis before pontificating about RICO.
For what it's worth I was expecting Nunes' counsel (not to be confused with Nunes' Cow) to be a hack out of Liberty U. However, Steven Biss is a Princeton graduate and then U. of Richmond Law (USN #63).
So far the result of this meritless misadventure is more ridicule of Mr. Nunes. The bovine went from 84K followers to 362K in about 24 hours.
Sorry USN-#53 not 63
Hey! Are you suggesting that U of Richmond is incapable of producing hacks? If New York University (#6) can produce hacks, why not University of Richmond?
To the Author
Do the negligence and good faith clauses within Section 230 give Nunes a chance of prevailing until discovery? I have this suspicion that discovery is what the real cause of action is here.
Yup. Getting the details of how social networking moderation actually works is the real win here.
Let's force the sausage maker to show everyone their methods, in grisly detail, and see what that does to the sausage industry.
Nunes may lose the battle, but if he can get these suits to discovery, he might very well win the war...
"Let's force the sausage maker to show everyone their methods..."
Yes, I'm sure the millions upon millions of twitter consumers who have never insisted on this degree of transparency are suddenly going to stop using a totally free entertainment service because Devin Nunes alleges he was unfairly treated. He's crazy like a FOX!
If transparency about their moderation processes wouldn't be damaging to Twitter, then why do they so insistently refuse to provide it?
Because their users don't insist on transparency. Why would Twitter provide something that its customers haven't demanded?
I agree that the CDA (47 USC 230) will almost certainly let off Twitter.
The question is whether that is still a good policy. If Twitter were a newspaper, magazine, radio or TV station, it might well be liable (or at least would not enjoy immunity -- Nunes still has to prove his case, of course).
In 1996, it was thought the Internet needed this special immunity or it would not get off the ground. Is that still true today -- does Twitter (and others like Facebook, Instagram, or for that matter, various blogs and sites) need an immunity that CNN, the NY Times and Newsweek lack?
"...does Twitter (and others like Facebook, Instagram, or for that matter, various blogs and sites) need an immunity that CNN, the NY Times and Newsweek lack?"
CNN.com, NYTimes.com, and Newsweek.com do not lack this immunity.
But yes, Twitter needs this immunity to avoid lawsuits like the present one. If an online publisher is liable for the content of users, there won't be online content.
And yet newspapers and other outlets have been around for a very long time without this immunity. I am not at all convinced you are correct.
And perhaps other alternatives can be considered. One idea (the details of which I have not yet worked out) is to require an entity like Twitter to divulge its takedown policy. It can be any policy it wants, but it has to be divulged publicly. And if it is shown not to be followed, then it loses the CDA immunity. (The DMCA has something like this for copyright issues.)
Point is, the CDA is not some Constitutional mandate, and can be changed as the internet evolves. That is a debate worth having, regardless of what you think of Nunes.
Newspapers and other (offline) outlets do not host other people's content. (There's an exception for letters to the editor, but it's a very limited one. They publish few enough of those that they can screen each one before publishing it.)
The model of Twitter is very different. People post stuff without pre-approval. That's a necessary feature of the design of the service; if Twitter had to pre-approve each tweet, well, suffice it to say that it wouldn't work. But if Twitter were liable for every one of the tweets, then Twitter would have to pre-approve each one.
Similarly here; if Reason (or Prof. Volokh) were liable for every comment posted, then we wouldn't be allowed to post comments.
NToJ, ink-on-paper publishers are liable for every bit of the content they publish. And yet there is ink-on-paper content. What are you missing?
Answered 8 million times. Twitter isn't publishing the content; it's hosting the content.
Ink-on-paper publishers don't allow the general public to publish content.
Because the number of things that NYTimes publishes in print in a day is small enough that it isn't cost-prohibitive to review the materials. There are approximately 500 million tweets per day. If Twitter hired people to review them before publishing, there wouldn't be anybody left to grow food or drive ubers.
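To put that 500-million-a-day figure in rough perspective, here is an illustrative back-of-envelope sketch in Python; the per-reviewer throughput and shift length are assumptions made up for the illustration, not anything Twitter has published.

# Illustrative estimate of the staffing needed to pre-approve every tweet.
# All inputs are assumptions for the sake of the arithmetic.
tweets_per_day = 500_000_000          # figure cited in the comment above
seconds_per_review = 10               # assumed time to screen one tweet
work_seconds_per_shift = 8 * 60 * 60  # assumed 8-hour shift per reviewer

reviews_per_person_per_day = work_seconds_per_shift / seconds_per_review  # 2,880
reviewers_needed = tweets_per_day / reviews_per_person_per_day            # ~173,600

print(f"Reviews per person per day: {reviews_per_person_per_day:,.0f}")
print(f"Full-time reviewers needed: {reviewers_needed:,.0f}")

Under those assumed numbers, pre-approval works out to roughly 170,000 full-time reviewers, which is the scale problem the comment above is pointing at.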
So Twitter would have to get (much) smaller, opening space for competitors. Why isn't that a good thing?
So, you think it's economically infeasible for Twitter to carefully vet 500 million tweets per day, but repealing 230 will let 500 companies flourish while carefully vetting a million tweets per day each? Sort of a reverse economy of scale thing? Things don't usually work that way.
Not only is his idea economically illiterate in that way, but it also fails to understand network effects. A much smaller Twitter is a much less useful Twitter.
As a tweeter, having to provide content on 500 different sites to reach the same audience is not workable; as a reader, having to go to 500 different sites to access the same content would be even less feasible.
If you're arguing against 47 USC 230(c)(1), there wouldn't be a Twitter at all (nor would there be comments sections on this website, or any other). If you're arguing against 47 USC 230(c)(2), there wouldn't be a Twitter, or comments on this website. Instead, you'd be able to have an online discussion about these issues, but only at places like 8chan.
Bored Lawyer, sure they need it. Their business model is to fill the entire publication space with cost-free content, either stolen from publishers or provided gratis by subscribers. Then, insofar as possible, to monopolize advertising sales on the basis of their cut-rate giantism, plus monetized subscriber info in proportion. No way that model works if they don't get absolute immunity.
Whether that model works for the nation, or for would-be competitors is a different question. And of course it doesn't work. The platform model assures that every libel, every lie, every conspiracy theory, every bit of scurrilous private malice, and every copyright violation gets published without restraint. As we are seeing, that brings the notion of free speech itself into disrepute, and will eventually erode to nothing the honored place for speech in our constitutional order.
In reaction we see constant calls for government intervention and policing of speech (look at this thread, or any of the others) because to many folks the problem looks too complicated to solve any other way. That's worse than the problem itself. But it isn't complicated. Repeal Section 230.
...he said, in the comments section of a website.
What's your point?
That this comment section wouldn't exist without § 230.
Perhaps I should alter my question somewhat. I have no interest in protecting Twitter per se. The CDA was not drafted with Twitter in mind. The issue is whether robust use of the Internet could be preserved if the CDA immunity were abolished or restricted in some manner.
Suppose one instituted a takedown regime for defamatory or otherwise objectionable posts. (Same way the DMCA has a takedown regime for copyright infringement issues). What would the internet look like then?
Same question if you conditioned the CDA immunity on disclosure of takedown policies and adherence to same.
There may be other variables. Point is, everyone assumes that without the extreme immunity of the CDA, the internet would come crashing down. I am not convinced. (What is the law in Europe, for example? Do web platforms enjoy the same immunity for third-party postings?)
Good question about Europe.
But we don't have to guess about what would happen without Section 230. Publishing was conducted for a very long time without anything like Section 230.
What happened then was that private individuals in the publishing business exercised private privileges about what to publish and what to turn down. Everybody knows that. To do that, they had to read everything they published. And if they screwed up and published libel, they stood to pay damages in a civil lawsuit. So they were careful.
On the business side, there was no monopolistic giantism. The market for ad sales was atomized, and distributed across the nation, leading to healthy competition. And publishers learned to compete on the basis of the quality of their content. Copyright was largely respected. Libel wasn't that common. Swill, conspiracy theories, and scurrilous private malice struggled to find publishers.
And best of all, by far, that was all accomplished without delivering to government any political leverage to control speech. People are going to have to think long and hard to find a better model in support of free speech.
Sometimes the latest and greatest can't come up with anything to match a system tried and adjusted over centuries. This is one of those cases.
"But we don't have to guess about what would happen without Section 230. Publishing was conducted for a very long time without anything like Section 230."
We don't have to guess what would happen without tractors. Farming was conducted for a very long time without tractors.
But a 230-less and tractor-less world would be a different one from what we have today. We wouldn't have comments sections like this one, and 90% of us would spend our days plowing behind a mule. That may be your preference, but it's not a popular preference.
I don't like or use facebook or twitter or most of what we call social media, but that puts me in a very tiny minority. Your objections are precisely that there are these large scale media things, but they are large scale only because they are wildly popular. And it's a democracy, so you don't get to impose your preferences on all the people who like those services.
I'm sure Eugene is correct on the law but how can 230 protection extend to Twitter when they vigorously and extensively censor and thereby curate what is posted on Twitter?
So you set up a platform where people can post "anything" and then you delete and censor everything you don't like and just leave up what you've approved, and you have no liability?
I would much prefer that platforms such as Twitter would have 230 protection, if only they actually behaved like an open platform of the type 230 was intended to protect and didn't curate their content so heavily.
230 was not intended to apply only to "open platform[s]".
True, point taken. Rather, 230 was intended to apply to an "interactive computer service" and specifically not an "information content provider."
And if Twitter were authoring any of the content, it would be an information content provider (well, with respect to that content). But it's not, so it's an interactive computer service.
The case law seems clear that Twitter is protected.
And I think the NYT would theoretically be as well in my hypothetical below, no matter how much editorial control and selection they exercise over the content.
Unless, in either case, someone can show solicitation of or active participation in the development of unlawful content.
"To assess whether a defendant is an "information content provider" with respect to the content at issue, reviewing courts generally examine whether the defendant materially contributed to the content's alleged unlawfulness. As the U.S. Court of Appeals for the Sixth Circuit explained, "A material contribution to the alleged illegality of the content does not mean merely taking action that is necessary to the display of allegedly illegal content. Rather, it means being responsible [in whole, or in part] for what makes the displayed content allegedly unlawful." Courts have deemed simply editing allegedly unlawful content for grammar or punctuation insufficient to pierce Section 230's liability shield. Similarly, courts have held that the provision of neutral tools to create or develop content does not transform an entity into an information content provider unprotected by Section 230. On the other hand, the solicitation and active participation in the development of unlawful content makes the liability shield unavailable."
Agreed. These social media sites are clearly NOT neutral or open platforms. They heavily restrict what can and cannot be said in a manner entirely consistent with editorial control. To pretend otherwise is laughable.
smartmuffin: That's the very argument that the Stratton Oakmont case used in denying Prodigy immunity back in 1995.
Section 230 was deliberately designed to reverse that decision, and to leave online services free to restrict any material that they "consider[] to be ... objectionable," without thus losing immunity. If Congress doesn't like it, then it needs to repeal or amend section 230.
Is your analysis really a fair characterization of plaintiff's theory of liability? Reviewing your article, I couldn't find any mention of "shadow banning." Isn't it possible that the complaint may be claiming that twitter is acting in bad faith and not entitled to "good samaritan" protections by virtue of the specifically alleged bad conduct? Or, maybe restating the same thing, that such bad conduct (distinguishable from honest editorial choices) may make twitter an information content provider, not a mere service provider?
What is it about shadow banning that makes you think it is actionable?
I think the complaint alleges that this conduct, together with other alleged intentional bad actions on the part of twitter, shows that twitter is acting as information content provider. Shadow banning works, as claimed in the complaint, to eliminate plaintiff's voice and amplify detractors.
Let me make sure I understand your position. If Conservative-Twitter came out tomorrow, and announced that it would ban content from liberals, are you saying that Conservative-Twitter was now liable as a content provider? Redstate, for instance, bans some comments for being left-leaning. Are you saying Redstate is liable as a content provider for anything I post on Redstate's comment section?
Well, if "Conservative-Twitter" announced that it would ban content from liberals, then this wouldn't be shadow banning. And I don't know anything about Redstate, but if a provider actually were to employ surreptitious policies specifically calculated to harm a particular party and to support those publishing allegedly defamatory content, then they might have a problem. Has there actually been litigation specifically challenging whether a provider's actions were taken in good faith under section 230?
It would be shadow-banning if it didn't tell people when they were banned.
"...but if a provider actually were to employ surreptitious policies specifically calculated to harm a particular party and to support those publishing allegedly defamatory content, then they might have a problem."
Why do you think they would have a problem? Is a host of a message board not allowed to police content on political grounds? Are you saying Stormfront isn't entitled to police comments that are consistent with its political views? Do you know what Reddit is?
Thank you, Professor Volokh.
As a hypothetical, say the New York Times stops its print edition and goes online only, keeps its editorial functions but fires and rehires all of its reporters/content producers as independent contractors, perhaps giving them some kind of revenue share like YouTube does. How close are they to getting section 230 protection? Say they also open up their platform to allow a wider universe of people to post certain content like BuzzFeed does with quizzes and the like. Are they now an interactive computer service?
NYTimes.com is already an interactive computer service.
It is, with respect to its readers who post comments. I think what the hypothetical is getting at is NYT.com escaping liability for its articles.
If that's what he's asking, that won't work, though, no. Whether they're employees or independent contractors, they're still agents of the NYT.
Right, as an entity that actually authors its own material online, NYTimes.com doesn't get any protection. But neither would Twitter for material that Twitter authors.
"Right, as an entity that actually authors its own material online, NYTimes.com doesn't get any protection. But neither would Twitter for material that Twitter authors."
But what if, hypothetically, the NYT decided to stop authoring content. And only posted content authored by others.
But what if, hypothetically, the NYT decided to stop authoring content. And only posted content authored by others.
If the NYT started acting like Twitter, I'd imagine the law would treat them like Twitter.
They don't remotely need to act "like Twitter", they just need to be an interactive computer service.
That is what I'm asking. So what makes them agents exactly? Are YouTube producers agents of YouTube? Only the paid ones? If the NYT posts an article by an unpaid intern -- not an agent?
The vast majority of NYT articles are paid for by the NYT, either as salaries for staff reporters or by purchasing the content from free-lancers. That's a vast difference from posting unpaid content on Twitter or the Reason comment pages.
It is worth noting that the actual quote is "(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected"
Normally, "or otherwise [x]" language is interpreted as being limited to the same general category as the enumerated list.
Put differently, if your interpretation was correct, they could have just said '...that the provider or user considers to be objectionable, whether or not such material is constitutionally protected" and skipped everything else.
Your interpretation renders superfluous the "or otherwise objectionable" language. But the contrary reading does not render "obscene, lewd, lascivious, filthy, excessively violent, harassing..." superfluous. Your interpretation is wrong.
Huh?
This statutory formulation is fairly common. "or otherwise [x]" language is normally interpreted as a way to prevent loopholes (for lack of a better word) in the enumerated list. It's not intended to completely render everything else in the list superfluous.
Your interpretation is that "or otherwise objectionable" is limited to things that are in the same category as the preceding list. So limited, "otherwise objectionable" can't relate to things that aren't "obscene, lewd, lascivious, filthy, excessively violent, [or] harassing". But if it was intended to be limited to those categories, there would be no need for "or otherwise objectionable". Why are you not treating "or otherwise objectionable" as merely G in a list of things A-G?
"I'm sure Eugene is correct on the law but how can 230 protection extent to Twitter when they vigorously and extensively censor and thereby curate what is posted on Twitter?"
The entire purpose of 230 is to protect online publishers from liability for their censorship decisions. Redstate shouldn't get sued just because some commenter posts defamatory material on the website, even if Redstate attempts (or doesn't attempt) to police defamatory content. That's the whole point.
I get it. And "Redstate" could even post articles that are defamatory so long as the material came from someone else without their contribution.
Correct. That's the entire point of § 230.
Why should he worry about what folks on twitter are saying about him when he's paying a lot of money to prove he's an idiot?
Could one of the strict constructionists here please explain where in the Constitution the power to regulate libel is to be found.
You've asked the question before, and it's been answered before. We get it: you like censorship.
I take it, Professor, that you agree that claims against ArmsList and other online gun sellers are barred by the CDA despite plaintiffs' arguments that they aren't trying to hold ArmsList etc. liable for the content placed on the site, but rather that they negligently designed their site in a way that makes illegal transactions more likely. If that argument is accepted, then surely there is a lawsuit against Twitter, Facebook, etc. that they designed their sites in a way that makes illegal (defamatory) statements more likely?
http://reason.com/volokh/2018/.....dermines-w
Right, and see also this post about an earlier case.
"But Twitter simply provides a way for people to post whatever they want; and, again, its choice to exclude some material based on political viewpoint or anything else doesn't make them a creator or developer of the material that they do allow."
You can post whatever you want, as long as Twitter wants it too.
Sounds like a publisher to me.
It can sound like an eggplant parmigiana to you, if you'd like, but the law expressly defines it not to be.
Eugene states:
"Section 230 does not extend, of course, to people's dissemination of their own speech (which is why the case against Mair and the other individual defendants isn't preempted)."
But could a service's promotion and demotion on a systemic basis be considered its own speech that could be the subject of a claim despite 230?
I don't believe that such promotion/demotion being twitter's speech is part of Nunes' claim.
I see that 230(c)(2)(A) states:
"any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected"
While that may protect some promotion/demotion by services, it would be limited to good-faith determinations and categorizations that would seem beyond what Twitter may reasonably be found to have been doing in its policy/outlook-based promotions/demotions.