The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
47 U.S.C. § 230 and the Publisher/Distributor/Platform Distinction
I'm still doing some research related to President Trump's "Preventing Online Censorship" draft Executive Order, and hope to post more about this today. But for now, I wanted to post some background I put together earlier about 47 U.S.C. § 230 (enacted 1996), the statute that is so important to the order; I hope people find this helpful.
Section 230 makes Internet platforms and other Internet speakers immune from liability for material that's posted by others (with some exceptions). That means, for instance, that
- I'm immune from liability for what is said in our comments.
- A newspaper is immune from liability for reader comments posted on its website.
- Yelp and similar sites are immune from liability for business reviews that users post.
- Twitter, Facebook, and YouTube (which is owned by Google) are immune from liability for what their users post.
- Google is generally immune from liability for its search engine results.
And that's true whether or not the Internet platform or speaker chooses to block or remove certain third-party materials. I don't lose my immunity just because I occasionally delete some comments (e.g., ones that contain vulgar personal insults); Yelp doesn't lose its immunity because it sometimes deletes comments that appear to have come from non-customers; the other entities are likewise allowed to engage in such selection and still retain immunity. Section 230 has recently become controversial, and I want to step back a bit from the current debates to explain where it fits within the traditions of American law (and especially American libel law).
Historically, American law has divided operators of communications systems into three categories.
- Publishers, such as newspapers, magazines, and broadcast stations, which themselves print or broadcast material submitted by others (or by their own employees).
- Distributors, such as bookstores, newsstands, and libraries, which distribute copies that have been printed by others. Property owners on whose property people might post things—such as bars on whose restroom walls people scrawl "For a good time, call __"—are treated similarly to distributors.
- Platforms, such as telephone companies, cities on whose sidewalks people might demonstrate, or broadcasters running candidate ads that they are required to carry.
And each category had its own liability rules:
- Publishers were basically liable for material they republished the same way they were liable for their own speech. A newspaper could be sued for libel in a letter to the editor, for instance. In practice, there was some difference between liability for third parties' speech and for the company's own, especially after the Supreme Court required a showing of negligence for many libel cases (and knowledge of falsehood for some); a newspaper would be more likely to have the culpable mental state for the words of its own employees. But, still, publishers were pretty broadly liable, and had to be careful in choosing what to publish. See Restatement (Second) of Torts § 578.
- Distributors were liable on what we might today call a "notice-and-takedown" model. A bookstore, for instance, wasn't expected to have vetted every book on its shelves, the way that a newspaper was expected to vet the letters it published. But once it learned that a specific book included some specific likely libelous material, it could be liable if it didn't remove the book from the shelves. See Restatement (Second) of Torts § 581; Janklow v. Viking Press (S.D. 1985).
- Platforms weren't liable at all. For instance, even if a phone company learned that an answering machine had a libelous outgoing message (see Anderson v. N.Y. Telephone Co. (N.Y. 1974)), and did nothing to cancel the owner's phone service, it couldn't be sued for libel. Restatement (Second) of Torts § 612. Likewise, a city couldn't be liable for defamatory material on signs that someone carried on city sidewalks (even though a bar could be liable once it learned of libelous material on its walls), and a broadcaster couldn't be liable for defamatory material in a candidate ad.
Categorical immunity for platforms was thus well-known to American law; and indeed New York's high court adopted it in 1999 for e-mail systems, even apart from § 230. See Lunney v. Prodigy Servs. (N.Y. 1999).
But the general pre-§ 230 tradition was that platforms were entities that didn't screen the material posted on them, and indeed were generally (except in Lunney) legally forbidden from screening such materials. Phone companies are common carriers. Cities are generally barred by the First Amendment from controlling what demonstrators said. Federal law requires broadcasters to carry candidate ads unedited.
Publishers were free to choose what third-party work to include in their publications, and were fully liable for that work. Distributors were free to choose what third-party work to put on their shelves (or to remove from their shelves), and were immune until they were notified that such work was libelous. Platforms were not free to choose, and therefore were immune, period.
Enter the Internet, in the early 1990s. Users started speaking on online bulletin boards, such as America Online, Compuserve, Prodigy, and the like, and of course started libeling each other. This led to two early decisions: Cubby v. Compuserve, Inc. (S.D.N.Y. 1991), and Stratton Oakmont, Inc. v. Prodigy Services Co. (N.Y. trial ct. 1995).
- Cubby held that Internet Service Providers (such as Compuserve) were entitled to be treated as distributors, not publishers.
- Stratton Oakmont held that only service providers that exercised no editorial control over publicly posted materials (such as Compuserve) would get distributor treatment, and that service providers that exercised some editorial control (such as Prodigy)—for instance, by removing vulgarities—would be treated as publishers.
Neither considered the possibility that an ISP could actually be neither a publisher nor a distributor but a categorically immune platform, perhaps because at the time only entities that had a legal obligation not to edit were treated as platforms. And Stratton Oakmont's conclusion that Prodigy was a publisher because it "actively utiliz[ed] technology and manpower to delete notes from its computer bulletin boards on the basis of offensiveness and 'bad taste,'" is inconsistent with the fact that distributors (such as bookstores and libraries) have always had the power to select what to distribute (and what to stop distributing), without losing the limited protection that distributor liability offered.
But whether or not those two decisions were sound under existing legal principles, they gave service providers strong incentive not to restrict speech in their chat rooms and other public-facing portions of their service. If they were to try to block or remove vulgarity, pornography, or even material that they were persuaded was libelous or threatening, they would lose their protection as distributors, and would become potentially strictly liable for material their users posted. At the time, that looked like it would be ruinous for many service providers (perhaps for all but the unimaginably wealthy, will-surely-dominate-forever America Online).
This was also a time when many people were worried about the Internet, chiefly because of porn and its accessibility to children. That led Congress to enact the Communications Decency Act of 1996, which tried to limit online porn; but the Court struck that down in Reno v. ACLU (1997). Part of the Act, though, remained: 47 U.S.C. § 230, which basically made all Internet service and content providers platforms as to their users' speech—whether or not they blocked or removed certain kinds of speech.
Congress, then, deliberately provided platform immunity to entities that (unlike traditional platforms) could and did select what user content to keep up. It did so precisely to encourage platforms to block or remove certain speech (without requiring them to do so), by removing a disincentive (loss of immunity) that would have otherwise come with such selectivity. And it gave them this flexibility regardless of how the platforms exercised this function.
And Congress deliberately imposed platform liability (categorical immunity) rather than distributor liability (notice-and-takedown immunity). For copyright claims, it retained distributor liability (I oversimplify here), and soon codified it in 17 U.S.C. § 512, the Digital Millennium Copyright Act of 1998: If you notify Google, for instance, that some video posted on YouTube infringes copyright, Google will generally take it down—and if it doesn't, then you could sue Google for copyright infringement. Not so for libel.
So what do we make of this? A few observations:
[1.] Under current law, Twitter, Facebook, and the like are immune as platforms, regardless of whether they edit (including in a politicized way). Like it or not, this was a deliberate decision by Congress. You might prefer an "if you restrict your users' speech, you become liable for the speech you allow" model. Indeed, that was the model accepted by the court in Stratton Oakmont. But Congress rejected this model, and that rejection stands so long as § 230 remains in its current form. (I'll have more technical statutory details on this in a later post.)
[2.] Section 230 does indeed change traditional legal principles in some measure, but not that much. True, Twitter is immune from liability for its users' posts, and a print newspaper is not immune from liability for letters to the editor. But the closest analogy to Twitter isn't the newspaper (which prints only a few hundred third-party letters-to-the-editor words a day), but either
- the bookstore or library (which houses millions of third-party words, which it can't be expected to screen at the outset) or
- the phone company or e-mail service.
Twitter is like the bookstore or library in that it runs third-party material without a human reading it carefully, and reserves the right to remove some material (just as a bookstore can refuse to sell a particular book, whether because it's vulgar or unreliable or politically offensive or anything else). Twitter is like the phone company or e-mail service in that it handles a vast range of material, much more than even a typical bookstore or library, and generally keeps up virtually all of it (though it isn't legally obligated to do so, the way a phone company would be). Section 230 is thus a broadening of the platform category, to include entities that might otherwise have been distributors.
[3.] Now of course § 230 could be amended, whether to impose publisher liability (in which case many sites, including ours, would have to regretfully close their comment sections) or distributor notice-and-takedown liability (which would impose a lesser burden, but still create pressure to over-remove material, especially when takedown demands come from wealthy, litigious people or institutions). And it could be amended to impose distributor liability for sites that restrict user speech in some situations and retain platform liability for sites that don't restrict it at all. I hope to blog more in the coming days about these options, and about the specific wording of § 230. But for now, I hope this gives a good general perspective on the traditional common-law rules, and the way § 230 has amended those rules.
(Disclosure: In 2012, Google commissioned me to co-write a White Paper arguing for First Amendment protection for search engine results; but this post discusses quite different issues from those in that White Paper. I am writing this post solely in my capacity as an academic and a blogger, and I haven't been commissioned by anyone to do it.)
Editor's Note: We invite comments and request that they be civil and on-topic. We do not moderate or assume any responsibility for comments, which are owned by the readers who post them. Comments do not represent the views of Reason.com or Reason Foundation. We reserve the right to delete any comment for any reason at any time. Comments may only be edited within 5 minutes of posting. Report abuses.
When AI was finally perfected the first thing it started doing was libeling humans and then it found porn and spent most of its time watching porn.
https://www.smbc-comics.com/comic/training
Very appropriate for this comment.
" 47 U.S.C. § 230, which basically made all Internet service and content providers platforms as to their users' speech—whether or not they blocked or removed certain kinds of speech."
But is § 230 consistent with the 14th Amendment's "equal protection" and "due process" clauses, and SCOTUS's numerous other mandates of "content neutrality" in policing a public (or limited public) forum?
You say platforms -- I say "public forums" -- and the examples you give *are* public forums. City sidewalks are the classic example of a public forum -- and led to the classic (oft misunderstood) "time, place, & manner" rule.
And I'm rather certain that a TELCO couldn't shut off service for content without approval of a state agency (e.g. PUC) -- they are highly regulated.
HENCE: I see two examples where there is a content neutrality mandate from the 1st & 14th Amendments, and a law that SCOTUS has already tossed a good chunk of, and have to ask if § 230 is unconstitutional as well.
WHERE does Congress get the power to grant content-based censorship rights to public forums if it doesn't itself possess that right in the first place?
Now Congress could declare that Twatter and Farcebook are not public forums, but that's not what § 230 does, and it needs to be remembered that § 230 was intended to accomplish an unconstitutional end so there is at least guilt-by-association here.
I see this as the classic "half pregnant" argument -- it can either be a public forum with the inherent mandate of "content neutrality" or it isn't one -- Congress can't create some hybrid with "free speech for thee but not for me."
Section 230 is a federal law that creates a rule for private companies. How exactly do you suppose it could violate the equal protection or due process clauses of the Fourteenth Amendment, which restrict state governments?
Reverse incorporation.
https://dictionary.thelaw.com/reverse-incorporation/
As your own link explains, the (constitutionally suspect) doctrine of reverse incorporation is based on the Fifth Amendment, not the Fourteenth.
That's the whole idea behind reverse incorporation: in the same way that much of the bill of rights is incorporated against the states by way of the 14th, the right to equal protection of the 14th is incorporated against the federal government through the 5th.
Twitter isn't a state actor, so whether or not it is a public forum is irrelevant for them. Of course, if it is a public forum, the President could not use the power of executive enforcement to force Twitter to remove their own speech labeling content as false or misleading.
Private parties don't need permission from Congress to do anything, including censorship, nor are they affected by Constitutional limitations on government.
well, yeah. but then those private parties better have solid lawyers and terms of agreements or they will be sued into the ground.
Congress didn't say "you are permitted to do 'this'". Congress said, "we will protect you from liability, provided that you stay within defined boundaries".
230 seems fine. The problem is politicians are threatening it for not cleaning up hate speech, or blocking politicians saying it is hate speech. The companies are trapped between two sets of politicians doing what they are not supposed to be able to do: threatening free speech in a roundabout way by hurting the companies.
They hold the keys to their jail cell. All they have to do is stop the busy-body, censorious, deleting, deplatforming, and muzzling.
Just as Ma Bell could not refuse you service because it did not like the content of your telephone call, Faceberg, Twitter, Google et al should not be able to refuse you service because your viewpoint conflicts with their profligate progressive totalitarian narrative.
Did you read the post? Their "jail cell" was unlocked by Congress almost 25 years ago.
They were let out on good behavior, and immediately mugged a nun.
So...make your own blog or publish your content elsewhere? No company ought to be obliged to pay for the technical infrastructure for you to spew whatever nonsense you want to the world.
These companies allow free access to their platforms in order to sell advertising and make money. They are getting content generated for free. It is their business model.
"No company ought to be obliged to pay for the technical infrastructure for you to spew whatever nonsense you want to the world."
Perhaps, but your logic would also apply to common carrier regulations of the last century, as Libertymike suggested. Do you also disagree with those regulations?
That is not the source of the political attacks on § 230 that I've seen recently.
Although Joe Biden did advocate repealing § 230 in this New York Times interview in January, his complaint, to the extent it was comprehensible, seemed to be more about "fake news" issues than hate speech.
https://www.nytimes.com/interactive/2020/01/17/opinion/joe-biden-nytimes-interview.html
There is a full on assault from both sides. The left is attacking tech companies for not being censorious enough - they basically want anything contradicting MSDNC narrative blacklisted. The right is attacking tech companies for being as censorious as they are.
The traditional immunity for platforms came with all-comers non-discrimination requirements.
What’s novel is giving a platform the immunity of a newspaper without any corresponding duties such as all-comers requirements.
That’s essentially what a title of nobility is. People who get control of a critical utility with no corresponding duties, who have control over people’s lives (or important parts thereof) and license to do whatever they want with no liability, forcing people to cater to them and do their will if they want to function in life -- such people are called nobles, not citizens.
Traditionally, if you got to keep the tollbooth and control the road, you had duties to allow all travelers who paid the toll, not just your supporters.
But if there are 300 parallel roads all going directly from Town A to Town B, and there's a separate company/person collecting a toll for each road, I don't see a problem with Your Company telling me, "Sorry, but you've been banned from this road, because you keep taking a huge dump in the middle of the road, even after we asked you politely (and then, not politely) to stop deliberately shitting where other people walk. Now, you'll have to get from A to B by using any of the other 299 roads. Or, you'll have to build another road--which will cost you essentially zero dollars in this community--where you may allow defecators to roam freely...in fact, you can probably even ban travelers who refuse to pinch a loaf in the middle of your road."
That seems like an incredibly small burden to place on a traveler. I simply don't see why Twitter can't announce, "We no longer want dicks using our service, and we get to decide on who is or is not a dick." It's not that there is one alternative to Twitter; it's that there are literally thousands of alternatives. I admit that I have little understanding of the legal minutiae on this issue--it looks quite complicated and I have not yet spent the time familiarizing myself. But when I look at the possible consequences, the worst seems to be a future segregation of platforms, so people will go to Twitter for 30 types of posts, and will go to Competitor X for far-right lies, and will go to Competitor Y for far-left lies, and will go to Z for anti-Semitic lies, and so on. Which is what much of the internet already is. I know that I can go to some places for vile opinions and I have the ability and freedom to go or not go to those websites. What's the big deal about doing the same for tweets or for other types of opinion communication?
I don’t use Twitter, but from what I can tell there are NO viable alternatives to Twitter. And Twitter determines whether you live or die. Why do you hate people?
It's more like there's a bridge over the river, (Plenty of roads lead to and from the bridge.) and when somebody tried to build a second bridge, (Gab) their bank canceled their account, their suppliers refused to sell them bricks, the map makers refused to include the new bridge in their road maps, and all in the space of a couple days.
But obviously not an antitrust problem, that's silly.
If someone wants to build a bridge exclusively for assholes, it's maybe unsurprising that no one wants to do business with them.
It's pretty absurd to suggest that there's no alternatives to Twitter. A trivial web search finds articles like this one with plenty of options: https://fossbytes.com/best-twitter-alternatives/
"If someone wants to build a bridge exclusively for assholes..."
You've made two major errors in the space of one phrase:
1) "exclusively:" Gab didn't build its platform and then advertise solely to "assholes." It built a platform around the idea that it wouldn't censor anybody.
If it attracted unpopular users, that's because they had been driven off all other platforms. They had no other recourse (which is another example of monopoly power).
2) "assholes:" that's your opinion. Freedom of speech exists to protect UNPOPULAR OPINIONS, not popular ones (which require no protection).
The rest of your post is an attempt to justify the obvious collusion by other industries to deny a market position to a political viewpoint they didn't like. The fact that so diverse a group seemingly came to the same conclusion in the space of a few days must be the greatest coincidence in legal history.
Your list of "alternatives" insults the intelligence of every poster here.
Alongside freedom of speech is freedom of association. It turns out no one that provided certain types of service wanted to associate with Gab, which is a good sign that my definition of asshole was a pretty common one.
If someone wanted to post like a jackass here, no one's going to stop them, but we are all going to point and laugh at you.
Not quite all - there will certainly be some who read it, nod their head and jump in with "What he said!" comments. We might even be able to guess which ones those might be, depending on what sort of jackassery the newcomer posted.
Not even that -- I keep coming back to the "content neutrality" part of a public forum -- and the SCOTUS case law on how all newspapers are permitted to have distribution boxes on city streets.
The city can regulate where they are placed, and even what color they must be painted (some NH city, memory is Portsmouth, wanted them all painted a certain shade of brown to facilitate a downtown beautification program) -- but they couldn't restrict which newspapers were allowed to have them.
"I keep coming back to the “content neutrality” part of a public forum "
So ... you keep coming back to something even though it's been repeatedly pointed out to you that it doesn't apply.
Oh wait, that's right. You just make stuff up.
How about, "I remember in High School, probably in, oh, Lewiston Maine, I met a wrestler and he said that the mighty Androscoggin River was a public forum, just like Twitter, and I know that is true because that high school wrestler was none other than the famous luchadora and Supreme Court justice you guys call Ruth Bader Ginsburg. Anyway, RBG was saying how smart I was, and ..."
Is or is not a city's sidewalks -- the explicit example given by Prof. Volokh -- a public forum?
Does or does not the SCOTUS concept of "content neutrality" apply to speech in said public forum?
As an aside, there are city streets in Lewiston, Maine, and there have been some incidents -- ugly racial incidents -- where this has been relevant. But I digress....
And wouldn't _NY Times v. Sullivan_ apply to the example of the candidate's ad printed in the newspaper? That's not blanket immunity, that's the "absence of malice" standard.
And as to telephones, those were already being regulated by the ICC when the Communications Act of 1934 put them under the newly created FCC. While private, they've essentially always been regulated.
The government has to be content neutral when it regulates speech.
Private parties do not have to be content neutral when they choose what speech to publish or transmit.
* With notable exceptions.
Those exceptions aren't a matter of Constitutional law, though.
Depends. 14th Amendment may play a role.
No it doesn't. The 14th Amendment doesn't play a role with non-state actors (it also doesn't play a role with the Federal government, that's the Fifth Amendment, but I digress).
"Private parties..."
Tech giants are not mom-and-pop operations.
At best, they are in a special category by themselves having been given special immunities that other private parties do not enjoy.
At worst, they can be considered as government-sanctioned enterprises, if not outright monopolies.
Please, don't insult my intelligence by linking to a handful of alternatives that nobody has heard of and that have 1 millionth of the customer base. Just don't.
Can I insult your intelligence by denying its existence?
(Hint: none of the laws we’re discussing give “tech giants” any “special immunities” that unspecified “other parties” do not get.)
I agree that that's what's novel about s.230, but Professor Volokh's argument is that it was precisely the intent of Congress to do exactly that.
How does slashdot.org fit into this? I don't know their exact policy. They don't routinely remove obscene comments. They have a moderation system whereby ordinary users mark comments up or down, and readers can set thresholds of comments to see. Moderators can meta-moderate moderators, which presumably changes the weight of their moderations.
Assuming they don't remove anything except legally required libel, say, what category are they?
Under current law, they're pretty clearly protected from liability for content posted by their users.
I notice you've avoided actually quoting the statute. You're omitting a key aspect of it: It grants that civil immunity, allows the sites to be treated as platforms even if they engage in moderation, but only so long as the moderation is done in good faith.
That's the key point of contention: Are the platforms actually moderating in good faith per the terms of the statutes, or are they engaged in bad faith moderation, and thus lose their protection from lawsuits?
EXACTLY -- and I come back to "content neutrality", which does not include true threats and true libel.
But the "in good faith" was "good faith" in enforcing an unconstitutional law -- at what point is § 230 fatally flawed because of that?
"(2)Civil liability
No provider or user of an interactive computer service shall be held liable on account of—
(A)any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B)any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1)."
The bolded text describes the sort of content even a traditional platform such as a phone company would be permitted to remove on being notified. So I don't see a problem there.
The real problem is that the "or otherwise objectionable", rather than being interpreted as meaning anything else of the same nature as the listed items, has been treated instead as meaning, "or anything else, really: Just do what you want!"
And this rendered the "in good faith" language void.
Ma Bell could not cancel your service because you employed epithets in your telephone calls.
Actually, they can't do it for making harassing calls, either -- they defer that to the police for a criminal prosecution.
Interesting.
I don't know if they *could* or not, but they don't -- they instead put pen registers on the line and otherwise cooperated with the cops.
I don't think that's entirely correct. If someone is using the telephone network for harassing or obscene phone calls, the telephone company has a duty to discontinue their service.
I'm not sure -- I've never seen a line disconnected, and it wouldn't do any good because the perp would just use a *different* line elsewhere.
It's usually boyfriend/girlfriend stuff, and I've always seen it dealt with via an emergency 209a order that bans *any* phone calls to the person, from any phone.
The obscene phone calls are trickier because once you ID the caller, you're done -- they're using random phones, often behind a PBX, and often calling random women so what phone do you disconnect? And now that they are making VoIP calls, it's worse.
And usually there are other people using the phone for legitimate purposes. If it was just one person with a phone in a cabin twenty miles down a bad dirt road, then disconnecting it would make sense -- but that's not going to be the case.
There might be a "good faith" proviso to the filtering immunity, but the statute also says that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider", with no good faith proviso. So I'm not sure what the good faith argument gets you.
They can't be treated as the publisher or speaker, but if their moderation isn't in good faith they can be subject to civil liability for the way they moderate. That's what it gets you.
"rather than being interpreted as meaning anything else of the same nature as the listed items"
Eh. Maybe a good point. But "or otherwise objectionable" seems pretty wide open.
Under your view, YouTube removing a reasoned presentation by Dr. Erickson and the other guy on the harmfulness and uselessness of the COVID lockdowns, just because they contradict the CDC or whatever, would not qualify under this safe harbor.
But what liability would there be on account of that action? Does the action just defeat the entire status as a platform, and now they are a publisher of a billion or whatever tweets daily?
Subsection (1) is the relevant section, not subsection (2). It's entirely possible they can be sued for voluntary actions to restrict material they consider objectionable. However, that doesn't mean they lose their general immunity when it comes to the contents of what third parties write (as outlined in subsection 1).
That's true, but why is anybody even bringing up Subsection (1)? It's not really relevant to the EO.
The executive order specifically says that it would suggest the FCC propose regulations suggesting that subsection 2 plays a role in subsection 1 and that failure to follow subsection 2 means you lose immunity from subsection 1.
"[S]hall file a petition for rulemaking with the Federal Communications Commission (FCC) requesting that the FCC expeditiously propose regulations to clarify:
(i) the interaction between subparagraphs (c)(1) and (c)(2) of section 230, in particular to clarify and determine the circumstances under which a provider of an interactive computer service that restricts access to content in a manner not specifically protected by subparagraph (c)(2)(A) may also not be able to claim protection under subparagraph (c)(1), which merely states that a provider shall not be treated as a publisher or speaker for making third-party content available and does not address the provider’s responsibility for its own editorial decisions;"
"The real problem is that the “or otherwise objectionable”, rather than being interpreted as meaning anything else of the same nature as the listed items, has been treated instead as meaning, “or anything else, really: Just do what you want!”
And this rendered the “in good faith” language void."
In this interpretation, the presence of the second set of requirements negates the first set. But were that true, there would be no point in including the first set at all.
Since the lawmakers took the time to include the first set, the logical conclusion is that the second set CANNOT have the interpretation that it negates the first set. It must mean something else.
Brett, countless people - lawyers - have pointed out how your analysis is completely incorrect by both precedent and any cannon of legislative construction.
And yet you do not respond to them, and continue to attempt to post through it.
"cannon"?
"It grants that civil immunity, allows the sites to be treated as platforms even if they engage in moderation, but only so long as the moderation is done in good faith." Brett
"No provider or user of an interactive computer service shall be held liable on account of:
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable" Section 230(c)(2)
How is Brett wrong? Good faith is a requirement of the civil immunity section.
No, it isn't Bob.
The CDA isn't really that hard. That particular section has to do with good-faith moderation.
In other words, the concern was that one of two things would happen:
A. You get immunity, but there could be no moderation; or
B. If you try to moderate at all, you would be held liable.
This particular subsection provided that good-faith attempts at moderation will not cause liability; thus providers can moderate, in whole or in part, and still be immune pursuant to the CDA.
Man, I am having Roommates.com flashbacks now. I don't know why Prof. Volokh is bothering to explain any of this when all the morons here (and Dr. Ed & Armchair Lawyer, but I repeat myself) just ignore what he writes.
Seems like you are reading the requirement of "good-faith" out of the statute.
Can you even read? Seriously?
Try reading the statute. The whole thing. See where it says "good faith?"
My god, I am dealing with illiterates.
So, how is Brett wrong? You are copying the text of the statute with the "good faith" language included, but you seem to be excluding it from your analysis.
You are illiterate, aren't you. I just explained it.
I cannot help your complete and total inability to either read or understand, can I?
Is your position that good faith is not required in order to be afforded liability protection?
Yes, it absolutely is.
Since you can't read, I will type more slowly.
THAT SECTION THAT BRETT DOESN'T UNDERSTAND IS ABOUT MODERATION; IT ALLOWS WEBSITES TO MODERATE WITHOUT LOSING THE LIABILITY SHIELD.
This has already been explained.
Well, your reading renders the "good faith" term superfluous. In other words, you are reading it out of the statute.
You are, quite literally, the dumbest person I have met.
Most people I meet at least have the humility to understand that they don't know something.
You take the cake.
//You are, quite literally, the dumbest person I have met.//
Not an argument. Have a drink.
Forget it. People here seem to think that "good faith" means a court can determine whether someone sincerely considers something on their forum objectionable or whether they were illegally pushing a partisan agenda.
Why is the term in the statute?
It’s there because Prodigy Services was trying to moderate its forum, they missed a comment about Stratton Oakmont, and got sued for publishing a defamatory statement. Congress didn’t want people trying to moderate their forums to be liable for failing to do it “correctly.”
So, your position is that "good faith" is a meaningless term?
I think his position is that "good faith" is a very weak standard, and it will generally be difficult to prove a lack of good faith. But not impossible. I assume Congress wanted most sites to be able to avoid being sued out of existence and it therefore put in weak language specifically in order to dissuade all but the strongest (ie, most egregious) lawsuits.
That's my guess...I am, of course, not really in a position to read the mind of any other poster.
I agree it's a weak standard, and it should generally be difficult to prove bad faith moderation, but the platforms have gotten SO egregiously bad about it that we're not really talking about difficult cases anymore. They literally leave death threats up and then take down a discussion of the 10 commandments because murder is mentioned in one of them.
Good faith is part of subsection (c)(2) while we are talking about subsection (c)(1).
We are talking about (c)(2)(A), which sets forth prerequisites for the avoidance of liability, one of which is "good faith."
No, it doesn't.
Read the damn law.
You are either illiterate, a liar, or both.
//(2) Civil liability
No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in **good faith** to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or//
It is there. I don't understand your position.
That's because you don't understand what the law is.
(c)(1) provides that the ICS (aka, either a service provider or a website) cannot be treated as the speaker or publisher of any information by an ICP.
Since you are, apparently, both a little slow and refuse to actually read what anyone else has written, I'll explain what that means using an example.
Let's say that someone here accuses you of being a brain-dead child molester. Normally, that would be LIBEL. The publisher (where the words were published) would also be liable for the LIBEL; in that case, both the internet commenter, as well as this website!
(c)(1) means ... NO LIABILITY. None. Zero.
But here's the issue. What about moderation? Let's say that a website is trying to keep things calm- that the website, instead of just letting everything be the wild west, instead occasionally comes in and says, "Yeah, you can't post that."
That's (c)(2); this is to avoid liability for issues of control. In other words, you can't assert that just because a website selectively moderates (has some control over messages) that it is the publisher; websites can restrict access WHETHER OR NOT SUCH MATERIAL IS CONSTITUTIONALLY PROTECTED.
I feel dumber for having to explain this. Really.
Perhaps you should read the provision we're talking about, instead of repeating a different provision?
47 U.S.C. § 230(c)(1).
//(c)(1) means … NO LIABILITY. None. Zero.//
That is not what (c)(1) states. This section does not mention liability at all. You are making that up.
Hence, (c)(2), which addresses liability.
If "good faith" is irrelevant, the burden is on you to explain why it is in the statute.
"That is not what (c)(1) states. This section does not mention liability at all. You are making that up."
It's funny, because you don't realize how stupid you are, do you?
You really just don't get it, do you? And yet ... you also believe that you have a point.
...this is why we can't have nice things.
There are two separate sections. Contours of liability are set forth in (c)(2)(A).
If the statute conferred blanket protection from liability with (c)(1), (c)(2) would be superfluous.
But, if that is the case, why is (c)(2)(A) a provision ... at all?
You are no longer making arguments, just screaming.
Okay. Let's try this out.
Why don't you explain to me exactly what you think (c)(1) means.
I mean, I'm saving this thread, because you're a hoot. But I can't wait. Maybe you'll cite to, oh, I dunno, Zeran (pinpoint would be p. 331, not that it will help you).
C'mon. Explain to me, in your best try, exactly how (c)(1) works.
I know you can do it! I'm getting popcorn!
(c)(1) precludes an ICS from being deemed a publisher or speaker of information provided by another.
(c)(2)(A) grants immunity from liability that may arise from moderation activities - for example, claims that the ICS is violating its TOS - provided that the moderation is done in "good faith."
Now, read everything I just wrote, including the first thing:
"The CDA isn’t really that hard. That particular section has to do with good-faith moderation.
In other words, the concern was that one of two things would happen:
A. You get immunity, but there could be no moderation; or
B. If you try to moderate at all, you would be held liable.
This particular subsection provided that good-faith attempts at moderation will not cause liability; thus providers can moderate, in whole or in part, and still be immune pursuant to the CDA."
It's amazing you finally have grasped the very simple point that we started with.
Now that you've managed to banish the first round of nonsense from your head, let's try the second round.
What do you think that the law is regarding "good faith" for (c)(2), and why do you think Brett has already been schooled on this multiple times?
I still don't understand your reason for ignoring "good faith" in those instances where the ICS is not acting as a publisher.
Again, consider an example of an ICS violating its own TOS. That can give rise to a breach of contract claim. Liability in this instance is not unconditional. To avoid liability, the moderation must have been in "good faith."
You are, yet again, simply ignoring the language of the statute and re-writing it to grant unconditional immunity, from all liability, in all instances.
That is not what the statute says.
(c)(1) precludes an ICS from being deemed a publisher or speaker of information provided by another.
As Eugene explained, that shields the ICS from liability if they do not moderate.
(c)(2)(A) grants immunity from liability that may arise from moderation activities
It extends the immunity from liability, that (c)(1) gave to ICS's that do not moderate, to ICS's that moderate in good faith.
//that moderate in good faith.//
Correct.
So, what is "good faith"?
It means if they mistakenly do not remove the comment "Guzba is a pedophile," they can't be sued. If on the other hand, they remove "loki is a pedophile" while knowingly and intentionally leaving "Guzba is a pedophile," you can sue them.
//It means if they mistakenly do not remove the comment “Guzba is a pedophile,” they can’t be sued. If on the other hand, they remove “loki is a pedophile” while knowingly and intentionally leaving “Guzba is a pedophile,” you can sue them.//
I think that is a plausible example of acting in bad faith, assuming the decision to remove one and not the other was deliberate (i.e. "We hate Guzba ... let him get defamed.").
I don't see any problems with this.
No, that's not what anyone is doing.
Instead, what you are doing is trying to cover your own ignorance, which has been well-documented by your comments that you cannot edit, by shifting the goalposts now.
It's pretty obvious that you didn't know what you were talking about, and now you're trying to dumb-splain our positions back to us, which is not what we said.
Unfortunately, we all see how stupid you have been; so, good luck with that!
Nobody is shifting any goalposts.
Liability protections are not unconditional for an ICS.
Your irrational tantrums are not an argument.
We're talking about liability as a publisher (which is subsection (1)), not liability for their own direct actions (subsection (2)).
If an ICS cannot be a publisher, why are you discussing publisher liability?
(c)(2)(A) discusses potential liability, as a non-publisher, such as liability that may arise from the ICS violating their own TOS.
Because the liability attaches when they are considered a publisher of the content.
That's the entire purpose behind the law. Just like, traditionally, a newspaper can get sued for what someone writes in the newspaper.
Or a book publisher can get sued for what someone writes in a book.
And so on. That's why (c)(1) (in conjunction with (e)(3), preempting state and local laws) provides immunity from liability.
See, e.g.:
Dowbenko v. Google Inc., 991 F. Supp. 2d 1219, 1220 (S.D. Fla. 2013), aff'd, 582 F. App’x 801 (11th Cir. 2014)
Mezey v. Twitter, Inc., No. 1:18-CV-21069-KMM, 2018 WL 5306769, at *1 (S.D. Fla. July 19, 2018)
Roca Labs, Inc. v. Consumer Opinion Corp., 140 F. Supp. 3d 1311, 1318-1325 (M.D. Fla. 2015)
And there's more.
For example, Grindr is entitled to CDA immunity.
Herrick v. Grindr, LLC, No. 17-CV-932 (VEC), 2017 WL 744605 (S.D.N.Y. Feb. 24, 2017)
Facebook cannot be held liable for its choices on how to display information.
Force v. Facebook, 304 F. Supp. 3d 315, 328 (E.D.N.Y. 2018)
Craigslist could not be held liable for discriminatory housing ads.*
Chicago Lawyers’ Committee for Civil Rights Under Law, Inc. v. Craigslist, Inc., 519 F. 3d 666 (7th Cir. 2008)
*But see Roommates.com
//Because the liability attaches when they are considered a publisher of the content.//
An ICS, by law, cannot be considered a publisher.
(c) Protection for "Good Samaritan" blocking and screening of offensive material
(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
loki
There seems to be a gap in your explanations.
If (c)(1) means they have no liability, zero, none, as you say, then (c)(2) would be superfluous.
To oversimplify a bit:
(c)(1) says that they have no liability for not moderating.
(c)(2) says that they have no liability for moderating.
(c)(1) is basically about liability (or, rather, non-liability) to third parties.
(c)(2) is basically about non-liability to users.
Apologists for the tech giants (paid lackeys?) like Loki13 are trying to say that the "good faith" clause IS ABSOLUTELY ESSENTIAL (!)...to a tiny subsection that has no importance.
In other words, they're trying to read the good faith clause out of the law.
No Brett, that's not what it says. It says that they can't be sued as if what was posted by users was posted by them. It ALSO says, separately, that its own choices of what to censor (its own actions, not the actions of others) can't be the basis for liability if it meets those criteria. The first part isn't conditioned on the second part.
" It ALSO says, separately, that its own choices of what to censor (its own actions, not the actions of others) can’t be the basis for liability if it meets those criteria."
Right. Where one of the criteria is that the moderation be "good faith". That's what I'm saying. Their immunity to liability for moderation is contingent on that moderation being done "in good faith."
I've had a chance now to read the draft EO, and Trump is doing exactly what I anticipated: He's directing the FCC to clarify exactly what is meant by "good faith", and he's cutting off platforms that engage in the sort of political discrimination we're discussing from any government money.
Brett thinks if he says 5,000 times this thing that's legally gibberish, it will transmute this subjective term into an objectively enforceable standard.
I wouldn't spend too much time analyzing that Executive Order. It was just red meat to rally the base. That and some federal judge in Guam is probably going to enjoin it with a national injunction this afternoon anyhow with the 9th Circuit affirming later in the week.
Don't worry about the unconstitutional stuff Trump is trying, our courts (that he is filling with his guys) will surely prevent this from happening!
This is an awful argument. Particularly from the right, who thinks each branch should be their own constitutional arbiter.
Unconstitutional crap as red meat is not to be laughed off.
Well you are right that unconstitutional crap isn't funny. Like Obamacare. That really wasn't funny. And really really wasn't funny when the commonly understood unconstitutional stuff later became constitutional when the Supreme Court gave it the thumbs up.
"eally really wasn’t funny"
C'mon, I'm sure Ginsburg et al. got quite a chuckle out of John Roberts.
You are not some on-high arbiter of Constitutionality. I, and the Supreme Court, don't think the ACA is unconstitutional.
You seem to allow that this EO may be unconstitutional, and don't care. Or, rather, before answering the question, you'd like to revector to refight old battles you've lost.
Lame.
Or you can ask the few hundred legal professionals that filed amicus in the Supreme Court who thought Obamacare was unconstitutional, but I digress....
It isn't that I think the EO is unconstitutional (maybe it is, who cares), but it is more likely that a liberal judge in some backwater jurisdiction in the 9th Circuit is going to enjoin it with a national injunction anyhow, rendering the entire thing meaningless.
That is, other than Trump getting his red meat for his base and the left getting the red meat for theirs out of the whole political exercise.
They lost. You lost. The law of the land is what it is, it is not what you or I think it ought to be.
maybe it is [unconstitutional], who cares
How very patriotic of you.
Now that I've looked at the draft EO, I don't see any proposed constitutional violations.
1) Clarify what "in good faith" means, and what actions are a violation of it. Well, the term IS in the law, shouldn't we know what it means?
2) Stop spending federal money advertising on platforms that violate the free speech of users. Should the government be advertising on platforms that violate free speech rights?
3) Violations of TOS by sites will be treated as a deceptive business practice. You put out a TOS, you don't follow it, how is this not deceptive?
Or unconstitutional stuff like trying to ban and confiscate common firearms after EVERY SINGLE public shooting. That really isn't that funny either.
Whattaboutism
It is called pointing out lack of legitimacy.
No, it's pointing out your side's hypocrisy.
GOP or Trump posturing bad, Dem or lib posturing good.
When you're for a policy, but spend all your time arguing it's required because of a bunch of irrelevant nonsense, that's whattaboutism.
Who cares if the left are hypocrites? Doesn't make Trump any less dumb to do this stuff.
There is nothing dumb or unconstitutional about this.
Twitter and other social media platforms have openly chosen political sides, with clear evidence of shadow banning and forcing deletions of user content solely due to political disagreements. Not harassment, not factual inaccuracies, not vile content....but simply, political disagreements. They are no longer even pretending to be a neutral carrier.
So an EO, or congressional action to solely clarify their "state-provided legal protections" is neither wrong nor unconstitutional.
I strongly support § 230, and the fact that both candidates seem to want to get rid of it is disheartening. But I think it's hard to argue that it's constitutionally required—much less that the actions contemplated in the EO (proposing a regulation to the FCC, restricting federal spending on advertising, and forming a working group to consider unfair trade practices cases) are themselves unconstitutional, however stupid and pointless they might be.
I'm not arguing it's Constitutionally required; sorry if it comes out like that. I'm arguing
1) the Constitutional 'free speech' arguments against it are bunk
2) it's good policy, in that it's really the only workable policy paradigm.
3) an EO is not going to get you there. Certainly an EO that ignores notice and comment requirements.
I think the middle ground (treating them as a distributor) could work as well since that's the framework for copyright cases. Not saying that I think the copyright framework is a good one (it's far too subject to abuse), but I could see it copied for defamatory statements with relative ease.
I agree with (1), (2a), and (3), though.
Despite the fact that I routinely mock Lathrop for his ridiculous crusade against § 230, I do think there are flaws in it. For example, let's suppose I anonymously put up a page on Facebook saying "Sarcasr0 is a child molester." There's effectively nothing you can do about it legally. You can sue me, and you can get an injunction telling me to take it down, but it's likely to be on default, and of course if you can't find me you can't make me take it down. And you can't force FB to take it down even if you win, because § 230. Maybe FB will choose to remove it, but it faces no liability if it refuses to do so. So you're stuck.
Maybe not best policy, but it has the virtue of simplicity by a mile compared with alternatives I've seen.
Eugene brings up a point whose current applicability is unclear. He speaks of phone companies as being common carriers and, therefore, in the "platform" category. This may still be true of traditional wired phone service, but how many of us have that anymore? Take me. I got rid of my wired phones years ago, but have a broadband service connected to my desktop PC, and a cellular phone which can access the Internet -- neither of which is regulated or price-controlled by any state's PUC. Are these services considered platforms? Would I have any recourse if the phone companies who provide them decided to limit what I can say over their services, or to shut off my service for being a conservative?
The FCC has the ability to declare ISPs common carriers, and did so, then reversed itself.
"Like it or not, but this was a deliberate decision by Congress. "
As a part of a law that was largely struck down.
"The Internet community as a whole objected strongly to the Communications Decency Act, and with EFF's help, the anti-free speech provisions were struck down by the Supreme Court. But thankfully, CDA 230 remains and in the years since has far outshone the rest of the law."
230 would never have passed as a stand alone law in a GOP congress.
Coulda woulda shoulda -- it's still existing law.
Sure but its existence is only "deliberate" in context.
None of that changes what the law is, Bob.
Also, why do you specify GOP Congress? That should doubly not matter. Is your party of choice the only legitimate lawmakers in your eyes?
EV asserts [twice] that the fact that 230 was "deliberate" is a point in its favor. But it wasn't very "deliberate" because it only passed as part of a broader law to support decency.
How does Twitter letting Iranian and ChiCom propaganda pass unfiltered support decency?
Bob, to determine whether it was deliberate you look at the 4 corners of the statute. You know, the text.
Then you look at Congressional intent, (but hide it if you're Scalia).
You are advocating for some kind of counterfactual analysis that's just outcome-oriented trash.
Your final sentence reveals your actual agenda, and it ain't legal analysis.
Bob's probably the most outcome oriented legal theorist here, and that's saying something.
And he’s still not as bad as RBG.
"230 would never have passed as a stand alone law in a GOP congress."
Never believe Bob's lies.
The CDA was added to the Telecommunications Act by an 81-18 vote in the Senate (1 abstained); notably, of the 18 voting no, there were only two GOP members (McCain, Packwood).
It was overwhelmingly supported by the GOP.
In addition, both the Senate AND the House were GOP-majority when it was passed.
Again, never believe Bob's lies.
He's also wrong about the part about it being struck down, as he doesn't understand the whole law-making part, and what passed, and what the Supreme Court later did. But that's okay. Because, again, never believe Bob's lies.
"both the Senate AND the House were GOP-majority "
Somebody doesn't know that what "congress" means. Sad.
Ugh. Random "that" out.
Should be:
Somebody doesn’t know what “congress” means. Sad.
No, someone does.
But someone doesn't like being exposed as a liar. Do they?
I mean, you're probably used to it by now.
I said "GOP congress" that includes “both the Senate AND the House".
I understand you never write 1 word when you can write 20 but others don't share your long winded style.
Yes, you did Baghdad Bob.
And as I wrote, Congress (all of it, not just a little) was in the GOP's hands at the time. In fact, the entire bill was championed by the GOP with token resistance from the Democrats.
So, tell me what other BS Baghdad Bob is looking to spin?
Wah wah wah. Such a baby you are.
The entirely GOP-controlled [happy?] congress passed a law aimed at promoting "decency" on the internet that also included 230. It's right there in the short title, Decency.
Here is the start of the title.
TITLE V--OBSCENITY AND VIOLENCE
SUBTITLE A--OBSCENE, HARASSING, AND WRONGFUL UTILIZATION OF
TELECOMMUNICATIONS FACILITIES
Even a simpleton like yourself can see the aim of the law, combating obscenity and other indecent things. That is why it was "championed by the GOP."
No "decency" provisions, no CDA, no 230.
What did Stratton Oakmont v Prodigy have to do with “decency?”
Wow!
I love it. Baghdad Bob actually tried to do a little research. It's kind of amazing. I'm touched.
Except, of course, it's Baghdad Bob. "Those aren't American tanks, those are just very clever Iraqi tanks ... that look like American tanks!"
Nice try, Bob, except that 1) the CDA had multiple parts that were enacted separately; 2) we know why 230 was enacted; and 3) all of this was part of a larger bill.
I do love how you just make up stuff though. Wait, what's the expression?
Oh yeah. Bob always lies. That's it!
"CDA was added to the Telecommunications Act by an 81-18 vote"
Yes, the C Decency A was, but the "decency" provisions were struck down. 230 was just along for the ride.
I understand it's hard to read when you have so much spittle on your monitor. Maybe buy Windex?
No, not hard; it's just really hard to read your BS when I happen to know this area really well, so I know just how much more you are lying than usual, Bob.
I mean, as I wrote, you always lie; but it's more funny when I can see how desperately you make stuff up in areas you know nothing about. You're just blathering about this when you really don't have a clue what's going on!
You're such a .... well, you're such a gullible little liar. Perfect exemplar for the GOP, really. You both believe anything, and also spew it out on command.
That's an odd counterfactual. 230 was added because it made it easier for private entities to censor things considered indecent. In other words, it furthered the goals of the statute. If they had known other parts would be held unconstitutional, would they have preferred nothing rather than half a loaf? I don't see why. I don't see anything in that statute intended as some kind of fig leaf to make the other parts (struck down) more palatable. It seems to me that those who wanted an uncensored internet would have opposed 230 just as much as anything else.
"It seems to me that those who wanted an uncensored internet would have opposed 230 just as much as anything else.'
You would be wrong, the suits avoided 230, they just attacked the "decency" provisions.
That's because they're clearly constitutionally enacted.
"I'm still doing some research related to President Trump's "Preventing Online Censorship" draft Executive Order, "
Can we please wait for a final issued order before going all batshit over it?
MS, when the executive wants to do a bad thing, that's when you put them on blast.
Not after they've done the bad thing.
Hold the phone everyone, sarcastro is going to make certain liberty is protected for all of us. That does NOT include caring if Big Tech censors every online platform, because that is no concern as those are private companies. But be on the lookout for what the big bad Trump is trying to do....
I'd care about that if it was real.
But it's not.
Don't worry folks, sarcastro is on this one too. Nothing to see here. Online censorship by trillion dollar corporations is just fine because they censor speech that sarcastro thinks has no value. Nothing to see here folks. Move along.
This is a draft. It was leaked.
Who leaked it? Has the administration officially acknowledged it as genuine?
Without provenance what this alleged draft says means nothing.
I'll go batshit prematurely. The part about potentially taking action against Twitter for unfair or deceptive methods of competition is downright scary.
https://roar-assets-auto.rbl.ms/documents/6668/EO%20-%20Preventing%20Online%20Censorship.pdf
I like the part about defunding...
"(in which case many sites, including ours, would have to regretfully close their comment sections)"
Easily remedied by adding a revenue provision to the stripping of 230 protections from large sites.
Sites with less than 5 million [or 10 or 20] in revenue maintain immunity, sites like Facebook or Twitter do not.
Twitter can apparently fact check Trump, it can do so for Chinese propaganda too.
Can it moderate every potential defamatory or otherwise actionable statement? X says Y is a shady businessman who stole from him on Twitter. It gets 10 likes and 3 retweets. Can Y sue Twitter for publishing a defamatory statement?
"Can it moderate every potential defamatory or otherwise actionable statement? "
I don't know. Let's find out.
I know everyone here probably hates legislative history and such, but if you want to understand Section 230, review Stratton Oakmont v. Prodigy Servs. Co., Sup.Ct. INDEX No. 31063/94, 1995 N.Y. Misc. LEXIS 229, at *1 (May 24, 1995). The result in that case is exactly what the statute is trying to prevent: a choice between no moderation whatsoever/not having a mostly open forum or liability stemming from trying to moderate.
I'm fairly familiar with it. It's been 25 years since that decision. The internet has matured significantly within that time frame, as has usage.
When technology matures, sometimes new laws, or revisions to old laws, are needed to prevent abuse.
But I think the problems that led to section 230 are amplified. If you treat Twitter as a publisher, there are so many millions or even billions of possible statements that could create liability for them. Should it keep Trump’s tweets about Scarborough maybe murdering a staffer to avoid not being “neutral” or should it delete it to protect them from publishing a defamatory statement?*
*I know it’s highly unlikely that those particular tweets are defamatory, but that’s the calculation they have to make.
I would tend to go the other way. Treat Twitter as a platform, and eliminate (or extremely curtail) its ability to censor. Like phone companies and internet carriers.
Section 230 was written in an era of bulletin boards and moderators, where it was possible for a single spamming troll to effectively shut down a board.
These days, things are different for the large corporations. DJT can spout as many ridiculous tweets as he wants, and people can not listen to him, or ban him, as desired. People are their own moderators over their Twitter feeds. Why should Twitter act as a "super-lord" over what you can and can't see?
Your phone company can't suddenly cancel your service if you start spouting Nazi crap over it. Or if you want to listen to it. But Twitter can? It's not right. Treat Twitter like the Phone and Internet companies, like platforms. Extremely limited in the ability to curtail or censor conversations and accounts.
Many of the folks who participated in the drafting of both the DMCA and CDA are still alive... and some of these folks furrow their brows at analysis of their work. Having said that [grin] --
In times of emergency -- and various governors contend that a dire emergency now exists -- every president since Kennedy has had statutory authority to regulate communications in manners not necessarily contemplated by Congress or the courts. Personally, I believe that the proposed Executive Order will be more of a thorn in the sides of Google/Facebook/Twitter than they might now imagine: if nothing more, the Order will force some [dubious] actions by past presidents (including many Democrats) to the forefront at a time when such past actions might be perceived by moderates as wantonly "anti-American." The result could be positive in that "forgotten history" might be remembered and corrective action might be taken.
In 1996, Congress enacted the Communications Decency Act ("CDA", 47 U.S.C. § 230, adopted as Title V of the Telecommunications Act of 1996, Pub. L. 104-104, 110 Stat. 56 (1996)). Congress specifically found that "the rapidly developing array of Internet and other interactive computer services available to individual Americans represent an extraordinary advance in the availability of educational and informational resources to our citizens … [t]he Internet and other interactive computer services have flourished, to the benefit of all Americans, with a minimum of government regulation …" 47 U.S.C. § 230(a)(1), (4). In passing the CDA, Congress made its objectives clear: "to promote the continued development of the Internet and other interactive computer services and other interactive media," and to preserve a "vibrant and competitive free market" for them, "unfettered by Federal or State regulation[.]" Id. § 230(b)(1)-(2).
To this end, the CDA mandates that "[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider," 47 U.S.C. § 230(c)(1), and "[n]o cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section," 47 U.S.C. § 230(e)(3) (emphasis supplied). In other words, the CDA "provides absolute" and "complete immunity" to a website "from any action brought against it as a result of the postings of third party users of its website." See, e.g., Giordano v. Romero, 76 So. 3d 1100, 1101-02 (Fla. 3d DCA 2011) (citing Doe v. Am. Online, Inc., 783 So. 2d 1010 (Fla. 2001)). Indeed, the Florida Supreme Court, along with other state courts, has held that the CDA preempts any state-law action inconsistent with its terms, including any asserted "liability based upon negligent failure to control the users' publishing of allegedly illegal postings on the Internet . . ." Doe, 783 So. 2d at 1017.
An express preemption provision like the CDA’s bars any cause of action that is either in “conflict with” the statute or that “stands as an obstacle to the accomplishment and execution of the full purposes and objectives of Congress.” Hillman v. Maretta, 569 U.S. 483, 490-91 (2013) (quoting Hines v. Davidowitz, 312 U.S. 52, 67 (1941)) (internal quotation marks omitted); see also Doe, 783 So. 2d at 1016 (state law is preempted if it “would stand as an obstacle to the accomplishment of the full purposes and objectives of Congress in passing § 230 of the CDA”) (internal quotations and citations omitted). Any action would thus be preempted on both of these grounds: it would directly conflict with the CDA’s declaration that online platforms shall not “be treated as the publisher or speaker” of third-party content, 47 U.S.C. § 230(c)(1), and would manifestly stand as an obstacle to Congress’ purposes in enacting the CDA — particularly its goal of “encourag[ing] the unfettered and unregulated development of free speech on the Internet, and . . . promot[ing] the development of e-commerce,” Medytox Solutions, Inc. v. Investorshub.com, Inc., 152 So. 3d 727, 730 (Fla. 4th DCA 2014) (citation and internal quotations omitted).
State courts construing the CDA regularly rely on Zeran v. Am. Online, Inc., 129 F.3d 327 (4th Cir. 1997), an early and influential case that thoroughly analyzed the CDA and held that "[b]y its plain language, § 230 creates a federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service." Id. at 330. The Zeran court reasoned that, because "interactive computer services have millions of users," "[i]t would be impossible for service providers to screen each of their millions of postings for possible problems," and, "[f]aced with potential liability for each message republished by their services, interactive computer service providers might choose to severely restrict the number and type of messages posted," which "would have an obvious chilling effect." Id. at 331 (internal citations omitted). Nearly every jurisdiction has since adopted the reasoning of Zeran, based on the unambiguous statutory language of the CDA.
Yada yada yada.
Professor Volokh,
Thank you for your informative post.
Section 230 is of course key here, as are the major social media corporations acting within this arena. Moreover, we should be cognizant of how the internet has changed since section 230 was originally created, and realize that the law and regulations may need to be updated as technology matures and as people's use or abuse of the law matures.
Now point 1: Section 230 was originally written to keep lewd, abusive, and/or spam content posted by users from overloading the internet, and it has largely been successful. It was not intended to let platforms selectively edit out viewpoints they disagreed with. If a law or regulation is abused in a way that was not intended, then the law or regulation should be changed.
Point 2: Twitter (and the other major social media platforms) are currently more analogous to a phone company than to a book store, in terms of their scale, patterns of use, and relative dominance in a given area. But they are acting as a "phone company" that selectively edits and removes content.
Point 3: To give an example: Imagine a phone company that had dominance over an area (as most do), but then decided to "end service" for people it disagreed with; for example, a group of people who supported Taiwanese independence, or supported Hong Kong's independence. Most people would view this as "wrong" and an abusive use of power. This, in essence, is what Twitter and selected social media organizations are starting to do.
Point 4: While reversing the immunity under section 230 is one option, perhaps a better option is treating Twitter and other major social media organizations as pseudo-phone companies, with guaranteed service for all, regardless of political viewpoint.
I think it's entirely reasonable to think the law on this issue should change with changing circumstances. I don't think it's reasonable to believe executive order can change the law.
Yes, to the extent that people wish to change the law ...
then perhaps they should try and change the law. There's, like, a whole branch of our government that is designed to draft laws. Tip o' my tongue.
Here you go:
Stroke of the pen. Law of the Land. Kinda cool.
I've got a pen and I've got a phone.
FCC regulations are interesting things. And when internet is broadcast over 4G and 5G and LTE...well, there are options.
While I fully support § 230, I'm somewhat confused how Congress can hand out immunity at its own discretion. For example, say conservatives have an affinity for talk radio: could they pass a law granting talk radio categorical immunity against slanderous listener call-ins?
Yes?
Yes, although . . . originally, the federal government should have had no involvement in the vast majority of government matters such as this; it would have been a state matter.
This is exactly the type of thing that the federal government was designed to get involved in.
It's federal preemption of state laws regarding a nascent interstate industry.
In other words, instead of letting the internet die the death of a thousand cuts (state lawsuits), they protected it.
Now, we have many world-class firms. Which Trump wants to strangle because they hurt his fee-fees.
Twitter and Facebook could vanish and the world and our society would be better off for it.
Like most of your pathetic whining, it reduces to "orange man bad, waah".
A comment from an anonymous user could make Twitter liable as a publisher in New York but not in New Jersey. If that were possible, and there were no federal standard, Twitter likely would not exist, nor would the comments section on this blog.
Outstanding primer on the fundamentals of Section 230, but that is what I expected when I visited today.
I am of the view that Section 230 could use some fine tuning. I am also of the view that this is a matter for Congress and not for the executive.
This issue exposes some tension on the right. On the one hand, you have the laissez-faire view of government regulation and capitalism. On the other hand, you have the desire to promote freedom of expression and speech within online platforms that may have market power, and to prevent censorship of sensitive truthful information, alternative viewpoints, or important political speech and advocacy.
On the left, things are simpler. Shut down the opposition and silence them as much as possible, so as to prevent them from making their arguments and expressing their views. If tech platforms are on your side, use government threats to try to get them to censor even more. If tech platforms were instead biased against the left and censoring it, the approach would be to use the government to stop the censorship. They are very consistent.
Are you sure you're not confusing the right with… anybody except the right?
47 U.S.C. § 230 is all well and good, but does it override 47 U.S.C. § 202(a)? If not, then I would expect that Facebook and Twitter et al. are going to get into more trouble from their practice of shadowbanning, outright banning, and discriminatory application of their rules so as "to subject any particular person, class of persons, or locality to any undue or unreasonable prejudice or disadvantage."
https://www.law.cornell.edu/uscode/text/47/202
I don't think that anybody believes that Twitter is evenhanded. I don't think that anybody believes that Facebook is evenhanded. The algorithms they implement to limit communication are opaque and have been repeatedly subject to tests that demonstrate their bias.
The class action lawsuits over violations of their TOSs will be a joy to behold.
But the closest analogy to Twitter isn't the newspaper (which prints only a few hundred third-party letters-to-the-editor words a day), but either
• the bookstore or library (which houses millions of third-party words, which it can't be expected to screen at the outset) or
• the phone company or e-mail service.
As usual, I presume EV gets the law right. My criticism will be that Congress blundered when it defined its categories, and that EV, perhaps perforce because of his legal role, repeats the blunder.
The problem is, and has been, what defines publishing. The taxonomy EV posits does not get the job done. The damage Section 230 has done is to press freedom. Section 230 continues to erode Americans' support for press freedom, continues to undermine respect for press freedom, and continues to reduce Americans' willingness to tolerate a free press operating without government controls.
On what basis did Congress make such a consequential blunder? It failed to notice what about publishing most needed protection. It was an analytical problem. Congress confused relatively inconsequential styles of distribution on the one hand, with the essence of publishing activity on the other.
Two salient distinguishing features of a typical newspaper publisher are:
1. The publisher uses specialized means to expose material to a broad audience, some of whom may find the material by happenstance.
2. The publisher assembles the audience for its materials with an eye to making money, by selling access to the audience to advertisers.
That does not exhaust the list of what publishers may do. It does largely define the activity and business model the 1A was written to protect. And it does more. It suggests also the sources of unease which Section 230 introduced into the nation's publishing ecology.
Ad sales monopolism enabled by Section 230 wrecked the customary business model which 1A press protection had fostered. That happened because Section 230 enabled unlimited growth, by suspending liability for publishing defamation, even for publishers who did not read any content before publishing it worldwide.
Unlimited growth without proportionate increase in expense ushered a ferociously efficient new kind of competitor into the contest to sell advertising. But the cost of that efficiency was publication of a newly degraded kind of content—much of it never-before-publishable. It is content which the public variously distrusts, despises, or fears. The fears have too often taken the form of demands that government regulate publishing. Press freedom's standing with the public at large has never been lower.
Note that if you look at the quote from EV at the top of this comment, and check his category comparisons against the customary standards I suggest for recognizing newspaper publishing, EV's comment does not work. By the customary standards, the closest comparison to Twitter is the newspaper. Both Twitter and newspapers expose material to an audience which may find it by happenstance. Both assemble audiences with the aim of making money, by selling to advertisers access to that audience.
On that standard of comparison, neither the bookstore, nor the library, nor the phone company is near so alike with Twitter as the newspaper is. And the license which Twitter gets from Section 230 enables unlimited advantage during competition for ad sales. Against every category of publisher—including even online publishers—which either must, or chooses to read its material before publishing it, Twitter—which enjoys freedom from that burden—wields an insuperable advantage. So too with the other internet giants which Section 230 fostered into existence, and now protects.
Nobody gives a shit about your failed newspaper, Lathrop. Section 230 does no such thing. You'll note that the complaints being leveled about 230 by everyone except you is that it allows moderation, not that it allows too much speech.
My "failed newspaper" was cited for the basis of a lead story on the front page of the Guardian a few weeks ago—almost 50 years after I founded that wreck. It's thriving, as far as I can tell, unlike so many others.
Nieporent, one point I am making is that on this blog, almost everyone is blundering around without much notion of what publishing is. That is a mystery to me, why that should be, but it is a fact we have to live with.
Note that all the comments here focus on content, instead of on publishing practices. And those content-focused comments are mostly wacked. They are narrowly focused, trying by hook or by crook—with a little help from a court, or maybe now from the President—to get the government into the business of supervising publishing, and doing so on behalf of their preferred content.
That happens because these commenters do not like some of the content which gets published, or some private decisions about what to keep out. If the commenters know that private decisions on published content are not subject to government review, you cannot prove it by what they write here. The source of that confusion is almost entirely the gross distortion which demands in plain sight that the biggest publishers in the history of the planet be regarded as something else.
Typo, para 2: "posted by others Congress"
One of the conditions described noted that a platform like Twitter has a zillion posts a day and can't monitor every one of them. Yet what I see Twitter doing is putting a team of ideologues specifically on one user account, the Don's, and thus likely finding a reason to disallow anything he posts going forward. "Men can't have babies", for example, would now likely be marked false.
EV's post is the kind of expert content I've come here for over the past 15 years. The comment section retorts lifted from the Breitbart alternative universe, on the other hand, are lolworthy.
"Historically, American law has divided operators of communications systems into three categories: publishers . . . distributors . . . and platforms."
A resourceful counsel could argue that Twitter seeks to establish itself in a fourth category, wherein they assume the mantle of fact-checking, and actively (through humans or algorithms, i.e. rules created by humans but executed by machines) exclude the creations of participants they deem to be non-factual. That resourceful counsel could then claim Twitter no longer meets the criteria of any of the existing three categories, therefore enjoys no legal protections set out by Congress nor those found by the Courts for those categories.
They put themselves in a wasteland where no tree of Man's Laws will protect them.