The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Content Moderation, Social Media, and the Constitution
Two sets of cases - one already before the Supreme Court, one about to be - will go a long way towards defining the role of social media companies
The Supreme Court now has before it three issues of profound importance for the future of Internet speech.
- First up: How broad is the immunity, set forth in Section 230 of the Communications Decency Act, that protects Internet platforms against liability claims arising from content posted by third parties?
- Second: To what extent does the 1st Amendment protect the content-moderation decisions made by those platforms?
- And finally: To what extent may individual States impose controls over the content and conduct of Internet sites managed by out-of-State actors?
These are Big Questions for Internet law, and I'll have a great deal more to say about them over the next several weeks and months; consider this an introduction.
Regarding Section 230, as co-blogger Stewart Baker has already noted, the Court has agreed to review the 9th Circuit's decision in Gonzalez v. Google. The case arises out of the 2015 ISIS-directed murder of Nohemi Gonzalez in Paris, France. The plaintiffs seek to hold YouTube (owned by Google) secondarily liable, under the Anti-Terrorism Act (ATA)(18 U.S.C. § 2333), for damages for the murder:
"Youtube has become an essential and integral part of ISIS's program of terrorism. ISIS uses YouTube to recruit members, plan terrorist attacks, issue terrorist threats, instill fear, and intimidate civilian populations… Google's use of computer algorithms to match and suggest content to users based upon their viewing history [amounts to] recommending ISIS videos to users and enabling users to locate other videos and accounts related to ISIS, and by doing so, Google materially assists ISIS in spreading its message."
The 9th Circuit dismissed plaintiffs' claims, relying (correctly, in my view) on the immunity set forth in Section 230 (47 U.S.C. § 230(c)(1)) - the "trillion dollar sentence," as I called it, or, in law prof Jeff Kosseff's words in his excellent book of the same name, "The Twenty-Six Words That Created the Internet":
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
The impact of this immunity on the growth of Internet communications platforms cannot be overstated; it is hard to imagine what the entire social media ecosystem would look like if platforms could be held liable for hosted third-party content. But the Section 230 immunity has become very controversial - to put it mildly - over the last decade; many commentators and lawmakers, from the political left, right, and center, have proposed substantially narrowing, or even eliminating, the immunity, blaming it for everything from the proliferation of hate speech and fake news to the supposed suppression of political commentary from the right wing.
By now, Stewart Baker suggests, "everyone hates Silicon Valley and its entitled content moderators [and] its content suppression practices." Gonzalez, he continues, signals that "Big Tech's chickens are coming home to roost, … the beginning of the end of the house of cards that aggressive lawyering and good press have built for the platforms on the back of section 230."
Maybe. I happen to be one of those people who do not "hate Big Tech's content moderation practices" - but I'll save my thoughts on that for a future analysis of the Gonzalez case.
The second set of cases (Moody v. NetChoice (Florida) and NetChoice v. Paxton (Texas)) raises a number of questions that are, if anything, of even greater significance for Internet speech than those the Court will be tackling in Gonzalez.
Florida and Texas have both enacted laws which, broadly speaking, prohibit social media platforms from engaging in viewpoint-based content-removal or content-moderation, and from "de-platforming" users based on their political views.(**1)
The 11th Circuit struck down Florida's law on First Amendment grounds - correctly, again in my view. The 5th Circuit, on the other hand, upheld the Texas statute against a similar First Amendment challenge. Cert petitions have been filed and, in light of the rather clear circuit split on a very important question of constitutional law, I predict that the Court will consolidate the two cases and grant certiorari.
The question at the heart of both cases, on which the two opinions reach opposite conclusions, is this: Are the social media platforms engaged in constitutionally protected "speech" when they decide whose content, and what content, they will disseminate over their systems?
The 11th Circuit held that they are.
"The government can't tell a private person or entity what to say or how to say it…. The question at the core of this appeal is whether the Facebooks and Twitters of the world—indisputably 'private actors' with First Amendment rights—are engaged in constitutionally protected expressive activity when they moderate and curate the content that they disseminate on their platforms."
The State of Florida insists that they aren't, and it has enacted a first-of-its-kind law to combat what some of its proponents perceive to be a concerted effort by "the 'big tech' oligarchs in Silicon Valley" to "silenc[e]" "conservative" speech in favor of a "radical leftist" agenda…
We hold that it is substantially likely that social-media companies—even the biggest ones—are "private actors" whose rights the First Amendment protects, that their so-called "content-moderation" decisions constitute protected exercises of editorial judgment, and that the provisions of the new Florida law that restrict large platforms' ability to engage in content moderation unconstitutionally burden that prerogative. [emphasis added]
The Fifth Circuit, on the other hand, in upholding the Texas statute (by a 2-1 majority, with Judge Southwick dissenting), held that the platforms are not engaged in "speech" at all when they make their content-moderation decisions (which the court labels as "censorship"):
Today we reject the idea that corporations have a freewheeling First Amendment right to censor what people say. . . .
The Platforms contend that [the Texas statute] somehow burdens their right to speak. How so, you might wonder? The statute does nothing to prohibit the Platforms from saying whatever they want to say in whatever way they want to say it. Well, the Platforms contend, when a user says something using one of the Platforms, the act of hosting (or rejecting) that speech is the Platforms' own protected speech. Thus, the Platforms contend, Supreme Court doctrine affords them a sort of constitutional privilege to eliminate speech that offends the Platforms' censors. We reject the Platforms' efforts to reframe their censorship as speech….
It is undisputed that the Platforms want to eliminate speech—not promote or protect it. And no amount of doctrinal gymnastics can turn the First Amendment's protections for free speech into protections for free censoring….
We hold that [the Texas statute] does not regulate the Platforms' speech at all; it protects other people's speech and regulates the Platforms' conduct. [emphasis added]
The split seems pretty clear, and I'd be very surprised if the Court doesn't see it that way and grant cert to clear things up.
As if the 1st Amendment questions in the NetChoice cases weren't difficult and complicated enough, there's another significant issue lurking here that makes these cases even more intriguing and important. What gives the State of Texas the right to tell a Delaware corporation whose principal place of business is in, say, California, how to conduct its business in regard to the content it may (or must) publish? Doesn't that violate the principle that State power cannot be exercised extra-territorially? Doesn't the so-called "dormant Commerce Clause" prohibit the individual States from prescribing publication standards for these inter-State actors?
Those, too, are difficult and rather profound questions that are separate from the 1st Amendment questions raised by these cases, and I'll explore them in more detail in future posts.
Finally, one additional small-ish point: a rather interesting doctrinal connection between the statutory issues surrounding Section 230 in Gonzalez and the constitutional issues in the NetChoice cases.
The 5th Circuit, in the course of holding that content moderation by the social media platforms is not constitutionally-protected "speech," wrote the following:
We have no doubts that [the Texas statute] is constitutional. But even if some were to remain, 47 U.S.C. § 230 would extinguish them. Section 230 provides that the Platforms "shall [not] be treated as the publisher or speaker" of content developed by other users. Section 230 reflects Congress's judgment that the Platforms do not operate like traditional publishers and are not "speak[ing]" when they host user-submitted content. Congress's judgment reinforces our conclusion that the Platforms' censorship is not speech under the First Amendment.
Pretty clever! Congress has declared that the platforms are not "speaking" when they host user content, so therefore what they produce is not protected "speech." The platforms, in this view, are trying to have their cake and eat it, too - "we're not a speaker or publisher" when it comes to liability, but "we are a speaker/publisher" when the question is whether the State can tell them what to do and what not to do.
It's clever, but too clever by half. Section 230 was actually - indisputably - Congress' attempt to encourage the sort of content moderation that the 5th Circuit has placed outside the ambit of the 1st Amendment. It was enacted, as the 5th Circuit panel itself recognizes, to overrule a lower court decision (Stratton Oakmont v. Prodigy) that had held an Internet hosting service (Prodigy) secondarily liable for defamatory material appearing on its site. The Stratton Oakmont court reasoned that Prodigy, precisely because it engaged in extensive content-moderation, was acting like a traditional "publisher" of the 3d-party content on its site - exercising "editorial control" over that material - and should, like traditional "publishers," be held liable if that content was defamatory.
If engaging in content moderation makes you a "publisher" subject to defamation liability, the result, Congress recognized, would be a lot less content moderation, and Section 230 was designed to avoid that result. Not only does Section 230(b)(4) declare that the "policy of the United States" is to "remove [such] disincentives for the development and utilization of blocking and filtering technologies," it further provided that
"No provider … of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable."
Section 230 was expressly designed to encourage platform content moderation. The idea that Congress chose to effect that purpose by declaring content moderation and editorial control to be "not-speech" and thereby outside the protections of the 1st Amendment constitutes "doctrinal gymnastics" of the highest order.
*******************************
**1. The two statutes are substantially similar, though they differ in their details. The Florida law applies to "social media platforms," defined as
"Any information service, system, Internet search engine, or access software provider that provides or enables computer access by multiple users to a computer server, including an Internet platform or a social media site[;] does business in the state; and has annual gross revenues in excess of $100 million [OR] has at least 100 million monthly individual platform participants globally."
[The law as originally enacted, rather hilariously, expressly excluded any platform "operated by a company that owns and operates a theme park or entertainment complex," but after Disney executives made public comments critical of another recently enacted Florida law, the State repealed the theme-park-company exemption.]
The Florida law declares that social media platforms:
- "may not willfully deplatform a candidate for office";
- "may not censor, deplatform, or shadow ban a journalistic enterprise based on the content of its publication or broadcast";
- "must apply censorship, deplatforming, and shadow banning standards in a consistent manner among its users on the platform"; and
- "must categorize its post-prioritization and shadow-banning algorithms and allow users to opt out of them, and for users who opt out, the platform must display material in sequential or chronological order."
The Texas law is considerably broader: social media platforms "may not censor a user, a user's expression, or a user's ability to receive the expression of another person based on
- (1) the viewpoint of the user or another person;
- (2) the viewpoint represented in the user's expression or another person's expression; or
- (3) a user's geographic location in this state or any part of this state."
Editor's Note: We invite comments and request that they be civil and on-topic. We do not moderate or assume any responsibility for comments, which are owned by the readers who post them. Comments do not represent the views of Reason.com or Reason Foundation. We reserve the right to delete any comment for any reason at any time. Comments may only be edited within 5 minutes of posting. Report abuses.
So wait, does the Texas statute also prohibit the social media company from just blocking all IP addresses in Texas to avoid dealing with the crazy Texas law?
So a company not located in Texas in any sense is prohibited from *not* doing business in Texas? Just... what?
Yes.
It also makes something similar to what social media companies do in Germany† illegal. They really were trying to regulate social media for the entire world.
________
†If you go to Twitter from a German IP address, you don't see Nazi stuff, as that's illegal in Germany. Cross the border and you see Nazi stuff again.
> "[…] does the Texas statute also prohibit the social media company from just blocking all IP addresses in Texas to avoid dealing with the crazy Texas law?"
> "Yes."
Can you cite the part(s) of H.B. 20 that prohibit(s) social media companies from blocking IP addresses in TX (or anywhere else)?
Sure.
Sec. 143A.002. CENSORSHIP PROHIBITED. (a) A social media platform may not censor a user, a user's expression, or a user's ability to receive the expression of another person based on:
(1) the viewpoint of the user or another person;
(2) the viewpoint represented in the user's expression or another person's expression; or
(3) a user's geographic location in this state or any part of this state.
5th Circuit says you have to play with me!
"supposed suppression of political commentary from the right wing"
The Babylon Bee would like a word.
You may think that the platforms have a total right to suppress whatever content they want. I'd probably agree with you. To call the suppression "supposed" is ridiculous.
I would agree as well, but we wouldn't be here if Congress, and especially the Democratic presidential debates, hadn't screamed themselves purple waving that $1 trillion sword of DammitDoWhatISay: you'd better censor harassment, starting with the harassing tweets of our political opponent right before an election.
That is making quite a few causal assumptions.
And on a factual note, I don't recall a lot of the melodramatic turning purple you claim on this particular issue.
Yeah, that never happened, no matter how many times you and Brett claim otherwise.
What has happened though is that the Biden admin has pushed the big SM companies to shut down at least some of the speech of those who disagree with them.
Is a President trying to limit the 1A for certain people impeachable? If that’s not a High Crime, what the hell is?
Look, Biden could convince Zuck to do the Thanos snap and arbitrarily ban half of FB's user base; none of that bears in any way on anyone's rights under the First Amendment.
The "supposed" is the idea that they target right wing views. Of course sometimes right wingers are suppressed. And sometimes left wingers are.
The fact that some content from the right wing gets taken down is indisputable. The fact that some content from the left wing gets taken down is also indisputable. So choosing random examples of stuff that has been taken down isn't very interesting as it doesn't tell us anything about either bias or the ability for various political causes to effectively reach their audiences either on these platforms or otherwise.
We do know that conservative content tends to have pretty extensive reach on all of the social media platforms (e.g., Fox News is consistently the number one source of media reach on Facebook), so it's hard to take seriously the idea that conservative speech is systematically disadvantaged on the platforms.
Once the platforms start monitoring and censoring content, they will have the ability to silence true, but unpopular, speech.
They already moderate and censor content, otherwise they'd all be swamped with porn, spam, and the worst possible scrapings of places like 4chan. Underpaid offshored content moderators end up with PTSD.
...say lawyers looking for a huge cut of a class action lawsuit.
Hope they get it, too.
Once the platforms start monitoring and censoring content, they will have the ability to silence true, but unpopular, speech.
Are you under the impression that they haven't already been doing that?
Platforms already do this to some extent.
These are private companies effectively operating in the publishing space, and editorial control over content is within their rights.
This does not implicate the first amendment. If you do not want your "truth" legally censored by a private publishing resource, choose an acceptable competitor or become an acceptable competitor.
When the government “suggests” what is and isn’t disinformation, it absolutely implicates the first amendment. The issue isn’t the companies doing it, it’s the coordination with the government. The potential for rampant abuse should be obvious, even to you.
But I don’t think I’ll hold my breath.
IMHO this is not a subject that lawyers and courts should argue. It needs new legislation. A new law giving a modern meaning to “town square” would be welcome.
There is a law. It's the Communications Decency Act, Section 230. Just some people don't like what it says so they're trying to use the courts to undermine it.
If we declare private, online publishing resources a "town square," that has wide implications and a lot of room for unintended consequences. Also, wouldn't this effectively be a "taking?"
There already is an internet "town square" that's been there since the beginning.
It's called usenet.
Hundreds of thousands of discussion forums, anybody* can post anything they want, and anyone* anywhere in the world could potentially read it. Almost nobody uses it anymore *BECAUSE* it's overwhelmed with trolls, spammers, assholes, etc. Careful what you wish for.
*excluding residents of China and other repressive regimes.
FWIW, lots of Usenet groups were/are moderated as well. Even sites like 8kun or 4chan that are perceived to be less/not moderated still have per-forum moderation. The idea of an unmoderated Internet basically doesn't exist, and as you point out the closer you get to it the less useful it becomes to everyone since it becomes overwhelmed with garbage.
See https://reason.com/1995/08/01/the-malls-in-their-court/
Shopping malls are private. Courts said they had to allow protestors and could not censor them because of their effective town square status.
Ironically, shopping malls are going away. Where are the modern legal town squares?
That is not what "courts" said. The Pruneyard case, to which you seem to refer, ruled that a specific California statute that required the mall to allow people to distribute literature in a way that did not interfere with the mall's message was not unconstitutional.
Since reporting is now coming out that the government had a hand in deciding what was "disinformation" and social media companies did as directed - that should have some bearing on the final decision.
https://gellerreport.com/2022/10/tech-companies-incl-twitter-facebook-reddit-discord-wikipedia-microsoft-linkedin-verizon-met-monthly-with-fbi-dhs-and-other-govt-agencies-to-coordinate-censorship-operations-during-2020-ele.html/
Legal questions aside it was quite something to see how leftists operate, as they became obsessed with censoring, banning, silencing Trump and any conservative voices who were just too irritating, or objectionable, or who in some cases were just too inconveniently effective and factual — all while openly and consciously allowing terrorist organizations, dictators, murderers, pornographers and all other sorts to freely and openly use their services.
Why do you people get mad at this? A private company stood up to the almighty power of government and said if we don't want you, we don't have to have you. Literally told the government to get out of its business.
Mad? I think it's enlightening and important to point out.
Remember how before retreating to "why are you mad about this" you were all swearing up and down that "this" was not happening?
Who was denying that Trump was banned from Twitter?
Listen, he's not mad.
He's actually laughing, this is totally funny to him, you're the one that's mad.
Please don't print that he's mad.
I don't know Sarcastro, you always seem to think someone is mad without much evidence, it almost seems like projection.
It's not like ML was telling someone to keep fucking a chicken or anything.
Social media is more like a private country club than a town square. You need a membership just to get on the grounds, private security enforces rules, and your membership can be revoked if you annoy the owners enough.
Forgetting this and insisting that it's a "town square" shows that you're an idiot.
And for that matter, standing on a street corner and handing out pictures of your dick to every woman and nine year old girl that walks past is a quick way to be hassled by the cops. So it's not even like the literal "town square" is as "free speech" as these idiot judges want you to think.
I like this analogy. It works well to focus the issues.
A country club, for instance, has rules of dress, behavior, and so on in order to maintain the atmosphere of the club to be something that the vast majority of its members want. That is what keeps them as members and paying for membership. Social media platforms need to be able to cater to the vast majority of their users in order to remain profitable.* If too much of a social media platform is seen as a toxic environment for the majority of users, then that platform will lose those users that don't like it and the business will suffer. We may see how this plays out in real time with Elon Musk's takeover of Twitter. Will users and advertisers stay with it if it becomes as 'free speech' friendly as the right wants it to be?
*Though rather than paying a fee for membership, users essentially agree to be eyeballs for advertisers and for their email addresses to end up on advertisers' mailing lists.
The issue that David Post is becoming vaguely aware of after reading the Fifth Circuit, I pointed out long ago on this blog.
While legislation by Congress doesn't determine free speech principles or the meaning of 1A, the stubborn fact is you're going to have some difficulty trying to maintain that:
1) Twitter et al's content is their speech in every conceivable way that benefits them.
2) Twitter et al's content is not their speech in any way, shape or form, when it does not benefit them.
That's just not going to hold up in such an open-ended formulation.
Trying to have things both ways, or being hypocritical more generally, is so common in politics that it seems to be a feature, not a bug. It certainly isn't limited to those on one side of the political spectrum. For instance, any comment on how Florida's law exempted platforms owned by companies with theme parks until Disney (belatedly) criticized a law pushed through by the state GOP and said they'd stop giving them money?
That is true about politics. I was commenting more on the law regarding speech and the First Amendment.
Me inviting you to my dinner party to talk to my guests is my speech, not your speech.
You ranting about Transformers is your speech, not my speech.
Me dis-inviting you and showing you the door because of your Transformers rant is my speech, not your speech.
Alternatively, me giving tacit approval of your rant by inviting you back next week is also my speech, not your speech.
When you own the podium, who you give access to it is expressive speech in and of itself. So is who you cut-off, who you politely let finish but never invite again, and who you publicly denounce and blacklist. None of that means that the speaker's speech is your speech.
Sure, there's a possible distinction there, but this hardly clears things up. Say you adopt a policy on your platform of removing "election misinformation." You remove a slew of such alleged misinformation, while allowing the "true" information, let's say it is diametrically opposite claims refuting what was removed. But later it turns out some particular information you removed was true and the refutation you permitted was false. Perhaps you even helpfully shepherded traffic and pointed people to the false information to help explain. What then?
Damage to reputation in the public eye, no legal liability.
Look, I get why you're trying to weasel around Section 230 by making the speech you don't like "really" Twitter's speech. But that's bullshit sophistry and you know it.
You want to make Twitter legally liable for the stuff Twitter's users say? Get rid of section 230. But this argument that Twitter is legally liable for all of Trump's lies and defamation that he spread on the site before they banned him is just never gonna work.
It's not that difficult at all.
Congress could pass a law saying that libel laws do not apply to any words published in a newspaper. No one would argue that by changing the law to make libel not apply to newspapers, Congress was somehow making newspapers not a form of speech/press protected by the 1st Amendment. The issue of liability is completely orthogonal to whether something is speech or not.
First of all, Congress didn't do that even as to interactive computer service providers. Section 230 just says broadly that a service provider will not be "treated as the publisher or speaker" of any information provided by someone else. There is nothing specific to libel laws.
I've noted before that, under Section 230, I think the NY Times could be an interactive computer service provider at nytimes.com by firing its journalist staff and engaging them as 1099 independent contractors. Now the NYT is no longer the speaker or publisher. That means no liability. But also, if they truly are not the speaker or publisher, then they would not have standing to defend free speech rights as to the content, because they are not the speaker or publisher. Of course, Congress cannot abrogate 1A rights, so to the extent this reading is contrary to the 1A, the plain text of Section 230 would be invalid.
Second, suppose Congress passed a law saying that libel laws do not apply to any words published in a newspaper. To begin with, Congress was not delegated any power in the Constitution to do such a thing, so it can't. But let's say it were. Could the law be made to say that you have a free speech right to say whatever you want, and also that you will have no liability for what you say, even for classically libelous or defamatory statements? Sure, anything is possible I suppose. But that would seem difficult, stupid, and very unlikely.
This isn't nearly as difficult as you seem to think.
Congress can't diminish a Constitutional protection. Not directly, and not indirectly by statutory redefinition of terms.
Congress can modify the provisions of other statutes and the common law, including tort liability.
It is remarkable that the 5th Circuit believes that "Congress' judgment" is in any way relevant to the 1st Amendment rights of an interactive computer service provider.
"Congress can’t diminish a Constitutional protection."
I agree of course, and I stated as much in the comment you are responding to. However the view reflected in Section 230 was that your communications on AOL are or should be like those on the telephone: they are yours, not AOL's or AT&T's, your business and not their business. This view is arguable but has some logic to it. But the basic fact remains that if you are going to try and argue that some particular speech is your speech in every sense that benefits you and not your speech in every sense that burdens you, that seems like an uphill battle.
Netchoice doesn't claim that its moderation isn't its speech, quite the opposite. You have synthesized a deeper rationale for §230 - that when it immunizes service providers from liability for this speech it is really proclaiming it non-speech - and then finding fault with your creation. The reality is simpler - it is speech, and §230 immunizes them anyway.
"It is remarkable that the 5th Circuit believes that “Congress’ judgment” is in any way relevant to the 1st Amendment rights of an interactive computer service provider."
I'm not quite sure what to make of the 5th Circuit's decision saying that censoring others' speech is not exercising their own speech; it's at least plausible.
But I'd like to see a state frame the question like this:
- The 1st Amendment protects the free exercise of speech of both platforms and users.
- Section 230 says providing a service does not make a platform a publisher, and only publishers can be held liable for others' posts.
- By moderating content and exercising "editorial discretion" beyond deleting profanity, obscenity, doxxing, and a few other well-defined exceptions, the platform has assumed the liability of a publisher, and has forfeited its Section 230 immunity.
Since as always states have control over their libel laws, and they are staying within the 4 corners of the 1st amendment and Section 230, then the major internet platforms can make their own decisions about keeping and voiding their Section 230 immunity.
You realize that this part is 100% a fabrication and not a part of the law as written or intended, right?
A 100% fabrication of what? It’s a proposal for states to clarify the safe harbor that section 230 provides to internet service providers.
Congress defined Section 230, and the courts have defined the first amendment.
The states have libel laws that the internet platforms, as service providers are immune to. I’m merely suggesting that the states make it clear when the Internet Platforms cross the line from pure service providers and assume the role of publishers, and forfeit their section 230 immunity.
The eleventh circuit has already suggested that they have crossed that line by using their first amendment protected “editorial discretion”.
There is no such thing as forfeiting section 230 immunity.
You forgot to underline and capitalize it. That would have certainly made it true.
There IS NO SUCH THING as forfeiting section 230 immunity.
EDIT: Sorry, I’ve tried several different things, and I’m not sure how to underline. The statement remains true, nonetheless. "Section 230 immunity" is not conditioned on anything. It simply is a proposition of law: websites are not responsible for third party content. Period. (There's no "unless they moderate too much" or "unless they're biased" or anything like that.)
Re: "well-defined exceptions" and losing immunity.
You seem to be focused on 230(c)(2)(A), which has the list of exceptions you reference. It has been discussed here many times, especially the meaning of "otherwise objectionable", and there are reasonable opinions (including Eugene's) that its protection is limited to certain types of moderation and that a platform may by its actions put itself outside that protection.
But there is also 230(c)(1), that says platforms will not be treated as publishers or speakers of 3rd-party content. It is very broad, with no similar list of restrictions.
Some have argued that these provisions overlap, but I think they are distinct in at least this sense: 230(c)(2)(A) immunizes a platform for its acts restricting content, or access to content, while 230(c)(1) immunizes it for content it permits. These are two sides of the immunization coin, and when you talk about limitations and forfeiting protection it makes a difference which you are referring to.
What gives the State of Texas the right to tell a Delaware corporation whose principal place of business is in, say, California, how to conduct its business in regard to the content it may (or must) publish? Doesn't that violate the principle that State power cannot be exercised extra-territorially? Doesn't the so-called "dormant Commerce Clause" prohibit the individual States from prescribing publication standards for these inter-State actors?
Those, too, are difficult and rather profound questions that are separate from the 1st Amendment questions raised by these cases, and I'll explore them in more detail in future posts.
They seem neither difficult nor profound to me.
Suppose I have a politically oriented radio call-in show, which reaches listeners in several states. May any of those states tell me what principles I can use to screen callers?
"Suppose I have a politically oriented radio call-in show, which reaches listeners in several states. May any of those states tell me what principles I can use to screen callers?"
Well, if the state in question is Texas or Florida they think they can. (c:
But I think you misread what Post is saying. How the dormant Commerce Clause might apply is neither simple nor straightforward and is orthogonal to 1A concerns. See National Pork Producers Council v. Ross for instance.
Maybe, though doesn't the 1A take priority? If no government can tell me what to do then we don't have to worry about whether state government can.
I guess - don't know - that it might be an issue with obscenity or other unprotected speech, but otherwise?
Yes. The 1A argument is far stronger than the dormant Commerce Clause argument and should be sufficient. But it's not unheard of to present weaker arguments along with your strong ones when arguing before the court.
bernard11: They seem neither difficult nor profound to me.
Suppose I have a politically oriented radio call-in show, which reaches listeners in several states. May any of those states tell me what principles I can use to screen callers?
Here's why it's tricky. I'm assuming you think the answer to your question is, self-evidently, "no." I would tend to agree.
That is indeed, though, what Florida and Texas have, in effect, done here - the only difference being it's not the radio but an internet transmission.
The problem is: If Texas has a lemonade labeling law, which requires all packages of lemonade to disclose sugar content, and you ship lemonade into Texas without labeling sugar content, your lemonade can be seized and you can be fined for violating Texas law.
So: Is sending radio waves into Texas, or sending signals over the Internet into Texas, like sending lemonade into Texas? I don't think it is - but it's not easy to articulate why it isn't.
It depends on how you advertise yourself.
At best, that can lead to a false advertising charge.
Nothing more.
"The impact of this immunity on the growth of Internet communications platforms cannot be overstated; it is hard to imagine what the entire social media ecosystem would look like if platforms could be held liable for hosted third-party content."
Yes, but the issue is that ISPs do not have to behave like "platforms" per Section 230. Instead, ISPs are both shielded from liability for 3rd-party defamation (as are platforms) and able to moderate content (as would a publisher).
The 1st Amendment does not protect libel/defamation, so it is not the 1st Amendment that prevents states from requiring ISPs to act as platforms; instead it is Section 230 as interpreted so far by the courts.
There seems to be a lot of tension between the 11th circuit reasoning:
"content-moderation" decisions constitute protected exercises of editorial judgment..."
And section 230:
"No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
Isn't that safe harbor of just providing a service breached when the platform is exercising editorial judgment over what appears on its platform? "Editorial judgment" is a function of a publisher.
Section 230 was designed to allow platforms to provide a service without the crippling expense of content moderation, but when a service provider voluntarily assumes the functions of a publisher, and asserts that defense when defending its decisions, it seems to me it is now a publisher, and not just providing a service.
To be fair, Section 230 was not enacted to allow platforms the ability to provide a service without content moderation, it was enacted to allow platforms the ability to provide a service WITH some (arguably, limited, discrete types of) content moderation, while still not having speaker or publisher liability.
Completely wrong. This couldn't be more wrong if it were a horseshoe crab trying to perform an interpretative dance of James Joyce's Ulysses. Section 230 was designed to encourage platforms to engage in content moderation. That was its primary purpose.
It was both.
At that early stage of the internet, platforms were gearing up, many on a shoestring. They had a choice: allow no user comments or forums, or allow fully moderated forums where every comment was approved in advance.
That, of course, was because if you allowed a libelous comment to be posted even for 15 minutes, you could be sued.
Congress passed Section 230 in order to allow platforms to choose their own models: freewheeling like Reddit or 4chan, curated comments, or something in between.
And sure it encouraged modest moderation, because without it you were left with either no comments, pre-approved highly moderated comments, or totally open message boards with pretty much no responsible party to sue.
No, that's also not right. See Cubby v. Compuserve.
The choice that companies faced was to fully moderate or moderate not at all. Section 230 was specifically enacted to make a third model — partial moderation — viable, and thus encourage businesses to undertake it.
And, thanks to §230, platforms can exercise that function without incurring liability for moderated content.
Remember, 230 doesn't claim they aren't publishers, it says they won't be treated as publishers of that third-party content. You seem to think they are crossing a line and becoming publishers, but whether they do or not doesn't disqualify them from 230 protection.
Has the "ejusdem generis" principle been ruled out for Section 230? I don't understand why most commentators ignore it.
Actually, the impact of Section 230 immunity on the growth of Internet communications platforms is quite easily overstated, since Cubby v. CompuServe had already decided that platforms could not be held liable for distribution of third-party content.
The only question is what impact did Section 230 have on enabling moderation policies in the face of the NY state court decision Stratton Oakmont v. Prodigy, which is the specific decision that Section 230 was added to the Communications Decency Act to overturn.
And the answer there is "probably not that much". After all, Cubby was based on Smith v. California (1959) limiting the obscenity liability of bookstores, and the idea that a bookstore's non-liability was voided by exercising as much discretion over the books it carried as Prodigy exercised over postings is fairly ridiculous. Stratton Oakmont was probably headed for being overturned (by a higher NY court or by the Federal system) even if the Communications Decency Act had properly been defeated.
And even if Prodigy's specific approach (which involved keyword pre-screening and kicking content to moderation before posts could be read by users) was too intrusive to avoid assuming publisher liability, most modern moderation systems are rather laxer, and thus easily could have fallen on the non-liability side of things, depending on how things shook out in the courts.
Worst-case, what would have happened is that we'd have wound up with more robust tools and APIs for user-level experience control (logical descendants of the Usenet newsreader "scorefile" systems already around in 1995).
No; worst case is that Stratton Oakmont was widely adopted, and sites with user generated content were smothered in their cribs. Stratton Oakmont didn't turn on how intrusive Prodigy's system of moderation was, but on its existence.
A bookstore's non-liability, like that of other distributors, is based on lack of notice. Once a bookstore is on notice of defamatory content in a book it sells, it is no longer immune from liability. In the context of the Internet, that would essentially require websites to take down any content someone reported to them as defamatory.
It's hard to see how anyone can look at the thousands of cases tossed on 230 grounds over the last quarter century and argue that 230 doesn't matter.
The same argument applies to newspapers that only publish a selection of the letters to the editor they receive, rather than publishing all of them.
You think the coordinated censorship between the Federals and the Silicon Valley Democrats is common?
I think it was a pretty bad thing that government officials were doing this. I think it's also legally irrelevant.
Which terms?
I found lists of articles using a simple Google search that cover this in a lot of detail. Twitter even published its own in-depth breakdown of what, exactly, Trump did to get banned. And for months prior, Twitter placed warnings alongside tweets that violated its TOS explaining why each tweet was problematic.
Perhaps you'll find reading about it enlightening.
I think it was a pretty bad thing that government officials were doing this. I think it’s also legally irrelevant.
If the government was using Google to censor anti-government opinion, that seems relevant. Assuming that the 1st amendment blocks the government's ability to censor anti-government opinion.
So, which terms?
You may find it enlightening to learn what Twitter has allowed the Taliban, CCP officials, Ayatollah Khamenei, other Iran officials, Russian officials, Louis Farrakhan, Linda Sarsour, Antifa and BLM groups and individuals, and many garden variety leftists encouraging violence and spouting endless falsehoods, to post on its platform.
Perhaps you’ll find reading about it enlightening.
It's incredibly enlightening. It shows the incredible amount of TDS that allows people to suspend all critical thought because "Trump Bad".
He was suspended for Glorification of Violence, for 2 specifically referenced Tweets, neither of which remotely requests, mentions, or incites anything violent whatsoever. Anyone who reads those 2 Tweets so hard that they find violence should be committed. The rationale given by Twitter is laughably nonsensical, and people have TDS so bad that it scrambled their brains into accepting that nonsense. I feel bad for you.
You would need a *lot* more evidence if you're going to make a government agent argument re: Google.
A government official suggesting that certain topics are disinformation is not "using" those companies to censor anything. I don't think the government should do that, but it's not illegal. If the government coerces them to remove the information, that would violate the 1A.
First, here's Twitter's explanation of why they banned Trump, including the relevant Tweets and why they believe they violated the policy.
As for the rest of the folks you list, did they use Twitter to do bad things, or are they just bad people? Twitter's policies apply to the use of the service, not how you behave in the rest of your life.
You may find it enlightening to learn that Twitter does not ban users for being bad people; it bans users for misusing Twitter. Yes, foreign leaders who are dictators and murderers can use Twitter. As long as they aren't saying those things on Twitter.
Also, Trump was given far far far more leeway than everyone else on Twitter. He'd have been off years earlier if he hadn't been POTUS.
Are you still a lawyer by profession, ML?
Read my comment again. It is what Twitter has allowed those people to post, and has allowed to stand, that is interesting. Not who they are. Trump (and other conservatives) were given far less leeway than anyone on the left and just anyone who isn't the target of their political ire.
Well David, it would be interesting to hear how the NY Post, for example, was misusing Twitter by posting its story on Hunter Biden's laptop.
Or maybe the Babylon Bee when they named Rachel Levine as Man of the Year:
"The Babylon Bee has selected Rachel Levine as its first annual Man of the Year. Levine is the U.S. assistant secretary for health for the U.S. Department of Health and Human Services, where he serves proudly as the first man in that position to dress like a western cultural stereotype of a woman."
And yes of course the Bee was trolling, just like USA Today was when they named Levine as one of their Woman of the Year, or when Biden appointed her the first four star "female" admiral.
Also, Trump was given far far far more leeway than everyone else on Twitter. He’d have been off years earlier if he hadn’t been POTUS.
What did he post that was worse than the AWFUL VIOLENCE he perpetrated in the 2 Tweets that got his account suspended? Surely, there's nothing worse than the actual physical violence of Tweeting "I will not be going to the Inauguration." The horrors. I think that awful violence probably killed billions of people. BILLIONS!!!
Y'all blinded by Trump. TDS got your brains scrambled like the "this is your brain on drugs" commercials.
Where? Yes, it's specifically the things those people have posted that are far worse than anything many "conservative" users have posted, by the standard of any terms of service, not who they are.
A government official suggesting to someone they regulate that certain people should not be allowed to peaceably assemble is unfortunate but it doesn’t impact the 1A.
A government official suggesting that someone break into a residence and steal information related to a crime isn’t good but it doesn’t impact the 4th amendment.
How am I doing? Let’s let them suggest their way around the bill of rights.
"Wow, this sure is a nice trillion-dollar social media company you have here. Sure would be a shame if something happened to it...."
That kind of suggesting?
Oops, sorry missed the link before:
https://blog.twitter.com/en_us/topics/company/2020/suspension
It would be interesting to see some examples of posts that you think should have resulted in suspensions from Twitter.
From the Intercept: “There is also a formalized process for government officials to directly flag content on Facebook or Instagram and request that it be throttled or suppressed through a special Facebook portal that requires a government or law enforcement email to use. At the time of writing, the “content request system” at facebook.com/xtakedowns/login is still live. DHS and Meta, the parent company of Facebook, did not respond to a request for comment. The FBI declined to comment.”
I think that clearly establishes a government agent relationship for Facebook and Instagram, although I think FB and IG policies probably allow some discretion.
You think this is censorship?
Because this looks to me to be more about like ISIS pages and child porn and the like.
You need more than 'the government can request a page be taken down' to prove Google acting as an agent of government censorship.
Not very well.
They don't talk about that kind of thing in the story, which would be a nothingburger of course.
This seems more problematic:
"In a March meeting, Laura Dehmlow, an FBI official, warned that the threat of subversive information on social media could undermine support for the U.S. government. Dehmlow, according to notes of the discussion attended by senior executives from Twitter and JPMorgan Chase, stressed that 'we need a media infrastructure that is held accountable.'"
"These soldiers are actually on leave and are covered under your Airbnb agreement..." (3rd A)
Cop: "You have the right to remain silent...
Def: "NOW WHAT had happened was..."
(5th A)
"It's cruel, sure, but it's actually become somewhat commonplace..."
(6th A)
No. What do you think could "happen to" the company?