The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
The First Amendment and Treating Social Media Platforms as Common Carriers
Another excerpt from my Social Media as Common Carriers? article (see also this thread).
[* * *]
I think this sort of common carrier rule [focused on the hosting function of social media platforms] would be constitutionally permissible, on the strength of three precedents:
- PruneYard Shopping Center v. Robins, which upheld a state law rule that required large shopping malls to allow leafleters and signature gatherers (a rule that has since been applied by some lower courts to outdoor spaces in private universities[113]);
- Turner Broadcasting System v. FCC, which upheld a statute that required cable systems to carry over-the-air broadcasters; and
- Rumsfeld v. FAIR, which held that the government could require private universities to provide space to military recruiters, alongside other recruiters.[114]
These cases, put together, establish several basic principles.
[1.] No First Amendment Right Not to Host
"Requiring someone to host another person's speech is often a perfectly legitimate thing for the Government to do."[115] So wrote Justice Breyer, and the cases he cited (PruneYard and Rumsfeld), as well as Turner, fully support that view. PruneYard expressly rejected the claim "that a private property owner has a First Amendment right not to be forced by the State to use his property as a forum for the speech of others."[116] Turner and Rumsfeld rejected similar claims.[117]
Even the district court opinion striking down the specific Florida social media access rules in NetChoice, LLC v. Moody noted that "FAIR and PruneYard establish that compelling a person to allow a visitor access to the person's property, for the purpose of speaking, is not a First Amendment violation, so long as the person is not compelled to speak, the person is not restricted from speaking, and the message of the visitor is not likely to be attributed to the person."[118] Likewise, I think, social media platforms may be made "a forum for the speech of others," at least as to their hosting function, and at least so long as the platforms (like the shopping center in PruneYard) are generally "open to the public" rather than "limited to the personal use" of the platforms.[119]
Rumsfeld also expressly rejected the claim that compelled hosting is a form of compelled association. The freedom of association protects an organization's right to refuse to allow someone to speak on its behalf, as the Court held in Boy Scouts of America v. Dale.[120] That freedom may entitle an organization to generally refuse "to accept members it does not desire."[121] But that freedom doesn't protect an organization's right to refuse to allow speakers onto its property:[122]
The law schools say that allowing military recruiters equal access impairs their own expression by requiring them to associate with the recruiters, but … a speaker cannot "erect a shield" against laws requiring access "simply by asserting" that mere association "would impair its message."[123]
In a sense, then, when it comes to statutorily created rights of access to social media platforms, the law would likely be much the same as what the Court held with regard to such rights of access to wire service stories in Associated Press v. United States:
[The First] Amendment rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public, that a free press is a condition of a free society.
Surely a command that the government itself shall not impede the free flow of ideas does not afford non-governmental combinations a refuge if they impose restraints upon that constitutionally guaranteed freedom. Freedom to publish is guaranteed by the Constitution, but freedom to combine to keep others from publishing is not.
Freedom of the press from governmental interference under the First Amendment does not sanction repression of that freedom by private interests.[124]
The Court held this with regard to the Associated Press, an alliance of newspapers, but its rationale would apply equally to a single mega-company.
Thus we see that:
- Under Associated Press, though the government may not tell wire services what to write or what not to write, it may constitutionally choose to require them to share their intellectual property with others.[125]
- Under PruneYard and Rumsfeld, private property owners who open up their property to the public (or to some segment of the public, such as military recruiters) may be required by state or federal law to share their real estate with other speakers.[126]
- Likewise, a legislature may tell social media platforms that they must (at least in some contexts) share their online "virtual estate" with others, on the same terms that it offers other users.
If social media are "the modern public square,"[127] the law may constitutionally treat them (at least as to certain of their functions) the way physical public squares can be treated.[128] The New Jersey Supreme Court's rationale for adopting a public access rule much like the one the California Supreme Court adopted in PruneYard seems largely apt here:
The private [shopping mall] property owners in this case … have intentionally transformed their property into a public square or market, a public gathering place, a downtown business district, a community; they have told this public in every way possible that the property is theirs, to come to, to visit, to do what they please, and hopefully to shop and spend; they have done so in many ways, but mostly through the practically unlimited permitted public uses found and encouraged on their property.[129]
Turner did mention that cable systems "exercis[e] editorial discretion over which stations or programs to include in [their] repertoire," and noted that "must-carry rules regulate cable speech" in part by "reduc[ing] the number of channels over which cable operators exercise unfettered control."[130] But such a reduction in unfettered control wasn't seen as by itself posing a serious First Amendment problem: Turner rejected cable operators' "editorial control" claims as an argument for strict scrutiny[131]—and when it applied intermediate scrutiny, it didn't view the interference with editorial control as a basis for potentially invalidating the statute.[132]
Nothing in the recent Janus v. AFSCME,[133] which held that the government may not require government employees to contribute to unions, undermined these holdings. Janus didn't discuss Turner or PruneYard, and mentioned Rumsfeld only for the narrow proposition that "government may not 'impose penalties or withhold benefits based on membership in a disfavored group' where doing so 'ma[kes] group membership less attractive.'"[134] And the compelled contribution cases, of which Janus is the most recent, have drawn a line between compelling people to fund the views expressed by a particular private speaker (such as the union in Janus) and compelling people to fund a wide range of views expressed by a wide range of speakers selected on viewpoint-neutral criteria (such as the student groups in Board of Regents v. Southworth[135]). A requirement that platforms host speakers without regard to viewpoint would be more comparable to the requirement that compulsory student fees go to student groups without regard to viewpoint (Southworth), or the requirement that shopping malls host speakers without regard to viewpoint (PruneYard), than to a requirement that employees fund a particular advocacy group (Janus).[136]
Nor does Wooley v. Maynard support a general right not to host. In Wooley, the Court held that requiring car owners to display the motto "Live Free or Die" on their license plates is unconstitutional.[137] But in Rumsfeld, the Court concluded that compelled hosting generally "is a far cry" from the compelled speech in Wooley,[138] even when a property owner (such as a university) is being compelled to allow a particular kind of government speech (such as military recruiting).
The Rumsfeld Court appeared to treat the Wooley mandate as involving a "situation in which an individual must personally speak the government's message," and not just a requirement that a "speaker … host or accommodate another speaker's message."[139] "Speak" is often used in First Amendment cases to refer not just to oral statements but to writing or to display of material: For instance, the Court in Cohen v. California viewed Cohen's display of "Fuck the Draft" on his jacket as "speech";[140] City of Ladue v. Gilleo described posting a sign in one's window as speaking;[141] and Wooley itself described Maynard's claim as involving "the right to refrain from speaking."[142] Likewise, the Rumsfeld Court seemed to distinguish between (1) forbidden compulsion to display a message on one's car, which is closely associated with the "personal" speech of the "individual" motorist, and (2) permissible compulsion to host speakers in rooms within an institution's building, however obviously those rooms may be associated with the institution.
To be sure, then-Judge Kavanaugh took a narrow view of Turner in his dissent in the D.C. Circuit net neutrality case, arguing that, under Turner, "the First Amendment bars the Government from restricting the editorial discretion of Internet service providers, absent a showing that an Internet service provider possesses market power in a relevant geographic market."[143] (Judge Kavanaugh did not discuss PruneYard or Rumsfeld.) His view was that even Internet service providers could not be constitutionally required to carry content they wish to omit. Presumably he would apply the same logic to social media platforms, unless he changes his mind about the matter generally or about whether Facebook, Twitter, and YouTube have market power.[144]
Justice Kavanaugh may thus well disagree with Justice Thomas's openness to possible common carrier treatment of social media platforms.[145] Where the other Justices are on the subject, though, is not clear. And my argument in this Article aims at interpreting existing precedents, not at trying to predict exactly how each Justice would likely apply those precedents.
[In the next post: Miami Herald, Hurley, and more; or you can read ahead here.]
[113] PruneYard Shopping Ctr. v. Robins, 447 U.S. 74, 88 (1980); Commonwealth v. Tate, 432 A.2d 1382 (Pa. 1981); State v. Schmid, 423 A.2d 615 (N.J. 1980).
[114] The statute in Rumsfeld required universities to host military recruiters as a condition of getting government funds. But the Court declined to rely on that spending hook, and held that "the First Amendment would not prevent Congress from directly imposing the Solomon Amendment's access requirement." 547 U.S. 47, 59–60 (2006).
[115] Agency for Int'l Dev. v. All. for Open Soc'y Int'l, Inc., 140 S. Ct. 2082 (2020) (Breyer, J., dissenting, joined by Ginsburg & Sotomayor, JJ.). The majority, which took a less speech-restrictive position than Justice Breyer did, did not disagree with him on this.
[116] PruneYard, 447 U.S. at 86.
[117] Turner Broad. Sys., Inc. v. FCC, 520 U.S. 180, 224–25 (1997); Rumsfeld v. FAIR, 547 U.S. 47, 70 (2006).
[118] No. 4:21CV220-RH-MAF, 2021 WL 2690876, *9 (N.D. Fla. June 30, 2021).
[119] 447 U.S. at 87. PruneYard also held that the California rule, under which shopping malls had to allow speech by members of the public, didn't implicate the Takings Clause. 447 U.S. at 82–85; cf. Szóka & Barthold, supra note 10 (suggesting that social media access mandates might implicate the Takings Clause). Later cases made clear that this was because "Limitations on how a business generally open to the public may treat individuals on the premises are readily distinguishable from regulations granting a right to invade property closed to the public." Cedar Point Nursery v. Hassid, 141 S. Ct. 2063 (2021). In this respect, social media platforms are again much like shopping malls.
[120] Boy Scouts of Am. v. Dale, 530 U.S. 640, 648, 653–54 (2000).
[121] Rumsfeld, 547 U.S. at 69 (quoting Boy Scouts, 530 U.S. at 648); Christian Legal Society v. Martinez, 561 U.S. 661, 680 (2010).
[122] Id.
[123] Id. (citations omitted).
[124] 326 U.S. 1, 20 (1945) (paragraph breaks added); see also Lakier & Tebbe, supra note 34.
[125] Id. at 4–5, 21.
[126] When I refer to the PruneYard doctrine, I'm referring to the U.S. Supreme Court's holding that states may, without violating the First Amendment, require shopping malls to allow people to speak on their property. I am not endorsing the California Supreme Court's earlier holding in the litigation interpreting the state constitution as securing such a right—that holding has only been followed by a few states, Batchelder v. Allied Stores Int'l, Inc., 445 N.E.2d 590 (Mass. 1983); Green Party v. Hartz Mountain Industries, Inc., 752 A.2d 315 (N.J. 2000); New Jersey Coalition Against War in the Middle East v. J.M.B., 650 A.2d 757 (N.J. 1994); Commonwealth v. Tate, 432 A.2d 1382 (Pa. 1981); Western Pa. Socialist Workers 1982 Campaign v. Connecticut General Life Ins. Co., 515 A.2d 1331 (Pa. 1986), and the U.S. Supreme Court has expressly held that the First Amendment itself does not require such a public access rule, Hudgens v. NLRB, 424 U.S. 507 (1976) (most recently reaffirmed in Manhattan Cmty. Access Corp. v. Halleck, 139 S. Ct. 1921, 1928 (2019)). But see Andrei Gribakov Jaffe, Digital Shopping Malls and State Constitutions—A New Font of Free Speech Rights?, 33 Harv. J.L. & Tech. 269 (2019) (arguing in favor of applying the logic of the California Supreme Court's PruneYard decision to social media platforms); Benjamin F. Jackson, Censorship and Freedom of Expression in the Age of Facebook, 44 N.M. L. Rev. 121, 145 (2014) (arguing that social media platforms should be viewed as state actors, because they "serve an important public function: providing a space that has the primary purpose of serving as a forum for public communication and expression, that is designated for that purpose, and that is completely open to the public at large"). Nor am I arguing in favor of applying Marsh v. Alabama, 326 U.S. 501 (1946), which recognizes a First Amendment right of access to the streets of a privately-owned "company town," to social media platforms.
Rather, the question I deal with is: If a legislature sets up a similar rule for social media platforms, would such a statute be constitutionally permissible? It is the U.S. Supreme Court's PruneYard decision, not the California Supreme Court's decision in that case, that most bears on that question.
[127] Packingham v. North Carolina, 137 S. Ct. 1730, 1737 (2017).
[128] For a similar analysis, see Philip Primeau, ESICA: Securing—Not Compelling—Speech on the "Vast Democratic Forums" of the Internet, 26 Roger Williams U. L. Rev. 160 (2021).
[129] N.J. Coalition v. JMB, 650 A.2d 757, 776 (N.J. 1994). I wouldn't use this as a basis for creating such a public access rule as an interpretation of a state constitution, much less of the federal constitution; but I do think this argument supports the view that a statutory right of access wouldn't violate the First Amendment rights of the platform owners, just as it doesn't violate the First Amendment rights of mall owners.
[130] Turner Broad. Sys., Inc. v. FCC, 512 U.S. 622, 636–37 (1994); see also Bhagwat, supra note 71, at 3–4.
[131] 512 U.S. at 653–57.
[132] Id. at 664–68. Neither did the Court in 1997 when it finally confirmed that the statute passed intermediate scrutiny. Turner Broad. Sys., Inc. v. FCC, 520 U.S. 180, 215–26 (1997).
[133] 138 S. Ct. 2448 (2018).
[134] Id. at 2468.
[135] 529 U.S. 217, 221 (2000).
[136] For more on this, see Eugene Volokh, The Law of Compelled Speech, 97 Tex. L. Rev. 355, 373–75 (2018).
[137] 430 U.S. 705, 717 (1977).
[138] 547 U.S. at 62.
[139] Id. at 63; see also PruneYard, 447 U.S. at 87 (distinguishing Wooley in part on the grounds that Wooley involved "personal property that was used 'as part of [the car owner's] daily life'").
[140] 403 U.S. 15, 18, 19, 21, 22 (1971).
[141] 512 U.S. 43, 56 (1994).
[142] Maynard, 430 U.S. at 714.
[143] U.S. Telecom Ass'n v. FCC, 855 F.3d 381, 418, 426–31 (D.C. Cir. 2017); see also Raymond Shih Ray Ku, Free Speech & Net Neutrality: A Response to Justice Kavanaugh, 80 U. Pitt. L. Rev. 855 (2019); Joel Timmer, Promoting and Infringing Free Speech? Net Neutrality and the First Amendment, 71 Fed. Comm. L.J. 1, 15–22 (2018). For more on the First Amendment and net neutrality, see Stuart Minor Benjamin, Choosing Which Cable Channels to Provide Is Speech, but Offering Internet Access Is Not, Volokh Conspiracy (May 1, 2017, 6:22 pm), https://perma.cc/R5F7-3FS6; Stuart Minor Benjamin, Transmitting, Editing, and Communicating: Determining What 'The Freedom of Speech' Encompasses, 60 Duke L.J. 1673 (2011); Susan Crawford, First Amendment Common Sense, 127 Harv. L. Rev. 2343 (2014).
[144] U.S. Telecom Ass'n, 855 F.3d at 433.
[145] Biden v. Knight First Am. Inst., 141 S. Ct. 1220, 1224 (2021) (Thomas, J., concurring). Note that, though Justice Thomas joined Justice O'Connor's dissent in Turner, 512 U.S. at 674, he did so because he thought the must-carry law was content-based, and expressly declined to join the part of the opinion that reasoned that the law was unconstitutional even under the scrutiny applicable to content-neutral restrictions. Indeed, Part III of that opinion, which he did join, (1) drew the link between certain content-neutral access mandates and PruneYard, and (2) suggested "that if Congress may demand that telephone companies operate as common carriers, it can ask the same of cable companies." Id. at 684.
Off topic, but would Janus apply to defunding schools that teach CRT against the wishes of parents and taxpayers?
Apply in what sense? School education is government speech. There's not usually a "parents and taxpayers" basis for standing to sue over government speech.
Welcome to China then?
CRT is emotional abuse of white children. It scapegoats white Kindergarten students for the effects of slavery after 150 years, which is none. All diversity training in the public schools should flood the child abuse hotline with complaints. They will demand the name and personal information of a specific child. Have that prepared ahead of time.
Wouldn't it be a lot simpler to focus the argument on the interpretation of "good faith" in the already-existing version of Section 230?
Based on the context surrounding its adoption, I think a reasonable argument could be made that "good faith" content moderation is limited to only the sorts of standards that enjoy broad CONSENSUS in society (i.e., stuff that almost everyone would regard as bullying or harassing; NOT typical right-vs-left debates about what might be deemed hateful, bigoted, etc.).
While that would be far short of any First-Amendment-level of protection for social media users, it would at least ensure that "right-left" differences - on debates about fact OR opinion - aren't the sort of thing that can be moderated by social media (without losing 230 protection).
I don't think that "enjoying a broad consensus in society" is consistent with the usual legal meaning of "in good faith". The former is more consistent with the ejusdem generis argument about "otherwise objectionable". Good faith would be about restricting content in biased ways, or in ways that are not consistent with the wording of the rules.
"...action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected"
Admittedly it's not a slam-dunk, because it does say "that the provider CONSIDERS to be..." But taken as a whole, the list of criteria there seems to indicate that we're talking about truly aiming to enforce consensus concepts here, rather than censoring live matters of public interest/debate (including debates about what does or doesn't constitute those very things...).
No, I think the statute is very clear that it's taking about "material that the provider or user considers to be obscene, lewd", etc. It is about those two parties' judgment, not the broader public's opinions. That's why I think the ejusdem generis argument -- that "otherwise objectionable" needs to be for a reason that is fairly close to the explicitly listed ones -- is a better basis for arguing that social media companies have gone beyond what Section 230 protects.
As a general matter, trying to replace that personal judgment standard with some kind of community consensus standard seems likely to run aground on the shoals of vagueness.
The problem is that it has to be a good faith decision that the content is "obscene, lewd", and such.
When Youtube restricts access to a Prager U video on the 10 commandments, because "murder" is mentioned, it's rather questionable that this is a good faith action, that Youtube actually thinks the video fits into the categories intended.
100%
It's actually quite easy to see a non-conspiratorial reason why a huge enterprise like Youtube might erroneously flag a video because it has the word murder in it. It's kind of a classic failure.
It's a little bit harder to see a non-conspiratorial reason they'd be erroneously flagging 25% of Prager videos, though.
Citation?
From what I've seen of Prager videos, I'd be surprised if it was only 25%.
But that aside, the PragerU Wikipedia page has a list of the issues Prager has had with its videos. Google places them in "restricted mode filtering" which limits viewership based on things like age. The videos tend to be filled with large amounts of misinformation, but lying and being wrong aren't, in and of themselves, good reasons to moderate videos.
"...lying and being wrong aren’t, in and of themselves, good reasons to moderate videos."
Lying and being wrong are not illegal and should not be illegal (other than specific prohibitions on lying such as fraud)
But do you really want the government to REQUIRE private companies to provide a soapbox for things that they think are "lying" and "wrong"?
That's what we're talking about here. If somebody wants to spread lies and misinformation there is a really big internet out there that will allow them to spew their crap. Why anyone thinks YouTube or facebook or Reason.com should be forced by the government to help spread lies and misinformation against their will is a mystery to me. Other that the creeping authoritarianism and anti-libertarianism that seems to pass as "conservative" these days.
"But do you really want the government to REQUIRE private companies to provide a soapbox for things that they think are “lying” and “wrong”?"
If we're going to be requiring things of private companies in the first place? Sure. I've seen what the WaPo considers a "lie" when Trump says something. I've seen the lengths Snopes can go to to declare a claim they don't like untrue.
The idea that judgements of "lying" and "wrong" are being made even remotely objectively is a joke.
Setting aside the nihilism underlying your comment — that we can't tell whether something is objectively false if Trumpkins are willing to lie about it — it's not even responsive. He said "things that they think are 'lying' and 'wrong.'"
No, it's obviously not questionable that it's in good faith. If this hypothetical thing happened, it would be the result of an automatic moderation setting that flags content that mentions murder. And while that's a stupid setting, it's not a bad faith attempt to get PragerU.
(And, again, Restricted Mode does not restrict access to videos; individual users' choices to use Restricted Mode does.)
You do realize that moderation algorithms are actually written, by people? And almost uniformly have extensive provisions for white and black listing, to correct mistakes?
Right- some of the original architects of the legislation specifically noted that they wanted to allow owners of sites to create the atmosphere that they preferred. Do you want a forum for conservative religious people to discuss issues without dealing with constant harassment from Atheists and liberals? They wanted owners to have the freedom to moderate exactly that way.
I don't like that social media is slowly being dominated by editors with a specific set of biases, but nevertheless, those same people will turn "must carry" rules around and use them against their ideological enemies. As those people create their own spaces to discuss, they will use the prohibitions on moderation to destroy those spaces as well.
This. Overt.
Online space isn't constrained the same way that physical space is.
There can be an infinite variety of social platforms, some which are peopled only by religious people discussing religious things without, as you say, harassment from atheists, other religions, etc. and some that keep out nutty political conspiracists or offensive ideologies.
You can't have these moderated comments on Volokh and also stop Facebook allowing only content that is consistent with the community they are trying to create. And I like that: if these comment sections suddenly lured a bunch of Nazi-loving, genocidal idiots, Eugene would...oh, well. He could kick off people who openly call for genocide. And I like that he could. Just as I like that, generally, he allows people to say anything. But this blog has a different mission than Facebook and that is okay. Better than okay. That's freedom.
Ah, but of course Volokh and other such sites could have whatever standards they wish - but just perhaps not the 230 protection. Be a publisher like a newspaper or network, or be a platform...
It's only when a site wishes the *special* protection that 230 offers, that they would need to limit themselves in the manner I suggest...
There could be a lot of social platforms, but network effects mean that most of them will never compete with the big ones. If a platform wants to exploit those network effects to be huge, it is harmful to the public if it uses that position to also suppress viewpoints that it has a proprietary dislike of. If it wants to enable limits in smaller groups, it could provide those tools -- but Section 230 should not protect it when it applies its own standards to two-thirds of the country.
"There could be a lot of social platforms, but network effects mean that most of them will never compete with the big ones."
Is this why MySpace is still so dominant?
The real issue here would be the bait and switch. FB never would have grown a fraction as large if they'd been up front about a desire to engage in political censorship. And they didn't engage in it during their growth phase, they waited to do it until they were already the dominant player.
Likewise for Twitter. And Youtube. None of these platforms were notorious for political censorship while they were growing, they only started doing it AFTER they were dominant.
Or maybe reports that it was doing so only got more notice after it was big?
No, Facebook's stupidly censorious decisions early on were actually consistent with what Section 230 allows -- they would ban things like pictures of breastfeeding because their moderators thought that was lewd or obscene. They weren't trying to punish wrongthink.
Based on the context surrounding its adoption, I think a reasonable argument could be made that “good faith” content moderation is limited to only the sorts of standards that enjoy broad CONSENSUS in society (i.e., stuff that almost everyone would regard as bullying or harassing; NOT typical right-vs-left debates about what might be deemed hateful, bigoted, etc.).
You exemplify the practical problem. Any attempt in Congress to make social media a common carrier would doubtless be amended to include bans on hate speech. Such a ban would be the poison pill rendering the whole thing (absent severability) a violation of the 1st amendment.
We need to say it again and again. It is the most offensive, the most hateful speech that needs 1st amendment protection. Consensus speech needs no protection.
"It is the most offensive, the most hateful speech that needs 1st amendment protection. Consensus speech needs no protection."
Agree 100%. I was addressing a non-first-amendment-based argument there, but I'm also quite open to the argument that social media has become a "company town" of sorts - in which case they'd be bound by the 1st Amendment like a state actor...
Also, just to clarify - even for my 230-based argument: I wasn't saying only "consensus speech" would be protected there. Rather the converse - i.e., there would need to be a consensus AGAINST the particular speech in question in order to moderate it. So if any significant portion of society disagrees, then no dice. (And again, this is quite distinct from a first-amendment analysis - where of course even the most fringe speech is indeed protected.)
The original context of Section 230 was that providers of services that permit members of the public to share their views with the public at large should not create a duty for the provider to police that content. In that time it made sense due to limitations of the technology available and the immense resources that would have been required to vet every single post.
Today it is possible to monitor posts in real time and "filter" based on any number of characteristics and therefore place a "thumb on the scale" of debate by deleting some content over other content even to the extent of promoting content on the fringes to discredit one set of views.
The "good faith" in this context should be seen as a light-handed approach and not used to "protect" people from disfavored views. The list in Section 230 seems a good starting point for the kinds of things that might reasonably be moderated. Just a year ago the Wuhan Lab Release Theory was considered by many to be a "conspiracy theory," but more recently it has come to be seen as a possibility which should be more carefully investigated, if only to eliminate it.
With regard to net neutrality: that should generally be seen as an anti-wiretap feature, preventing the service provider from monitoring a user's activity and then routing the user to a favored provider (say, Bing over Google), or from providing better service to some destinations than others—like giving priority access to Amazon, provided Amazon paid for "enhanced" access.
"The original context of Section 230 was that providers of services that permit members of the public to share their views with the public at large should not create a duty for the provider to police that content."
Right - so either be a platform and don't worry about moderating comments, or be a publisher and give up the special 230 immunity. But choose.
"The “good faith” in this context should be seen as a light handed approach and not used to “protect” people from disfavored views."
Right, nobody is protected from disfavored views - but my point was that users SHOULD be protected from *biased standards* in how the moderation occurs (if the provider wants 230 immunity).
So you want to repeal § 230. Have you thought through what would happen if you did?
If they want to ensure Court enlargement, the conservatives who decided Morse v. Frederick (Bong Hits 4 Jesus) should rule that Twitter and Facebook must publish bigoted, delusional falsehoods and calls to violence.
"Twitter and Facebook must publish bigoted, delusional falsehoods and calls to violence"
They do that already Rev. They're just liberal delusional bigoted falsehoods and liberal calls to violence.
Terrorist attacks, doxxing, vicious intimidation and protests at homes, have all been orchestrated on Facebook. No problem for the Rev.
These have all been orchestrated on literal paper, too.
And just as effectively as on social media. And with the same kinds of authorities turning a blind eye and silently approving. But actually, I think that most extremists today put Mein Kampf down a while ago.
Kirkloon still hopes for court enlargement because other enlargements have irrevocably ceased in his life.
Volokh's treatment of this topic is thorough and thoughtful. Unlike Somin's take that it's a "slippery slope".
It is one thing to have some sort of immunity through Section 230 IFF the "rules" are clear and well communicated AND free of government or political party influence. This precludes any notion of making it up as we go along.
Another problem is the whole concept of "safety". Do you ever notice that the censorship comes under the guise of "Trust and Safety"? It's absurd, especially when it is trust in the Democrat narrative or else.
" trust in the Democrat narrative "
Illiterate, bigoted, and superstitious is no way to go through life, clinger.
Time to renew my theory that the entity known as Kirkland is in fact a jar filled with the gall bladders of failed dictators that somehow gained sentience.
No way a human maintains such a consistent level of bitterness and disdain for critical thinking for this long.
You figure "Democrat narrative" is critical thinking? That's silly even for the average disaffected right-wing culture war loser.
Nobody figures that democrat narrative is critical thinking. Unless critical refers to the condition it is in.
I agree that Volokh's analysis is useful, and in the correct direction. He still fails to be against the setting of national policies by know-nothing bookworms on the Supreme Court, when they are not qualified to do that. Congress must do that in accordance with Article I, Section 1.
"If social media are 'the modern public square'" . . . .
We simply cannot make that assumption.
The public square notion concerns limited, physical areas where the owners had to decide who to allow access to speak.
The PruneYard mall is (was?) a real, physical space; a person had to physically be there to make a speech that is constitutionally protected.
That's why PruneYard has (had?) to follow constitutional standards when deciding who can use their LIMITED physical space.
PruneYard doesn't apply if the person is standing in a corn field 1,000 miles away.
There are zero physical limitations on Facebook, etc., and therefore Facebook does not have to worry about setting constitutionally compliant rules about who can use their "space," again because there is no physical limitation.
This actually cuts against Facebook and Twitter.
One of the concerns about the public space is that there is only so much space. If you've got a public square that is 100 square yards, you can't literally have everyone speak there at once.
Facebook and Twitter don't have that concern. Their "space" is virtually unlimited.
I don't get your logic.
You just said the same thing I did but came to an opposite conclusion.
If you have limited, physical space (the mall), then yes, you should create CONSTITUTIONALLY-COMPLIANT RULES about who/when a person can speak.
Facebook has zero physical space so they DO NOT have to create CONSTITUTIONALLY-COMPLIANT RULES about who/when a person can speak.
Yes, but your conclusion is illogical, Captain. The reason the small space may require constitutionally compliant rules about who can speak is that the space may require some rules for sharing out the limited space. The bigger the space, the smaller the need for any such rules.
"The bigger the space the smaller the need for any such rules."
Isn't that what I said?!?
"Facebook has zero physical space so they DO NOT have to create CONSTITUTIONALLY-COMPLIANT RULES about who/when a person can speak."
I think you're talking past each other. Armchair and Lee are saying that traditionally public spaces had to create constitutionally-compliant rules about how they restrict who can speak and when but that the mechanics of space and time limitations meant that whether they could restrict was presumed. They are further saying that because Facebook has no such physical limitations, there is first a question of whether Facebook can restrict at all. If the answer is no, then you don't even get to the question of how Facebook might constitutionally restrict.
Good summary of the talking past each other.
But Armchair and Lee are making a very authoritarian argument that no one can create whatever they consider a "safe" space on the internet. Want a community of knitters whose message board isn't inundated with Nazi propaganda? Too bad if the Nazis find your space appealing or just want to litter it with Nazi propaganda (and this problem will become worse as AI gets more sophisticated; you could speak on every corner of the internet no matter how unwelcome you and your message are). The knitters will have to go somewhere else until chased from that space. Ad infinitum.
Apedad is exactly right. The reason Facebook shouldn't have to make room for everyone is because everyone can make their own room. It is better that we can have rooms that are on topic and have rules of decency decided by the proprietor rather than by a government bureaucrat precisely because anyone can build a new space if they don't like the limitations at Facebook. The free market can decide what level of management is appropriate for various communities. Facebook's will be different from the knitters' whose standards will be different from the Christian apologists' whose standards will be different from the humanists' standards.
Lee and Armchair are advocating for anarchy on the internet, nobody gets to control anything. (Or alternatively or quasi-simultaneously, government controls everything.) That's not good. And they won't like it if they get it.
I would add that citing to PruneYard, wadr to our host, is a bad faith argument. PruneYard is a terrible decision that was the product of its time that courts have spent the last thirty years restricting to its facts, resisting any attempt to expand or apply it beyond its narrow holding. It would almost certainly not come out the same way today.
Are you suggesting that Justice Kagan will decline to die in a ditch defending Pruneyard ?
"...advocating for anarchy on the internet, nobody gets to control anything. (Or alternatively or quasi-simultaneously, government controls everything.) That’s not good. And they won’t like it if they get it."
More or less sums up the entire argument of the anti-230 crowd. Nice job.
I should have thought that distinction points us in precisely the opposite direction.
Imagine two public squares - a cute little square 25 feet by 25 feet, and the Piazza del Campo in Siena. The smallness of the small square increases the probability that one person's speech will interfere with another's. The largeness of the large square makes it very unlikely that one incremental speaker, or even twenty, will have any effect on anybody else. The more limited the space, the better the owner's case that it has to ration the use of it. So if there's a case for distinguishing squares, it would be to pruneyard the bigger ones.
I think you're mostly struggling with the concept of a virtual public square, rather than a physical one. This seems to me analogous to doubting that a discussion is going on here, with electronic dots, because we are not in the same room, flapping lips.
But the virtual public square is virtually UNLIMITED.
I (the platform owner) don't have to worry about rationing space and time.
Right, they don't have any such worry, they're not realistically going to run out of space. But, you said:
“Facebook has zero physical space so they DO NOT have to create CONSTITUTIONALLY-COMPLIANT RULES about who/when a person can speak.”
I think the question is what you mean by not having to create constitutionally compliant rules. That they don't have to create rules, or that they don't have to be constitutionally compliant?
Lee understands you to be asserting that they're free to create rules that AREN'T constitutionally compliant. That you're claiming they, because they have no shortage of space, ARE free to censor.
That's how I understand what you're writing, too.
Not quite, Brett. I'm not claiming that it's unconstitutional for Facebook to create any rules it likes for censoring speech on their forum. The Pruneyard question is whether the government - state or federal - can, without breaching Facebook's 1A rights, override Facebook's (or any other property owner's) censorship rules, by requiring it to host speech it doesn't want to host.
apedad's argument is that the size of the space makes a difference. I agree. The smaller the space, the better the space owner's argument that such a government override would infringe its own ability to speak, or the ability of others who the space owner is willing to host. And so the smaller the space the stronger the space owner's 1A argument against government compulsion.
And so Facebook, with unlimited space, has no 1A defense.
Right, I see you and apedad arguing the size issue cuts in opposite directions.
Personally, I don't like Pruneyard. Just because a property owner lets one person onto their property, doesn't mean they should be compelled to let other people onto their property, and just because they're not incredibly restrictive about what guests do, doesn't mean they should lose their right to tell guests to leave based on what they do.
I wouldn't mandate that platforms be common carriers. But I'd take away a great deal of the privileges Section 230 gives them if they refused that status. Because Section 230 doesn't guarantee rights, it grants privileges.
"Because Section 230 doesn’t guarantee rights, it grants privileges."
This is bizarre to me. 230 doesn't grant privileges. It declines to assign responsibility for others' actions. That isn't a grant of privilege any more than a tax deduction is a "grant" of money.
Under any normal application of common law, a business owner is not responsible for what happens on their property. If you and I plan a crime at a bar, the owner isn't responsible. And if he has a policy of kicking out religious nuts, that has never meant that he is suddenly a conspirator to our crimes.
230 merely clarifies this basic truth- when someone says something, THEY are responsible for that speech, not the owner of the forum where they spoke.
"This is bizarre to me. 230 doesn’t grant privileges. It declines to assign responsibility for others’ actions. That isn’t a grant of privilege any more than a tax deduction is a “grant” of money."
It grants the privilege of not being treated as the publisher of user content on one's own platform. And not being sued for limited sorts of moderation.
Not having responsiblity assigned for you where that would have been the default is indeed a legal privilege.
Personally, I don’t like Pruneyard. Just because a property owner lets one person onto their property, doesn’t mean they should be compelled to let other people onto their property
Sure, that is a proper defense of property rights. I am going no further than doubting that SCOTUS got Pruneyard wrong. Being required to "host" someone else's speech is not, in principle, a restriction on your right to speak, nor on your right not to speak.
Unless, in practice, it is - eg by absorbing speaking space, and time, you want to use yourself.
Sigh. I don't know how many times in how many ways I can convey to you that this is gibberish. "I will take away the protections of § 230 unless they agree to operate in such a way that they don't need those protections" is a nonsensical position.
• Before § 230 was enacted, a provider had a choice: no moderation+no liability, or moderation+liability.
• The entire point of § 230 — the sole and only function it serves — is to obviate the need to make that choice. Providers can moderate and still not have liability.
• Common carriers by definition don't moderate, so that protection does nothing for them.
Your position is like saying, "I will only give qualified immunity to police officers if they obey the constitution." Whatever one thinks about the wisdom of QI, this is a vacuous position. Officers who obey the constitution don't need QI.
" Before § 230 was enacted, a provider had a choice: no moderation+no liability, or moderation+liability."
It was my understanding that the first option was created by (c)(1):
"No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
Without (c)(1), you'd still have liability without moderation, because you'd be treated as the publisher of anything on your site, and could be sued for defamation if somebody posted defaming content on your site.
(c)(2) then protects you from liability for moderation decisions, so long as they're taken in good faith, for the specified reasons.
Without Section 230, you had liability coming and going: Liability for anything on your site, and liability for taking anything off your site. Without (c)(2), the only safe option would be refraining from moderation.
Sigh. This is why I keep trying to tell you that you're out of your depth in discussing law. And yet you don't listen, and just continue to spout the same wrong ideas. Your understanding is incorrect.
No. That's why I keep trying to explain to Lathrop that he's wrong when he calls these sites publishers. They are not publishers. They are distributors. See Cubby v. CompuServe. Distributors — newsstands, bookstores — are only liable if they have actual knowledge that they are distributing defamation and refuse to stop doing so.
They only had liability as publishers if they actually moderated. See Stratton Oakmont v. Prodigy.
But… to repeat what I said yesterday: they have to moderate. It's just not a viable business model to have a moderation-free website.
Like reason. Not viable.
So, you're arguing that (c)(1) is redundant, and could be repealed without effect, because it is absolutely indisputable that social media platforms are just distributors?
Got it.
My point was focused on this particular discussion about The First Amendment and Treating Social Media Platforms as Common Carriers.
Obviously, Facebook (etc.), must follow other laws and I don't mean they should have blanket immunity from all laws or the Constitution.
The argument is whether requiring a space owner to let people speak on its space is constitutional. The smaller the space, the better the owner's case that forcing it to open up may restrict its own use of that space. The bigger the space, the worse the case.
If Pruneyard's space was too big to successfully claim that there was a constitutional bar to requiring it to permit unwanted speech in its space, Facebook's virtually infinite space leaves them with no claim at all.
I think you have the wrong focus, Lee. You are focused on how much space Facebook has (i.e., "they have unlimited space, so people should be able to use Facebook's space however they want to"). I think it is better to focus on what space is available to everyone else (Facebook isn't keeping you out of anything other than a community of the willing; people should be able to form communities of the willing). You can build your own space without limiting the space available to anyone else to speak. This is what differentiates PruneYard, cable companies, phone companies, etc. All of those, to one degree or another, involved balancing the right of the company to control a limited space (everyone couldn't lay cable wires into all homes, the mall was a common area that limits what other people can do) whereas Facebook doesn't prevent anyone else from creating exactly the same platform in, essentially, exactly the same physical space.
The only interesting question, it seems to me, is the antitrust issue. And that seems a loser, again, because the barriers to entry are very, very low and there is competition from reddit to Fox News to Breitbart message boards to the Volokh Conspiracy...precisely because internet space is virtually unlimited. You do you. Let Facebook do Facebook.
I'm simply supporting the argument that EV makes, that a law to require Facebook to host comments it doesn't want to host would not be a breach of Facebook's 1A rights; based on the Pruneyard precedent, and on my view that Pruneyard was correctly decided - ie that being required to host speech you don't want to host does not prevent you from either speaking or not speaking. Unless in a particular case it does happen to do so. Hence the relevance of the size of the space.
I acknowledge that as a mere reader of English, and not a duly credentialed and commissioned lawyer, it is impertinent of me to offer an opinion on a legal matter such as this, but I confess that I don't feel very abashed about this.
I'm not making an argument that Facebook should be required by law to host comments it doesn't like. That is an issue for another day. Though supporters of that view will be buoyed by Justice Breyer's support, and by Justice Kagan's superglued commitment to stare decisis.
"But the virtual public square is virtually UNLIMITED."
No, that is absolutely not so. Parler was swept off the internet overnight. It took them a month to rebuild and they will never be as popular as before because their competition doesn't want them to be.
The internet is VERY finite, in fact way too finite now. A mere handful of companies control it.
Look, it's true that the internet is not literally unlimited. But if a mall with an occupancy limit in the thousands is big enough to be a public space, why wouldn't a site capable of hosting tens of millions be one?
So every website is a public space because they are all capable of hosting tens of millions? This can't be right.
I think the space argument cuts the other way. The mall is keeping people out of a physical space.
On the internet, you can go to a space that has rules you like. Or you can create your own space with rules you like. There are, practically speaking, infinite spaces available.
Practically speaking, it is impossible to build your own space on the internet.
The infrastructure required to host even a medium-ranked website is insanely expensive for an individual. You can save significantly on costs by not getting protected hosting or support, but then your website might as well not exist, because it will spend all its time either as a botnet host or DDoSed.
This is disproven by the existence of Facebook and Twitter and millions of individuals' blogs and websites. I mean, sure, it is insanely expensive for me to set up Facebook with millions of users. But that's not how they started. Someone else somewhere on the internet is setting up a social media platform right now with their own rules to try to compete with Facebook, Twitter, Instagram, TikTok, etc., etc. I mean. It was all Yahoo! and MySpace until Google and Facebook. Have things changed? Sure. But, fundamentally, you can build a better mousetrap and people will beat a path to your internet domain.
If there are aspects over which someone has true monopoly power or if there is illegal coordinated collusion to stop competitors arising, then, sure, regulate that aspect. But regulating the result rather than implementing regulations to lower barriers to invention and innovation seems to put the cart before the horse.
Entities like Facebook that set standards was foreseen by Section 230 and seen as a good thing. We like communities, online or otherwise, that maintain certain levels of decency. We like private entities to set those standards to the greatest extent possible and consistent with core principles of liberty, rather than a government because, once the government sets the standards, the power of the free market is muzzled and innovation is stifled. Government regulation is a last resort.
Here, conservatives are all too eager to use the blunt force of government regulation to solve the "problem" of their overreliance on disinformation ("Stop the Steal", etc.). They have the right to say it, but they don't have the right to force Facebook to distribute or amplify their lies. Similarly, Fox News or Breitbart can stop me posting things they don't like to their respective media platforms. The "problem" of bias in the mainstream news media was not "solved" by government regulation; it was solved by someone (multiple someones, in fact) recognizing a market niche with associated profit opportunity.
Now people that want to see [insert your most hated talking head]'s drivel can tune in. Those that don't can go somewhere else. If [your most hated] causes the platform to gain or keep business, s/he will continue getting space, if s/he starts saying things that costs the platform viewers or money or that are inconsistent with the core values of the owner of the platform, then they won't get the space. And that's as it should be.
Hosting a site on your own computer that would be accessed by a modest number of people is fairly easy. You're not likely to be DDoSed, because that requires people with the necessary resources to actually devote them to attacking you, and they've probably got higher priority targets than messing with random people.
And you're not going to end up as part of a botnet if you set up your permissions right, and your site isn't interactive. You can't be recruited for a botnet by read only access, after all.
Yes, it does get more expensive once you're big enough to be a target.
So I have to let someone stand in MY yard and speak even though I don't agree with what he is saying? Are there any private property rights left anymore?
No.
The cases cited involved publicly-accessible areas (a mall, a university, a cable system), and not a non-publicly-accessible area like a private home.
So the answer to "So I have to let someone stand in MY yard and speak even though I don’t agree with what he is saying?" comes down to: have you opened your yard to the public or some segment of the public for other purposes?
No, because you haven't made your yard a public space.
If you have for some reason, then different rules may apply.
What if I'm having a yard sale? My yard is open to the public for the duration, so am I required to allow speakers?
Maybe. It depends on what other things you are allowing in order to encourage people to come buy your stuff. In PruneYard, the property owners had a history of allowing and even encouraging pretty much anything.
I'm not thrilled that PruneYard made it an exercise in line-drawing rather than an unambiguous right one way or the other, but Prof. Volokh and Armchair are accurately describing the current precedent.
Depends on the context.
If you have a history of allowing speakers during your yard sales, then maybe.
If you don't, then you don't.
If Facebook is a common carrier, wouldn't gmail/yahoo mail/hotmail also be considered common carriers? And if so, wouldn't that make spam filters illegal?
I really don't like the idea of that.
Not necessarily. Phone companies are considered common carriers, yet you see them blocking mass robocalls and such.
Clarification: phone companies are required to block robocalls - it's not their choice.
https://www.upi.com/Top_News/US/2021/06/30/robocalls-fcc-deadline/9561625019063/
Currently true but it was their choice long before the passage of that rule.
To Mips' original question, no this would not make spam filters illegal. Spam filtering is a service offered to the reader, not a refusal of service to the sender. Note that in most of the services you describe, your spam mail is still delivered - it's just autofoldered in a "junk" category for you to review or ignore as you see fit.
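The distinction drawn here (filtering as a service to the reader, not a refusal of service to the sender) can be sketched in a few lines. This is a toy Python illustration with made-up rules, not any real provider's pipeline: the point is simply that every message is delivered somewhere, and the classification only decides which folder it lands in.

```python
# Toy sketch: spam "filtering" as auto-foldering for the reader.
# Every message is delivered; the (hypothetical) rules only decide
# which folder it lands in, and the user can move it back later.

def folder_for(message, junk_keywords=("free money", "act now")):
    """Return the folder a message should land in; nothing is dropped."""
    body = message["body"].lower()
    if any(kw in body for kw in junk_keywords):
        return "junk"  # still delivered, just set aside for review
    return "inbox"

def deliver(mailbox, message):
    """File the message into the appropriate folder of the mailbox."""
    mailbox.setdefault(folder_for(message), []).append(message)

mailbox = {}
deliver(mailbox, {"body": "Act now for FREE MONEY!!!"})  # -> junk
deliver(mailbox, {"body": "Lunch tomorrow?"})            # -> inbox
```

A sender-side refusal, by contrast, would drop the message before delivery; that is the legally and conceptually different case.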
And it is noteworthy that for a long time, wireless carriers have NOT blocked certain spam calls, precisely because they are concerned about being sued, and despite the fact that their customers have been very critical of it.
I have worked for one of those large companies, and it is amazing how this- which should be a product decision- turned into a meeting with one engineer and 10 lawyers.
OTOH, I sat in the same meeting at an email provider, and it was 10 engineers and never a single thought of a lawyer.
"And if so, wouldn’t that make spam filters illegal?"
Trucking companies do not have to accept rotting fish or explosives.
Plus, isn't it the user who controls the filter?
Well, I know Gmail throws a lot of things into my spam folder without my having set up rules that would apply to them. But, I am able to set up rules, or individually act, to yank things back out again.
Is Gmail actually blocking anything from reaching me at all? Don't know. How would I know?
About 85% of email traffic is spam. There is a collaborative effort on the part of most email providers to nuke this stuff before it even gets to your inbox. So, yes. There's a lot of crap that never gets to you; the ones that make it to your spam filter are the ones that gmail thinks may possibly not be spam.
Without this filtering, email would have become useless a long time ago.
This is not a right or left thing. Without some sort of moderation, the open nature of the internet makes any social media platform useless due to the spammers and trolls. The libertarian in me says that the free market will do a better job of the difficult task of moderation than the government.
"There is a collaborative effort on the part of most email providers to nuke this stuff before it even gets to your inbox."
That actually makes me somewhat nervous. I don't mind Gmail placing emails directly into my spam folder, and deleting the contents after a month, because that means it's at least possible for me to dispute the decision.
But given that most tech companies have, shall we say, radically different politics from mine, I could easily see this collaborative effort being warped to political ends. For all I know it already is being so warped.
The technology behind this is actually quite robust. Despite their...distasteful...ban-hammering in recent years, companies like Yahoo and Google were very, very sensitive about getting a reputation for failing to deliver email. It was a rational product decision to ensure that spam filters did not get "wanted" email.
A lot of this is not qualitative filtering, but very quantifiable. One of the biggest spammer tactics is forging the identity of the spammer. And so TONS of spam is blocked because the sender cannot cryptographically prove his identity. This has made life difficult for marketing companies that send email on behalf of a client to their customers, and for small home-run list-serves that redirect posts to email.
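The "cryptographically prove his identity" step can be sketched in miniature. Real DKIM uses public-key signatures, with the verification key published in the sending domain's DNS; the toy below substitutes a shared-secret HMAC from the Python standard library purely to show the verify-or-reject flow, and all keys and messages are invented for illustration.

```python
import hmac
import hashlib

# Simplified stand-in for DKIM-style sender verification. Real DKIM
# signs selected headers and the body with the domain's private key;
# this toy uses a shared-secret HMAC just to show the flow.

def sign(secret: bytes, message: bytes) -> str:
    """Produce a hex signature over the raw message bytes."""
    return hmac.new(secret, message, hashlib.sha256).hexdigest()

def sender_verified(secret: bytes, message: bytes, signature: str) -> bool:
    """Accept the message only if the signature checks out."""
    return hmac.compare_digest(sign(secret, message), signature)

secret = b"example-domain-key"  # hypothetical domain key
msg = b"From: news@example.com\r\n\r\nMonthly newsletter"
good_sig = sign(secret, msg)

legitimate = sender_verified(secret, msg, good_sig)   # True: deliver
forged = sender_verified(secret, msg, "f" * 64)       # False: quantifiably spam
```

This is why forged-sender spam can be blocked on a purely quantitative basis: the signature either verifies or it doesn't, with no judgment about the content.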
One man's spam is another man's solution to his tiny penis.
Imagine a long post here explaining that my company's email servers do not block SPAM but leave it to the individual users to decide what email they want to receive and provides them with the tools to filter to their standards.
When I went to post it Reason told me I needed to be logged in but when I tried to log in Reason told me I was already logged in.
You are probably confusing that with the tons of spam that get blocked along the way.
Unfortunately the spam filters also discard a fair amount of legitimate business email. From the experience of my publisher I'd estimate that at least 15 - 20% of emails get vectored into spam folders.
Around 1996-1997 judges let email providers block spam. "Cyber Promotions" was the name of the spammer party to the lawsuits. This was fairly early in the history of spam. The "Green Card" spam on usenet was in 1994. I consider that the origin of spam, though not the first unwanted commercial message sent online.
I can confirm. Usenet is where the Monty Python skit was used to describe the habit of posting and cross-posting off-topic messages, often repeatedly. It did not have to be "commercial" to be considered spam. Spamming a newsgroup was considered rude. It wasn't until later, once lawyers and grifters got involved, that the meaning shifted more towards "unwanted commercial email."
Usenet still exists and should be a cautionary tale for anyone who favors "must carry" rules for many-to-many online interactions. Many (most? almost all?) newsgroups devolved into flame wars and trolling since there was nobody in charge to implement moderation.
Treating social media platforms as common carriers may well be constitutional. But it's most likely really really bad public policy.
For one, the reason to treat a service as a common carrier is usually that the service is a monopoly, or a near monopoly: the cable company, the electric company, the gas company, the phone company. It makes little sense to have dozens of gas lines, phone lines, etc. going to everyone's house. But no social media platform is a monopoly by any stretch of the imagination; there are literally thousands of internet discussion fora out there, including this one. Get thrown off of Facebook? Thousands of other places to spew.
The second reason is more practical: no social media platform can last long without some form of moderation. Trolls, spammers, and shitposters will overrun any site that is not moderated somehow. This is not a right or left thing; it's just a fact of the anonymous open internet. So a question for the libertarians: who will do a better job at the difficult task of moderation, the free market or the government?
And the third obvious reason is that if you take sec 230 protection away, most discussion fora and comments sections will simply fold because they can't accept the legal liability with the result of **much less** free and open dialog.
Careful what you wish for, and please try not to let your tribal identity and emotions dictate your stance on this issue.
"tribal identity "
Facebook and twitter use their tribal identity to reduce conservative views. They made their choice.
Assuming they do (a contention belied by the constant presence of conservative-leaning memes at the top of facebook's most-shared lists) ...
So what? You're just re-raising lame tribal grievances as a rejoinder to constitutional and public policy arguments.
"a contention belied by the constant presence of conservative-leaning memes at the top of facebook’s most-shared lists"
Look, that's just evidence of the fact that, if conservatism wasn't being artificially suppressed, it would be totally dominant on these platforms, and naturally so, because America is a conservative nation.
Just like conservatism ended up dominating AM talk radio. It wasn't because liberals were banned from AM radio, they weren't. It happened because liberals weren't in a position to ban conservatives from it.
The social media platforms, too, were dominantly conservative until the left got into a position to start banning what they didn't like. If the left loses their ability to censor major internet platforms, the internet will return to being dominated by conservatives.
". . . because America is a conservative nation."
Going back to 2000, the Dem. presidential candidate won the popular vote in every election except 2004 (when Bush did).
Also, you guys whine about how these liberal groups have large influences:
Press
Media
Education
Deep Govt.
RINOs
Unions
LGBTQ
Hollywood
And yet, here you are saying we're a conservative nation.
Can you guys get together and get your stories aligned?
Seriously, apedad. Brett's comment practically refutes itself. But anything to be the victim of some giant conspiracy. Why let a few facts and a little logic get in the way?
That is some classic Brett reasoning. I mean, you don't think conservative dominance of AM talk radio had something to do with the different demographics of liberals and conservatives? I guess liberals dominate other areas of entertainment and journalism because America is a liberal nation?
Conservatives dominated AM radio because (1) radios -- in cars, transistors -- were cheap, downscale devices; (2) conservatives could spend their afternoon hours listening to radios; and (3) conservatives were looking for content because newspapers, television news, movies, comedy and other forms of information-based or comedy-drama-history entertainment were generally produced by and for an educated, reasoning, modern audience.
I'm sure the classic Rev reasoning over here makes way more sense to princess Analthea...
My tribal grievances are not lame. Yours may be.
EV has advanced the constitutional argument. No need for me to badly rehash.
I don't care about the policy arguments. They are beside the point IMHO; this is strictly a political issue.
My point is taking sides gets a response from the aggrieved side.
"I don't care about the policy arguments. They are beside the point IMHO; this is strictly a political issue."
Bob's life mantra.
There aren't thousands of other social media properties with the reach of FB or Twitter. There aren't even a dozen.
That aside, monopoly isn't a required characteristic. If it's a place of public accommodation, that's enough.
What the media companies want to do is entangle their own opinion with the opinions of their customers, and where there is disagreement, forbid their customers' voices. That is where the problem lies.
There aren't thousands of other newspapers with the reach of the NYTimes or the Washington Post. There aren't even a dozen.
How does your argument apply to "more traditional" media that are quite clearly within the ambit of the 1st Amd.?
Newspapers aren't places of public accommodation. That point was covered in the article.
Physical newspapers are covered in the article. Digital fronts for newspapers should fall afoul of the same reasoning as Facebook. Virtually unlimited public accommodation.
And the third obvious reason is that if you take sec 230 protection away, most discussion fora and comments sections will simply fold because they can’t accept the legal liability with the result of **much less** free and open dialog.
Clem, I liked your comment, but at least tentatively disagree with that. It is far from clear that economies afforded by internet publishing do not permit sufficient budget to edit comments. The New York Times edits literally thousands of comments a day—because they choose to, not because it is required—presumably at trivial cost compared with their revenues.
Generally, editing to avoid liability can be made far simpler and quicker than many folks who offer warnings similar to yours seem to suppose. For one thing, opinions get off free; you can't be accused of libel for publishing an opinion. What is about 99% of internet commentary? Easily recognizable opinion.
No legal expertise required to read and okay that stuff. Not much time either. One editor could probably do at least a couple thousand a day. Think what that means in terms of staffing requirements for a blog such as this one. If it is profitable, probably not much.
Factual assertions are another matter, but most of those don't implicate libel either. Somebody says the world is round, someone else says, well, technically it isn't quite round—that isn't a close call on liability. It isn't any call at all on liability.
So in the end, only a tiny fraction of factual assertions raise any potential for liability. What is the best way to deal with those? Depends on the publisher. Many will simply say put those aside, for the sake of safety.
Note that at no point so far has anyone required the services of a lawyer. And just following those policies would support almost every internet comments section with essentially no discernible differences in what actually gets published. It's almost all opinion everywhere.
To the extent that such rules of editing might prove impractical—maybe in the cases of the largest internet giants—so what? Almost every reasonable objection to their practices—and there are plenty of those—would be ameliorated if they were forced to shrink to stay in business. The nation could do far worse than to adopt a policy which fostered much greater diversity and profusion among private internet publishers. In fact, the nation is doing far worse now, with its support of internet giantism among publishers.
I think this is spot on. If these companies restricted themselves to policing content that was potentially illegal, they would have a far easier time of it. For one, the definitions of illegal content are largely objective.
But this is not what these companies are doing. They are policing based on their own opinions of what they want their customers to talk about. They are in the suppression business. They're editorializing, not just policing.
That's why it's so durn expensive.
"The nation could do far worse than to adopt a policy which fostered much greater diversity and profusion among private internet publishers. In fact, the nation is doing far worse now, with its support of internet giantism among publishers."
Stephan, this time I have to agree with you.
Youtube gets over 700,000 hours of new content every day. Editing that (and that doesn't include the text and comments) would require 700,000 hours of editor eyeballs per day.
There are 1.8 Billion active users on Facebook per day.
UPS, FEDEX?
You could set up your own delivery company if you can raise enough capital, or reintroduce DHL service to the US.
I now see a lot of Amazon trucks around.
Yup some things are easier than others. FB made its initial money and still makes a lot by selling subscriber personal information. That is not just simple advertising revenue.
I find most of the defenders of FB disingenuous at best. And I see little acknowledgement about how much they yielded to ideological and political pressure from elected politicians.
Right. And for that reason it would make much more sense for ISPs rather than ICSs to be deemed common carriers, but conservatives (and libertarians) vehemently attacked that policy. (It was called "net neutrality.") So now we learn that the only reason conservatives opposed it was because liberals supported it.
"the only reason conservatives opposed it was because liberals supported it."
Another bogus analysis because it assumes that bandwidth is free and that the concept that you pay for what you use is somehow unfair.
Once 5G predominates and is part of every tablet, computer, and iPhone, you will see that the increase in traffic will inevitably drive the cost of service up for all.
For all the blah, blah, I have yet to see a compelling empirical analysis of how common carrier status has damaged phone service, delivery services, etc. The claims all seem to be spun out of people's heads.
Net neutrality did not challenge the concept that you pay for what you use. It stood for the proposition that everyone would pay at the same rate, not the same amount.
Prof Volokh, you don't discuss Cedar Point Nursery. (Or if you do, I apologize because I missed it.) I assume that you would distinguish it because unlike PruneYard, the employer did not open the property to the public. But is it really that simple? Would a paragraph (or even a footnote) making the distinction explicit be helpful to your readers?
Rossami: I tried to briefly distinguish Cedar Point, precisely on that ground, in footnote 119, quoting the very distinction you point out: "Limitations on how a business generally open to the public may treat individuals on the premises are readily distinguishable from regulations granting a right to invade property closed to the public." Is there something missing in that?
Ah, thanks. That's perhaps more succinct than I was expecting but nothing is missing from that footnote (except maybe the blinking lights and neon bolding to help me see it).
Cedar Point Nursery is a Takings Clause case about a physical occupation of, as you say, property that is not open to the public. Do you think that Facebook is somehow "occupied" by allowing comments to stay up rather than deleting them?
Cedar Point Nursery was decided as a Takings Clause case but all that was taken was the right to exclude someone from their property. To the extent that you consider Facebook a private operation that only allows invited guests (it's a stretch but you do generally have to log in to see most content so maybe), yeah, they are "occupied" if they are compelled to allow visitors and speech that they would otherwise prohibit.
To use Facebook you have to agree to their terms of service...
Can anyone please show me in this giant PruneYard compendium any place where Professor Volokh acknowledges any salience at all for press freedom as a separately enumerated constitutional right? He seems to want to treat the whole internet publishing phenomenon as a mere appendage of rules for expressive freedom in forums made of real estate.
I see that as in keeping with EV's previous tendency to treat press freedom as a mere instance of speech freedom, without giving it much credit as an independent right subject to its own rules of enforcement. This makes it look like under pressure of right wing complaints about internet platforms, EV is ready to write press freedom out of the 1A altogether.
It's not that press freedom is a mere instance of speech freedom.
It's that press freedom is the freedom to use 'printing presses', not the freedom of some institutional 'press'. It's the same sort of right as speech freedom, only the modality changes.
The outlets that took to calling themselves 'the press' after the 1st Amendment have no special constitutional significance; they're just making more extensive use of a right shared by everybody.
"It’s that press freedom is the freedom to use ‘printing presses’, not the freedom of some institutional ‘press’. It’s the same sort of right as speech freedom, only the modality changes."
That's not at all what it means. The Constitution is not limited by technology changes, nor does it enshrine specific technologies in its limits on government.
The "press" as it meant at that time was the act of gathering, editing and publishing information of any nature...
Brett, you got that line of argument from Professor Volokh, when he was disparaging press freedom. It is historically mistaken, and it isn't a close call.
[Citation needed.]
For all I know, he got it from me, I'd have told you that before I'd ever heard of the Volokh conspiracy. But that's been the dominant understanding of the 1st amendment for something like 230 years now.
It's understandable that journalists would want to be "the press" of the 1st Amendment. But they aren't, and never were. The printing press was "the press."
Clingers don't like mainstream institutions. Newspapers, universities, broadcasters, movie studios, internet platforms. They try to build separatist organizations, which usually fail (the right-wing AARP, the right-wing teachers organization, the right-wing ACLU, etc.), although narrowcasting (television, AM radio) has been profitable and the judge- and clerk-grooming operation (Federalist Society) has been productive. So clingers generally criticize and try to damage mainstream institutions.
Mainstream response: What did you guys say? We're busy winning the culture war, shaping progress against your wishes, and maintaining the institutions that are most influential in modern America.
Busy winning the culture war, such as in areas like gun control, border policy, and recent Supreme Court opinions on social media as common carriers... Unfortunately you will avoid reason when it hits, because you will be too busy whining offline.
Also busy whining in the culture war like the woketards recently, who are starting to fall out of favor even at Universities.
I think your definition of busy winning the culture war looks like the Chinese government in Hong Kong. The US won't have that for long though.
I’ve never had to agree to terms of service before entering a mall. I’m still wondering what the TOS for Facebook et al say on these points.
But you are also subject to that mall's rules of conduct, written or otherwise and subject to ejection for violating them...
Darwinnie, sure you did. No shirt, no shoes, no service.
I agreed to a modicum of modesty? Probably so. We need something similar for internet bulletin boards.
Terms of service are posted at the entrances, or in the common areas, of many malls.
Has anyone ever had to check a box that they’ve read and accepted them before walking in? Maybe at a ticketed venue like a museum. But a mall? Someday I’ll look for these TOS; I’ve never seen any. But I’ll be surprised if they don’t say something like I agree not to harass other visitors.
It's typically posted by the door, you consent by going through, a sort of "shrinkwrap" consent.
Lots of highly educated folks twisting the plain meaning of the 1st Amendment to allow government to do what it is not allowed to do: force the owners of private property to follow government diktat.
If I allow anyone on my property, it is only if they follow my rules of use of that property. If they fail to follow those rules of use they are forbidden to use my private property.
This violates no one's free speech rights, rights to use their own property to speak or to speak on public property.
The SCOTUS jurisprudence that deems private property public property is an attack on property rights, anti-capitalist and illiberal. A truly originalist SCOTUS would strike down every "public accommodations" section of every relevant piece of federal legislation as a Takings violation...
I know we're talking about the legality rather than desirability, here, but the outcome of Turner Broadcasting System v. FCC was not desirable.
The concern at the time was that liberal cable operators might decide to keep a more conservative start up off of their cable systems (Fox Broadcasting was relatively new at the time) and Turner was a liberal. Since there was no way around the local cable monopoly, it was thought this might skew public opinion more to the left, and so the Republicans in Congress wanted to force your local cable company to carry broadcast stations like Fox.
Fast forward to 2021, and the internet and the streaming revolution mean that your local cable company isn't a monopoly anymore. In fact, the cable industry is being beaten badly in the market by streaming, and one of the big reasons may be that cable operators are forced by this case to carry local broadcast networks. They can't compete with streaming services that aren't required by law to carry those local broadcast signals and pay those carriage fees. Services like Sling and Philo will give you all the cable stations you like for $20 - $30 a month without local broadcast networks.
Before the streaming revolution, local broadcasters were raking in those lucrative, legally mandated carrier fees, and the broadcast networks bought out their local affiliates in markets with high cable subscription rates for that reason. Now Disney, CBS, NBCUniversal, and Fox have banded together to sue Locast, an app that streams your local broadcast stations to you for free. The local broadcasters are actually required to broadcast their signals for free, anyway, as a condition of their license--in exchange for not having to pay for spectrum. But if you're not on cable, you're not paying those fees.
As that case against Locast makes its way to the Supreme Court, we may see Turner Broadcasting System v. FCC revisited in various ways. For one thing, now that your local cable company is no longer a monopoly, the common carrier obligations that arose from a situation that no longer exists may no longer apply. The other open question is whether what amounts to common carrier status--in this situation--really serves consumers' interests or instead protects monopoly rents.
Locast alleges that local broadcasters have replaced their old equipment with newer equipment that broadcasts a weaker signal. They allegedly do this so that fewer people can get their broadcast signals for free--and watch their local football team for free. More people being relegated to cable, where they're forced to pay those lucrative carrier fees, means more carrier fees for the broadcast networks at the expense of consumers.
If the broadcasters are using what is supposed to be a remedy for a local monopoly situation to force consumers to pay for things they wouldn't be paying for otherwise, based on Turner Broadcasting System v. FCC, then you might not want to bet on Turner Broadcasting System v. FCC surviving this process. Cable companies should not be required to carry those broadcast signals any longer.
I'm on board with the idea that common carrier obligations arise from monopolistic situations, but cable companies aren't a monopoly anymore, and the remedy--in this situation--appears to be worse than the disease. Will the same sorts of problems play out over time if common carrier were applied to posts on social media? I don't know for sure.
I do suspect, however, that if we force social media companies to carry speech that advertisers find objectionable, advertising-driven, quality social media--for free--may not have much of a future. Facebook, Twitter, and YouTube are primarily advertising delivery platforms, and if advertisers aren't willing to pay to have their advertising appear near certain speech--and the social media companies are required by law to tolerate that speech--then plenty of advertisers will not be willing to advertise on social media anymore.
https://www.nytimes.com/2020/08/01/business/media/facebook-boycott.html
Ken Shultz, thanks for that excellent explanation of a complicated situation. In the process you illustrate why understanding what legal rules say is not sufficient to assure you know what they mean. You also have to understand how they affect the activities they purport to govern.
Technology is an innovative space where monopolies are more likely challenged by competitor product categories, rather than competitor product companies. For example, the desktop operating system has lost ground to the web server operating system and mobile operating system. The search engine, as a product organizing the world's information, lost ground to Wikipedia.
Therefore, the technology platforms can be seen to be evolving not as a problematic singular pyramid, but as a healthy multi-rooted forest.
If someone wants to host a site exclusively devoted to cat videos, should they be forced to host dog videos? I say no. But that's not a social media platform like Reddit/Facebook/Twitter/etc.
If Reddit wants to run a platform where a user can create a section (subreddit) to host practically any topic that user wants---such as cat videos---and limit THAT subsection to cat videos with Reddit's consent (and assistance in enforcing that limitation), then Reddit should be required to let a user create a subreddit for anything else that's legal. The "common carrier" rule is close. I really don't think anyone believes the posts of users are the posts of Reddit, or that Reddit itself supports everything that any user posts, and PruneYard is most applicable.
But the first step would be to require platforms to publish all their content rules in plain, unambiguous language, and if they want to punish a user, the platform must inform posters of exactly which specific rule they violated, with a way to appeal to a neutral 3rd party. Zuckerberg admits he can't define "hate speech," so he shouldn't be able to ban "hate speech" since he can't define it. Potter Stewart notwithstanding, you can't base rules on an individual person's or group's interpretation of ambiguous language.
Ideally, I'd like to see section 230-type unlimited liability shield only apply to platforms that adhere to a full First Amendment basis---meaning their rules are coextensive with the First Amendment.
People in general need to learn that it's OK to occasionally be exposed to something that offends you on the Internet... really, it is.
You absolutely can. That's the same rule that applies to all private property. It's the basis for my rules for houseguests: act as I want you to, or I'll make you leave. It's the basis for employment: please your boss, or get fired.
Sigh. How many times are people going to say this nonsensical thing? "I'd like to see Section 230 immunity only applied to things that are already immune."
If they have rules "coextensive with the First Amendment" — if by that you mean they only moderate content that is illegal — then they wouldn't be liable even without 230.
People in general need to learn that it's okay for private property owners to control their own property, no matter how much other people want to use that property.