The Volokh Conspiracy
Interpreting 47 U.S.C. § 230(c)(2)
The statute immunizes computer services for "action voluntarily taken in good faith to restrict ... availability of material that the provider ... considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected"—but what exactly does that mean?
In a few weeks, the Journal of Free Speech Law will publish its inaugural symposium, on free speech and social media platforms. Prof. Adam Candeub (Michigan State) and I will be writing our own articles for the symposium, as will several other scholars. But Adam and I will also have a joint piece on one specific question—how 47 U.S.C. § 230(c)(2) should be interpreted. Here's a very rough draft (not yet cite-checked and proofread), which you can also read in PDF; I'd love to hear people's views on it.
I should note that my initial reading of the statute (which I had expressed at some conferences, though not in any articles) was different from what it is now; it roughly matched what we discuss below in Part I, which I now think isn't correct. Many thanks to Adam for talking me around on the subject.
[* * *]
Title 47 U.S.C. § 230(c)(2) states:
No provider or user of an interactive computer service shall be held liable on account of … any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.
Say that a state law mandates that platforms not discriminate among their users' content based on viewpoint. Set aside for now whether such a mandate is good policy, whether it's consistent with the First Amendment, and whether it's consistent with the Dormant Commerce Clause.[1] Is such a state law preempted by § 230(c)(2)?
We think the answer is "no." Section 230(c)(2) was enacted as sec. 509 of the Communications Decency Act of 1996 (CDA),[2] and all the terms before "otherwise objectionable"—"obscene, lewd, lascivious, filthy, excessively violent, harassing"—refer to speech that had been regulated by the rest of the CDA, and indeed that had historically been seen by Congress as particularly regulable when distributed via electronic communications. Applying the ejusdem generis canon, "otherwise objectionable" should be read as limited to material that is likewise covered by the CDA.
Restrictions on, for instance, speech that entices children into sexual conduct (discussed in sec. 508 of the CDA) could be seen as restrictions on "otherwise objectionable" speech. The same may be true of restrictions on anonymous speech said with intent to threaten (discussed in sec. 502 of the CDA). But restrictions on speech that is outside the CDA's scope should not be seen as immunized by § 230(c)(2). That is particularly true of restrictions on speech on "the basis of its political or religious content"—restrictions expressly eschewed by sec. 551 of the CDA, which distinguished them from regulations of "sexual, violent, or other indecent material."
Naturally, nothing in our reading would itself prohibit platforms from blocking material. By default, they are free to do so, absent some affirmative hosting obligation that some state or federal law imposes on them; § 230(c)(2) is not such an affirmative obligation. But if a state wants to ban viewpoint discrimination by platforms, § 230(c)(2) does not preempt that choice.
[I.] One Interpretation: "Otherwise Objectionable" as a Catch-All
To begin with, we want to acknowledge the alternative interpretation: that "otherwise objectionable" is basically a catch-all phrase that should be read broadly and "in the abstract,"[3] referring to anything that the platform sincerely objects to.
Deleting posts that the service views as hateful, false, or dangerous—or banning users who put up such posts—would then be immunized by § 230(c)(2): The service would be "in good faith" "restrict[ing] access to … material that" it "considers to be … otherwise objectionable." Ideologically objectionable speech, the argument goes, remains "objectionable"; and the question is whether the provider "considers [it] to be" objectionable, not whether it's objectionable in some objective sense.[4]
Perhaps deleting a post just because it comes from a competitor, and using insincere claims of ideological objection to cover for such anticompetitive behavior, might be "bad faith."[5] Similarly, perhaps a pattern of dishonest explanation of the basis for removal—for instance, referring to facially neutral terms of service while covertly applying them in a viewpoint-discriminatory way—might be inconsistent with "good faith," which is often defined as requiring an honest explanation of one's position.[6] But there is no absence of "good faith," the argument would go, in sincerely objecting to particular ideas.
[II.] Ejusdem Generis
[A.] "Similar in Nature"
We think, though, that the better approach is to apply the ejusdem generis interpretive canon:
Where general words follow specific words in a statutory enumeration, the general words are construed to embrace only objects similar in nature to those objects enumerated by the preceding specific words.[7]
This is a commonly applied rule. Consider, for instance, the Federal Arbitration Act's exemption for "contracts of employment of seamen, railroad employees, or any other class of workers engaged in foreign or interstate commerce." That could be read, if one is interpreting the words "any other" in the abstract, as covering "all [employment] contracts within the Congress' commerce power,"[8] or at least any workers engaged more directly in foreign or interstate commerce, such as workers at hotels, people who do telephone sales, and the like. But the Court instead applied ejusdem generis to read "any other class of workers" as covering only employment contracts of transportation workers, by analogy to the preceding terms ("seamen" and "railroad employees"):
The wording of [the statute] calls for the application of the maxim ejusdem generis …. Under this rule of construction the residual clause should be read to give effect to the terms "seamen" and "railroad employees," and should itself be controlled and defined by reference to the enumerated categories of workers which are recited just before it; the interpretation of the clause pressed by respondent [as a catch-all covering all employees engaged in interstate or foreign commerce writ large] fails to produce these results.[9]
Likewise, consider Washington State Dep't of Soc. & Health Servs. v. Guardianship Estate of Keffeler, which interpreted a statute protecting Social Security benefits from "execution, levy, attachment, garnishment, or other legal process."[10] The Court reasoned,
[T]he case boils down to whether the department's manner of gaining control of the federal funds involves "other legal process," as the statute uses that term. That restriction to the statutory usage of "other legal process" is important here, for in the abstract the department does use legal process as the avenue to reimbursement: by a federal legal process the Commissioner appoints the department a representative payee, and by a state legal process the department makes claims against the accounts kept by the state treasurer.
The statute, however, uses the term "other legal process" far more restrictively, for under the established interpretative canons of noscitur a sociis and ejusdem generis, "'[w]here general words follow specific words in a statutory enumeration, the general words are construed to embrace only objects similar in nature to those objects enumerated by the preceding specific words.'" Thus, "other legal process" should be understood to be process much like the processes of execution, levy, attachment, and garnishment, and at a minimum, would seem to require utilization of some judicial or quasi-judicial mechanism, though not necessarily an elaborate one, by which control over property passes from one person to another in order to discharge or secure discharge of an allegedly existing or anticipated liability.[11]
"[O]therwise objectionable" in § 230(c)(2), then, should not be read "in the abstract" as simply referring to anything that an entity views as in some way objectionable. Rather, it should be read as objectionable in ways "similar in nature" to the ways that the preceding terms are objectionable.[12]
[B.] The Common Link Between the § 230(c)(2) Terms
And the "nature" of the terms is revealed by the nature of the Act that included them. The provision codified at 47 U.S.C. § 230 wasn't a standalone statute: It was section 509 of the Communications Decency Act, the Act that in turn formed Title V of Telecommunications Act of 1996.[13] True to its name, the Telecommunications Act dealt with a wide range of telecommunications technology, mostly the familiar media of telephone communications, broadcast television, and cable television, but also the then-new medium of Internet technology. The Communications Decency Act likewise dealt with the same range of telecommunications media. And the table of contents of the CDA[14] is particularly telling:
TITLE V—OBSCENITY AND VIOLENCE
Subtitle A—Obscene, Harassing, and Wrongful Utilization of Telecommunications Facilities
Sec. 501. Short title.
Sec. 502. Obscene or harassing use of telecommunications facilities under the Communications Act of 1934 [the text of this also covered "obscene, lewd, lascivious, [and] filthy" speech].
Sec. 503. Obscene programming on cable television.
Sec. 504. Scrambling of cable channels for nonsubscribers.
Sec. 505. Scrambling of sexually explicit adult video service programming.
Sec. 506. Cable operator refusal to carry certain programs [containing obscenity, indecency, or nudity].
Sec. 507. Clarification of current laws regarding communication of obscene materials through the use of computers.
Sec. 508. Coercion and enticement of minors.
Sec. 509. Online family empowerment [this became § 230].
Subtitle B—Violence
Sec. 551. Parental choice in television programming [mostly focused on "violent" and sexually themed programming].
Sec. 552. Technology fund [focused on empowering parents to block programming].[15]
(Two parts of one section, sec. 502, were struck down in Reno v. ACLU (1997),[16] but they too reflect what Congress in 1996 viewed as objectionable, and tried to regulate, even if unsuccessfully.) The similarity among "obscene, lewd, lascivious, filthy, excessively violent, [and] harassing" thus becomes clear: All refer to speech regulated in the very same Title of the Act, because they all had historically been seen by Congress as regulable when distributed via electronic communications.
Nor did the terms appear in the CDA by happenstance; rather, they all referred to material that had long been seen by Congress as of 1996 as objectionable and regulable within telecommunications media (even if some of them were "constitutionally protected" in the abstract, outside the telecommunications context):
[1.] "Obscene, lewd, lascivious, and filthy" speech had been regulated on cable television and in telephone calls.[17]
[2.] "Harassing" material telephone calls had also long been seen by Congress as regulable.[18]
[3.] The reference to "excessively violent" speech was part of a longer tradition of yoking "[e]xcessively violent" and indecent or "obscene" speech in discussions of regulating over-the-air broadcasting. This tradition goes back at least to the FCC's 1975 Report on the Broadcast of Violent, Indecent, and Obscene Material.[19] It endured at least until 2007, when the FCC concluded that, though "violent content is a protected form of speech under the First Amendment," "the government interests at stake, such as protecting children from excessively violent television programming, are similar to those which have been found to justify other content-based regulations."[20]
Likewise, the Television Program Improvement Act of 1990 exempts from antitrust laws any discussions or agreements related to "voluntary guidelines designed to alleviate the negative impact of violence in telecast material."[21] Throughout the 1990s, other bills in Congress singled out violent material, for instance by establishing a "Television Violence Report Card."[22] Such restrictions were ultimately rejected by Brown v. Entertainment Merchants Ass'n (2011),[23] but that case was still 15 years in the future when § 230 was enacted.
[C.] "Political or Religious Content"
Section 230(c)(2) is thus best read as immunizing Internet companies' private enforcement of rules analogous to restrictions on "obscene, lewd, lascivious, filthy, excessively violent, [or] harassing" communications—not enforcement of completely different restrictions that the companies might make up. Using this understanding, "otherwise objectionable" might thus cover other materials discussed elsewhere in the CDA, for instance anonymous threats (sec. 502), unwanted repeated communications (sec. 502), nonlewd nudity (sec. 506), or speech aimed at "persuad[ing], induc[ing], entic[ing], or coerc[ing]" minors into criminal sexual acts (sec. 508).
But "otherwise objectionable" would not cover speech that is objectionable based on its political content, which Congress did not view in 1996 as more subject to telecommunications regulation, and didn't try to regulate elsewhere in the CDA. And this fits the logic of the rest of § 230. The subsection, which was titled "Online Family Empowerment" within the Act, is focused on increasing user control and encouraging providers to create environments free from overly sexual, violent, or harassing material. The policy findings in § 230(b) expressly mentioned
- user self-help technologies that would "maximize user control" over what they receive,
- "blocking and filtering technologies that empower parents to restrict their children's access to objectionable or inappropriate online material" (which fits the statutory subsection title, "online family empowerment"), and
- "vigorous enforcement of Federal criminal laws to deter and punish trafficking in obscenity, stalking, and harassment by means of computer."
Those findings didn't discuss encouraging broader blocking (by online providers, as opposed to by users) of offensive or dangerous political ideas.
Indeed, the one reference in § 230 outside § 230(c)(2) to "objectionable" came in the policy recital supporting parental control via "blocking and filtering technologies"—and the other provision of the CDA that facilitated parental control via blocking and filtering technologies was the provision for violence and sex ratings of television programs (sec. 551), which expressly rejected attempts to restrict "objectionable" political speech. Sec. 551 said that the FCC should "[p]rescribe … guidelines and recommended procedures for the identification and rating of video programming that contains sexual, violent, or other indecent material about which parents should be informed before it is displayed to children: Provided, That nothing in this paragraph shall be construed to authorize any rating of video programming on the basis of its political or religious content …." [Emphasis added.]
And this in turn fits the Supreme Court's approach to regulation of broadcast communications: Consider FCC v. Pacifica Foundation, where Justice Stevens' lead opinion approved of the regulation of "indecent" speech, but only because such a regulation wasn't seen as targeting political content:
[I]f it is the speaker's opinion that gives offense, that consequence is a reason for according it constitutional protection. For it is a central tenet of the First Amendment that the government must remain neutral in the marketplace of ideas. If there were any reason to believe that the Commission's characterization of the Carlin monologue as offensive could be traced to its political content—or even to the fact that it satirized contemporary attitudes about four-letter words—First Amendment protection might be required.[24]
It's thus unsurprising that, when Congress gave specific examples in § 230(c)(2) of "objectionable" material that platforms could block with immunity, it offered examples of material that was objectionable for reasons unrelated to "political … content." And § 230(a)(3)'s extolling the Internet as "offer[ing] a forum for a true diversity of political discourse" is consistent with sec. 551's distinction between filtering of "sexual" or "violent" material (which Congress sought to encourage) and filtering of "political or religious content" (which Congress expressly renounced an intent to encourage).
The record of the Congressional hearings on § 230 supports this reading. Congress passed Section 230 to overrule a New York state case, Stratton Oakmont v. Prodigy,[25] which held that an online forum's decision to engage in content moderation (aimed at providing a "family-oriented" environment) made the forum liable as a publisher for defamatory material posted by users. That holding naturally strongly deterred any such content moderation.
In the brief legislative history, every legislator who spoke substantively about § 230 focused on freeing platforms to block material that was seen as not "family-friendly." For instance, Representative Cox, one of the bill's sponsors, explained that section 230 would give parents the ability to shield their children from "offensive material … that our children ought not to see…. I want to make sure that my children have access to this future and that I do not have to worry about what they might [be] running into online. I would like to keep that out of my house and off of my computer. How should we do this?"[26] "We want to encourage [internet services] … to do everything possible for us, the customer, to help us control, at the portals of our computer, at the front door of our house, what comes in and what our children see."[27] Other legislators took the same view.[28]
[D.] Avoiding "Misleading Surplusage"
This ejusdem-generis-based reading of § 230(c)(2) also explains why Congress listed a specific set of blocking decisions for which it provided immunity, rather than just categorically immunizing all "action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be objectionable, whether or not such material is constitutionally protected."
Justice Ginsburg's plurality opinion in Yates v. United States offers a helpful analogy here. The Court in Yates was interpreting a ban on (among other things) altering, destroying, concealing, or covering up "any record, document, or tangible object with the intent to impede, obstruct, or influence" a federal investigation. The question was whether a fisherman's throwing overboard an illegally caught fish—in an attempt to keep inspectors from seeing it—qualified. Read in the abstract, the statute should have covered this: a fish is about as tangible an object as you can get. But the Court disagreed:
In Begay v. United States, 553 U.S. 137, 142-143 (2008), for example, we relied on this principle to determine what crimes were covered by the statutory phrase "any crime … that … is burglary, arson, or extortion, involves use of explosives, or otherwise involves conduct that presents a serious potential risk of physical injury to another." The enumeration of specific crimes, we explained, indicates that the "otherwise involves" provision covers "only similar crimes, rather than every crime that 'presents a serious potential risk of physical injury to another.'" Had Congress intended the latter "all encompassing" meaning, we observed, "it is hard to see why it would have needed to include the examples at all." See also CSX Transp., Inc. v. Alabama Dept. of Revenue, 562 U.S. 277, 295 (2011) ("We typically use ejusdem generis to ensure that a general word will not render specific words meaningless.").
Just so here. Had Congress intended "tangible object" in § 1519 to be interpreted so generically as to capture physical objects as dissimilar as documents and fish, Congress would have had no reason to refer specifically to "record" or "document." The Government's unbounded reading of "tangible object" would render those words misleading surplusage.[29]
Likewise, reading "otherwise objectionable" "generically," as covering anything to which someone objects, would "render [obscene, lewd, lascivious, filthy, excessively violent, and harassing] misleading surplusage."
[E.] Comparing § 230(c)(2) with § 230(c)(1)
And the ejusdem-generis-based reading fits the difference between § 230(c)(1) and § 230(c)(2):
(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
(2) Civil liability
No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).
Section 230(c)(1) is broad, and lacks any enumeration of specific kinds of speech or specific torts. It doesn't, for instance, say that no provider shall be treated as publisher or speaker "for purposes of libel law, invasion of privacy law, negligence law, or other causes of action." But § 230(c)(2) deliberately enumerated a list, suggesting that Congress understood it as immunizing only certain kinds of platform-imposed speech restrictions.[30]
The Ninth Circuit expressed doubt about the application of ejusdem generis to § 230(c)(2), on the theory that "the specific categories listed in § 230(c)(2) vary greatly: Material that is lewd or lascivious is not necessarily similar to material that is violent, or material that is harassing. If the enumerated categories are not similar, they provide little or no assistance in interpreting the more general category."[31]
But we think the court missed the link we describe above: violent, harassing, and lewd material is indeed similar, in that it had long been seen—including in the rest of the Communications Decency Act, in which § 230(c)(2) was located—as regulable when said through telecommunications technologies. The court was correct in concluding that "decisions recognizing limitations in the scope of [§ 230(c)(2)] immunity [are] persuasive," and in declining to "interpret[] the statute to give providers unbridled discretion."[32] Recognizing the link between § 230(c)(2) immunity and the "objectionable" speech discussed in the rest of the CDA can provide the bridling principle that the Ninth Circuit sought.
To be sure, this reading would recognize that § 230(c)(2) is content-based: It provides immunity for platforms' restricting, say, "excessively violent" or "lewd" material, but not for their restricting political opinions or factual assertions about elections or epidemics. For reasons discussed elsewhere, one of us thinks such a viewpoint-neutral though content-based speech protection is likely constitutional,[33] though the other is skeptical.[34]
Finally, note again that this reading is consistent with broad platform power to restrict other unwanted speech, such as spam. As we noted, by itself § 230(c)(2) doesn't limit such platform power; it only preempts state laws that would limit such power. We doubt that states would ban spam filtering, or that courts would conclude that spam filtering offends common-law tort principles. But if states choose to protect platform users against discrimination based on ideological viewpoint, § 230(c)(2) does not stand in the way.
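(A concrete, if deliberately simplistic, illustration of the distinction we are drawing: filtering of the statutory sort keys on the category a post falls into, not the viewpoint it expresses. Below is a minimal sketch, purely our own hypothetical; the keyword lists and function names are invented for illustration and are not any platform's actual code.)

```python
# Hypothetical sketch: category-based blocking of the kind the statute
# enumerates. The keyword lists are invented stand-ins for whatever
# classifiers a real platform might use.

CATEGORY_KEYWORDS = {
    "harassing": ["kill yourself", "i know where you live"],
    "excessively violent": ["graphic footage of a beheading"],
    "spam": ["buy cheap meds now"],
}

def blocked_categories(post_text: str) -> list[str]:
    """Return the enumerated categories a post appears to fall into.

    Note what this check never consults: the political or religious
    viewpoint the post expresses -- the line the article draws.
    """
    text = post_text.lower()
    return [
        category
        for category, keywords in CATEGORY_KEYWORDS.items()
        if any(keyword in text for keyword in keywords)
    ]

print(blocked_categories("BUY CHEAP MEDS NOW!!!"))          # ['spam']
print(blocked_categories("The governor's policy is bad."))  # [] -- viewpoint alone triggers nothing
```

A viewpoint-nondiscrimination rule of the sort discussed above would leave this kind of category-based blocking (including spam filtering) untouched; what it would reach is a filter keyed to which side of a political question a post takes.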
* * *
"[O]bscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable" in § 230(c)(2), properly read, doesn't just mean "objectionable." Rather, it refers to material that Congress itself found objectionable in the Communications Decency Act of 1996, within which § 230(c)(2) resided. And whatever that might include, it doesn't include material that is objectionable on "the basis of its political or religious content."
[1] See Adam Candeub, Reading Section 230 as Written: Content Moderation and the Beggar's Democracy, 1 J. Free Speech L. __ (2021); Eugene Volokh, Social Media Platforms as Common Carriers?, 1 J. Free Speech L. __ (2021).
[2] Pub. L. No. 104-104 (1996).
[3] Washington State Dep't of Soc. & Health Servs. v. Guardianship Estate of Keffeler, 537 U.S. 371, 383-84 (2003) (distinguishing reading such closing phrases "in the abstract" and thus broadly from reading them to "embrace only objects similar in nature to those objects enumerated by the preceding specific words," and thus more narrowly (cleaned up)).
[4] Cf. Smith v. Trusted Universal Standards in Elec. Transactions, Inc., No. 09-cv-4567, 2010 WL 1799456, at *6 (D.N.J. May 4, 2010); Langdon v. Google, Inc., 474 F. Supp. 2d 622 (D. Del. 2007); Pallorium, Inc. v. Jared, No. G036124, 2007 WL 80955 (Cal. Ct. App. Jan. 11, 2007); Eric Goldman, Online User Account Termination and 47 U.S.C. § 230(c)(2), 2 UC Irvine L. Rev. 659, 667 (2012).
[5] Enigma Software Group USA LLC v. Malwarebytes, 946 F.3d 1040, 1052 (9th Cir. 2019).
[6] See, e.g., Am. Fed'n of Teachers v. Ledbetter, 387 S.W.3d 360, 367 (Mo. 2012) ("act[ing] openly, honestly, sincerely" (cleaned up)); S. Indus., Inc. v. Jeremias, 66 A.D.2d 178, 183 (1978) ("deal[ing] honestly, fairly, and openly"); Gas Nat., Inc. v. Iberdrola, S.A., 33 F. Supp. 3d 373, 382 (S.D.N.Y. 2014) ("honest[] articulation of interests, positions, or understandings").
[7] Circuit City Stores, Inc. v. Adams, 532 U.S. 105, 115 (2001) (cleaned up); see also Norfolk & W. Ry. Co. v. Am. Train Dispatchers Ass'n, 499 U.S. 117, 129 (1991).
[8] Circuit City, 532 U.S. at 114.
[9] Id. at 109, 114-15.
[10] 537 U.S. 371, 375 (2003).
[11] Id. at 383-84 (citations omitted, paragraph break added).
[12] See, e.g., Song fi Inc. v. Google, Inc., 108 F. Supp. 3d 876, 883 (N.D. Cal. 2015) ("Given the list preceding 'otherwise objectionable,'—'obscene, lewd, lascivious, filthy, excessively violent, [and] harassing …'—it is hard to imagine that the phrase includes, as YouTube urges, the allegedly artificially inflated view count associated with 'Luv ya.' On the contrary, even if the Court can 'see why artificially inflated view counts would be a problem for … YouTube and its users,' MTD Reply at 3, the terms preceding 'otherwise objectionable' suggest Congress did not intend to immunize YouTube from liability for removing materials from its website simply because those materials pose a 'problem' for YouTube."); National Numismatic Certification, LLC v. eBay, Inc., No. 6:08-cv-42-Orl-19GJK, 2008 WL 2704404, at *25 (M.D. Fla. July 8, 2008) ("It is difficult to accept, as eBay argues, that Congress intended the general term 'objectionable' to encompass an auction of potentially-counterfeit coins when the word is preceded by seven other words that describe pornography, graphic violence, obscenity, and harassment. When a general term follows specific terms, courts presume that the general term is limited by the preceding terms."); Goddard v. Google, Inc., No. C 08-2738JF(PVT), 2008 WL 5245490 (N.D. Cal. Dec. 17, 2008) (relying on National Numismatic to conclude that Google rules requiring various advertisers to "provide pricing and cancellation information regarding their services" "relate to business norms of fair play and transparency and are beyond the scope of § 230(c)(2)"); Google, Inc. v. MyTriggers.com, 2011-2 Trade Cases ¶ 77,662 (Ohio Ct. Com. Pl.) ("The examples preceding the phrase 'otherwise objectionable' clearly demonstrate the policy behind the enactment of the statute and provide guidance as to what Congress intended to be 'objectionable' content."); Annemarie Bridy, Remediating Social Media: A Layer-Conscious Approach, 24 B.U. J. Sci. & Tech. L. 193, 209-10 (2018).
[13] Pub. L. No. 104-104 (1996).
[14] See id. sec. 501.
[15] Pub. L. No. 104-104 (1996) (emphasis added). We omit a procedural section, sec. 561.
[16] 521 U.S. 844 (1997).
[17] 47 U.S.C. § 532(h) (enacted 1984) (cable television); 47 U.S.C. § 223(a) (enacted 1968) (telephone calls).
[18] See Pub. L. 90-229 (1968) (enacting 47 U.S.C. § 223).
[19] 51 F.C.C.2d 418 (1975). For an alternative proposed common link, see Nicholas Conlon, Freedom to Filter Versus User Control: Limiting the Scope of § 230(c)(2) Immunity, U. Ill. J.L. Tech. & Pol'y, Spring 2014, at 105 ("courts should require that material be similar to the preceding terms in the respect that all of the preceding terms are characteristics that degrade the quality of the [interactive computer service] for users," whether because many parents view the described speech as making the service harmful for children, or because the described speech is "harass[ing]" and thus unduly intrudes on unwilling users).
[20] In the Matter of Violent Television Programming And Its Impact On Children, MB Docket No. 04-261, Report (Apr. 25, 2007), at para. 5.
[21] 47 U.S.C. § 303c(c).
[22] Television Violence Report Card Act of 1996, S. Rep. 104-234, 104th Congress (1995-96).
[23] [Cite.]
[24] 438 U.S. 726, 745-46 (1978) (emphasis added).
[25] Fair Hous. Council of San Fernando Valley v. Roommates.Com, LLC, 521 F.3d 1157, 1170 (9th Cir. 2008) (discussing "Stratton Oakmont, the case Congress sought to reverse through passage of section 230"); Hassell v. Bird, 5 Cal. 5th 522, 532 (2018) (likewise).
[26] See 141 Cong. Rec. H8469 (daily ed. Aug. 4, 1995) (statement of Rep. Cox).
[27] Id.
[28] 141 Cong. Rec. H8470 (daily ed. Aug. 4, 1995) (statement of Rep. Wyden) (arguing that filtering technology is the best solution to protecting children from "smut and pornography"); id. (statement of Rep. Barton) ("There is no question that we are having an explosion of information on the emerging superhighway. Unfortunately part of that information is of a nature that we do not think would be suitable for our children to see on our PC screens in our homes."); id. (statement of Rep. Danner) ("I strongly support … address[ing] the problem of children having untraceable access through on-line computer services to inappropriate and obscene pornography materials available on the Internet"); id. (statement of Rep. White) ("I have got small children at home…. I want to be sure [I] can protect them from the wrong influences on the Internet"); id. (statement of Rep. Lofgren) (arguing against the Senate approach to restricting Internet pornography); id. (statement of Rep. Goodlatte) ("Congress has a responsibility to help encourage the private sector to protect our children from being exposed to obscene and indecent material on the Internet"); id. (statement of Rep. Markey) (supporting the amendment because it "dealt with the content concerns which the gentlemen from Oregon and California [Reps. Wyden and Cox] have raised," and arguing that it was superior to the Senate approach to restricting Internet pornography). The only representative who didn't discuss material that was seen as unsuitable for children was Rep. Fields, who simply congratulated his colleagues "for this fine work," id.
[29] Id. at 545-46 (paragraph break added). Justice Alito's concurrence in the judgment agreed on this score. Id. at 550. See also Circuit City Stores, Inc. v. Adams, 532 U.S. 105, 114 (2001) ("Construing the residual phrase to exclude all employment contracts fails to give independent effect to the statute's enumeration of the specific categories of workers which precedes it; there would be no need for Congress to use the phrases 'seamen' and 'railroad employees' if those same classes of workers were subsumed within the meaning of the 'engaged in … commerce' residual clause.").
[30] Some have argued that § 230(c)(1) itself protects platforms' rights to exclude whatever they wish, regardless of how § 230(c)(2) is interpreted. [Cite.] But we think that is not correct. Treating a platform as a common carrier or a place of public accommodation, or otherwise forbidding it from discriminating based on political viewpoint or other bases, isn't treating it as "the publisher or speaker" of others' information—indeed, it is the opposite, since publishers and speakers generally may exclude others' speech. See Edward Lee, Moderating Content Moderation: A Framework for Nonpartisanship in Online Governance, 70 Am. U. L. Rev. 101, pt. I.C (forthcoming 2021).
[31] Enigma Software Grp. USA, LLC v. Malwarebytes, Inc., 946 F.3d 1040, 1051-52 (9th Cir. 2019).
[32] [Cite.]
[33] [Cite.]
[34] [Cite.]
"but what exactly does that mean?"
It means that the fascists consider individual freedom objectionable, and can and will suppress it.
Immunity will grow an enterprise. It was needed in 1996. Now, these corporations are the biggest of all, and own 90% of the market. They no longer need immunity. They have become utilities, and should be regulated as such.
It's funny that none of this regulation (which Dems cry about but support in every other instance) would be necessary if the government and Big Tech didn't conspire to destroy independent players so they could maintain their monopoly.
It's totally a mess, isn't it?
230 is actually simultaneously procensorship and anticensorship.
Drumpf wants to, or at least appeared to want to, remove the anticensorship portion. But he signed XOs with language weakening the procensorship portion.
Thanks to Drumpf, Dems were compelled to defend 230 as a whole publicly, when they actually agree pretty much completely with Drumpf's stated position and want to (and are working to) weaken or remove the anticensorship portion. Additionally, they also want to strengthen the procensorship portion.
Meanwhile other Republicans/libertarians oppose Drumpf's and the Dems' private position but are allied with the Dems' public position, for different reasons, on tech censorship grounds. Through Trump, who publicly only seemed concerned with removing the anticensorship portion, they tried to weaken the procensorship portion through XO.
And the general public, thanks to Dem PR, at least until the Dems remodulate their narrative with the Biden win, thinks 230 is a completely good thing, because it was an easier soundbite to digest when opposing Drumpf.
Me, a private party, declining to host your content, is not "procensorship".
Tell that to cakebakers.
There's nothing in Section 230 mandating cake baking. But you would agree with me that a person who declines to bake a cake for another person is not engaged in "censorship", right?
I agree we should have some consistency. If global tech/financial monopolies, conspiring together with government to suppress customers/competitors they don't like, deserve to be free of any regulation, then mom and pop cake shops should get the same privileges when choosing customers. Or we go full accommodations all around. None of this picking and choosing depending on what leftists like in a particular instance, like we have now.
I'm ok repealing all mandatory cake-baking laws. But my understanding is the ones that have been upheld were enacted locally, not federally. While "consistency" may be a virtue, federalism requires that we're going to have some inconsistency, in part on the assumption that locals should control their own fate.
If legislation about 230 were limited to the invented facts of "global tech/financial monopolies conspiring together with government to suppress customers/competitors they don't like", then no one could object to that pointless legislation except on the grounds that Congress would find a way to fuck it up. The problem is that the hypothetical statute referenced in the OP would make Conservapedia impossible. Is there anyone who thinks that should happen?
But you would agree with me that a person who declines to bake a cake for another person is not engaged in “censorship”, right?
Why would he (or at least I) agree with you on that? To the extent that the requested cake is to carry a message, and the baker does not want to "publish" that message on one of his cakes, and so refuses to bake the cake, he's engaged in censorship.
OK, it's unlikely to be very successful censorship, and it's private censorship not government censorship, but I don't see how those factors prevent it being censorship.
Because censorship involves the suppression of speech, and me refusing to speak on your behalf is not the same thing as me preventing you from speaking.
Your concept of censorship is absurd. Every decision I make about what I promote on my private property involves a corresponding decision to censor every message I don't promote? Fine, once you've watered down censorship to "me deciding what I want to say or promote" it no longer has any normative heft. If you define away all the bad things about censorship, you no longer have anything to complain about re: censorship.
once you’ve watered down censorship to “me deciding what I want to say or promote” it no longer has any normative heft.
I'm not sure I'm in search of normative heft. I'm going for descriptive heft. And I'm OK with "censorship" including any deliberate attempt to prevent or obstruct someone from saying something.
So that would include newspapers refusing to publish articles without edits, where those edits are not simply grammar or space, but intended to obstruct the writer's meaning, or to prevent him making a point. Billboard owners refusing to take posters they disapprove of. Twitter and Facebook cancelling Trump. The Hays Code. The Hollywood blacklist. And self-censorship. And of course censorship by threat of reprisal.
In many cases, censorship is not a bad thing, and is quite consistent with the censor's liberty to use its property as it pleases.
In other cases, not so much.
Is me refusing to host Stormfront's message board on my private property a "deliberate attempt to prevent or obstruct someone from saying something"?
Sure, if the reason for your refusal is to try to obstruct Stormfront's message. But if your reason is simply that you don't want a great big message board on your property, then no.
As I say, not all censorship is wicked. We criticise it when we consider that the censor should not be doing it. This obviously applies to the government, for 1A reasons. But it also applies to private organisations, eg if they hold themselves out as bastions of free speech.
In your view is there a difference between "I don't want to be associated with Stormfront by promoting their message" and "I want to obstruct Stormfront's message"?
“I don’t want to be associated with Stormfront by promoting their message"
can, in principle, be achieved by displaying their message, and putting up another sign beside it saying "I loathe the Stormfront, love NToJ". So avoiding association-with-Stormfront-by-promoting-their-message does not necessarily imply obstructing their message.
But if the means you adopt to avoid association with Stormfront and its message is to censor their speech, eg by declining to accept their money to display their message on your board (which you generally make available for messages you approve of) then that is a case of "I want to obstruct Stormfront's message."
The obstruction is the means by which you achieve the avoidance of association.
So your two wishes are indeed not coextensive, but the one may in practice encompass the other, according to the circumstances.
Since we live in a world with finite resources and space, every decision to allow some speech is a decision to not allow other speech. I just don't think your framework makes sense, and it probably doesn't matter, since you aren't saying that there's anything wrong with me obstructing Stormfront's message by refusing to spread it.
Since we live in a world with finite resources and space, every decision to allow some speech is a decision to not allow other speech.
I'm not sure that's true. It may be true as to billboards*, and giving precedence to the invited speaker over hecklers at meetings, but there are lots of places, especially electronic ones, it doesn't apply.
For example, Reason's computer allows us to have this little exchange and in so doing it hasn't prevented anyone else saying their piece. We could go on for another fifty to and fros and no one else's speech will be affected at all.
But as you say, we are only disagreeing on the descriptive heft side of things, rather than the normative heft side of things, and we can probably dispense with those extra to and fros.
* though even for billboards, they're not all full all the time. Sometimes there may be only one person bidding for the slot.
No, that's not the right analysis. It's true that electrons are effectively infinite, but the relevant measure is attention, which is finite. Every post pushes another post out of the way of people's attention. Yes, as a purely technical matter deleting a post and burying it in a morass of other posts are different — but in terms of effect they're the same. (Every litigator knows the old trick of producing so much evidence to the other side that the crucial evidence is undetectable.)
the relevant measure is attention, which is finite
Attention is indeed finite, but no it's not the relevant measure. There are millions of books. Publishing another one does not "censor" the authors of the existing books, simply because it might draw eyeballs from their efforts.
Censorship involves deliberately interfering in the flow of communication from willing speaker to willing listener. But no one can guarantee you a willing listener.
NToJ's proposition was that every decision to allow speech was a decision not to allow other speech. But not allowing speech is quite different from allowing it, but producing some more speech that other people prefer to listen to.
I should say, for the avoidance of doubt, that there are some circumstances where generating more speech is, and is intended to be, an obstruction of speech from a willing speaker to a willing listener.
Heckling for example. Denial of service attacks, radio jamming etc.
But this interpretation doesn't require you to host squat. It simply says that you don't get the privilege of civil immunity if you delete content in bad faith, or for an inapplicable reason.
And civil immunity IS a privilege here, created by the statute. It's not a constitutional right.
"Say that a state law mandates that platforms not discriminate among their users' content based on viewpoint."
I'm talking about Section 230. The paragraph referenced has the heading, "Civil immunity", I kind of think that's what it grants.
It is, literally, censorship.
Makes sense to me.
None of this means social media platforms can't ban political speech they dislike. Nor does it mean they will be considered publishers in any given case.
It just means the law can develop more naturally, rather than being displaced by a poorly written legislative scheme from Congress in the early days of the Internet.
As I understand it, the real, underlying, issue is that the big tech media companies, like Twitter, Facebook, Google, etc. have been engaging in significant viewpoint censoring, going so far as to ban former President Trump for voicing such violent and dangerous speech as, for example, that there had been massive election fraud last year, and that if it had not occurred, he would have easily been re-elected. More and more, the justification for deplatforming is that the speech being censored is wrong and inaccurate, as defined by left-leaning truth arbitrators and fact checkers. And the big question is whether these companies can retain their 230(c)(2) safe harbor shield from litigation, despite looking far more like publishers than common carriers, esp. given how nakedly partisan these actions appear. I take EV's article as strongly suggesting that the proper interpretation of the 230(c)(2) exclusions is very possibly not as broad as suggested by the champions of those companies.
230 gives Big Tech and the little guy safeharbor both to censor and not censor.
Dems want to destroy 230 to force everyone to censor more. But publicly defend it out of opposition to Drumpf.
Drumpf wants to destroy 230 to stop censorship, specifically of him and his supporters.
It's hilarious how they want the same thing but for opposite reasons.
This may be the first time I've 100% agreed with AmosArch, but this is spot on.
"And the big question is whether these companies can retain their 230(c)(2) safe harbor shield from litigation, despite looking far more like publishers than common carriers..."
Well they're still not publishers under 230(c)(1), regardless of how (c)(2) is interpreted.
They are publishers and speakers, when they publish or speak. For example, Twitter recently published the following statement all over its platform: "Federal law does not permit cooperating witnesses or informants to be charged with conspiracy, despite a baseless suggestion by Tucker Carlson that some of the co-conspirators of the January 6 attack on the US Capitol were not charged because they were undercover FBI agents."
But they are not the publisher of information that is posted by "another information content provider."
Query what happens when YouTube pays for the content? What if they offered some editorial services or production guidance?
Agreed that Twitter should be considered the publisher for statements it writes and publishes on the site, so Carlson is free to sue them for defamation if he wants. I doubt they're very worried.
The YouTube question is interesting, but presumably applies to a pretty small portion of the overall content.
It's an ever increasing fraction, given the proliferation of "fact checks" solicited by the platforms.
Sure, but who cares? Anyone fact checked can sue the platform if the fact check isn't correct and somehow defames them. That doesn't make them responsible for random other content on the platform.
So, if YouTube posts a fact check, and it's wrong, we can sue Google?
Thankfully for your sake, being wrong on the internet is not necessarily actionable. If YouTube posts a fact check that defames you, different story.
Oh, pumpkin....did you feel the need to throw in a sidelong insult?
AFAIK, YouTube doesn't do fact checks like Facebook and Twitter do, but purely hypothetically...
Sue them for what? If the fact check defamed someone, sure that person could sue. If you're just mad that they're wrong, you can't sue them any more than you could sue Professor Volokh if it turns out his analysis here is incorrect.
In fact, Youtube DOES fact check videos.
Sure, if they're wrong about a fact check you can't sue them any more than you can sue the Prof for being wrong about something he says about you or your comment, but not any less, either.
Well, not any less if Section 230 were properly interpreted by the courts...
If YouTube is publishing its own claims, 230 doesn’t apply.
"Isn't supposed to apply" is not the same as "Courts won't rule it applies."
The question, of course, is if Youtube hires a contractor to make claims, are they Youtube's claims? I'd say, yes.
@Brett,
One of the reasons people hire independent contractors is to avoid vicarious liability for their conduct. However, tort and defamation law already deal with this through agency, vicarious liability, etc. It's more complicated than "if YouTube hires a contractor to make claims, are they YouTube's claims?" It depends on the scope of the engagement, the hirer's contractual right of control, the hirer's actual exercise of control over the manner in which the independent contractor performs work, etc. If YouTube hires a fact-checker and instructs them to defame a party, the independent contractor relationship won't save YouTube. But if they hire a fact-checker, exercise zero control over their fact-checking process, have no right to control the fact-checking process, they probably are not going to be liable for the independent contractor's conduct.
It's more a case of hiring the contractor you know in advance will defame the people you'd want defamed, in this case.
That's just an example. It's a frequent occurrence.
I agree that Twitter is a publisher of its own speech. It's not a publisher when it hosts speech.
I don't know why YouTube paying for content would make them a publisher. (Bookstores pay for books they sell. What difference does that make?) If YouTube produces a video, it's their content.
Similar to when Forbes or BuzzFeed pays a freelance writer for some op-ed or other content - same as YouTube? Publisher or not?
It depends. It matters how the content is being distributed. Is Forbes claiming it as its own? Did Forbes edit the article for content? If YouTube merely paid someone based on how many hits they generated on YouTube, YouTube is acting like a distributor (bookstore) rather than a publisher.
"Say that a state law mandates that platforms not discriminate among their users' content based on viewpoint."
Do you think this hypothetical statute survives the (incorporated) 1A as interpreted in Tornillo?
It's not their speech, right? It's just a platform. So no free speech issue.
Platforms have free speech rights, too.
As you said, Reason.com is not liable for your speech in the comments. It's your speech, not theirs. By the same token, if they are required to allow your comment, it's not compelled speech.
Requiring Reason.com to host the speech of others that they disagree with would violate Reason.com's first amendment rights. Do you think Congress can enact a law requiring all bookstores to carry President Obama's "A Promised Land"? Or do you think the First Amendment may have something to say about that legislation?
if they are required to allow your comment, it’s not compelled speech.
At first blush the implication of your claim is whenever the government requires the hosting of speech, it can't have violated the compelled speech doctrine. I strongly suspect that absurdity is not what you are arguing. But, I can't figure out what other argument you are making.
You both make good points.
I was thinking that if there is some speech or content, the government censorship of which would be an infringement on your (specifically) personal 1A free speech rights, then it must be that you are the speaker, at least in some sense. Right?
And if there is speech the government requires you to carry by some means of transmission, or to host on a server, and the requirement constitutes compelled speech in violation of your 1A rights, then again, it must be that you are the speaker. Freedom of speech protects you from being compelled to utter or otherwise express a thought with which you disagree. If you are not being compelled to speak or otherwise express something, then it's not applicable.
But, I gather that being a "speaker" in the above sense and being a "speaker" in other contexts, such as liability for defamation, may be entirely different things. At least I think that's what you're both implying. And so I assume there may be regulations that infringe on the 1A rights of certain parties, even while those same parties are not a speaker or publisher of the (either censored or compelled) speech in question for other purposes.
In the Tornillo case mentioned, setting aside questions about original meaning and incorporation, the court noted that the state law "exact[ed] a penalty" upon the newspaper, because it is an economically finite enterprise and space is limited, and this chilled speech. Makes sense to me. But -- the newspaper was the publisher/speaker of that speech, both in a 1A sense and in the sense of being liable for its content.
In Tornillo the speech they were being required to carry was an advertisement. The salient issue was the First Amendment rights of the Miami Herald in being required to carry someone else's speech.
[if] the requirement constitutes compelled speech in violation of your 1A rights, then again, it must be that you are the speaker.
Now, I understand your argument. In response, quoting from Rumsfeld v. FAIR:
So, I think it doesn't always follow the host is speaking. And in particular as you noted, the Tornillo was decided not based on the host speaking, but rather it affected what the host wanted to say.
It's debatable whether Twitter being forced to host a message affects what Twitter wants to say. The same could be said of a photographer who is required to take photos of a same-sex marriage.
"the complaining speaker’s own message was affected by the speech it was forced to accommodate."
Applying this to Twitter as the "complaining speaker," what would be their "own message"? And if they are the speaker and that is their own message, are they not generally liable for it?
Presumably, Twitter's message would be disapproval of the speech they do not want to host. I'm not following how they could be held liable for such disapproval.
The logic in these cases seems to be that the compelled carrying of others' speech displaces other, different speech that the party is trying to communicate.
Hypothetically, if an interactive computer service was not practically an economically finite enterprise with limited space for content, like the court in Tornillo said the newspaper was, and thus carrying someone's communications to others did not "exact a penalty" and displace and chill other speech that the interactive computer service themselves (now acting as something more than that) wanted to communicate, then this logic would not apply.
"Platform" is not a legal term, so saying that something is a "platform" has no bearing on whether the 1A applies. They have free speech rights regardless of what you call them.
Right. But if you are exercising your free speech or free press rights, doesn't that mean you are speaking or publishing?
No, not necessarily. Consider the scenario when Barnes & Noble decides not to stock Mein Kampf: it is exercising its free speech rights, but it is neither speaking nor publishing.
There may be all kinds of reasons why a government cannot require Barnes & Noble to stock Mein Kampf, starting with the question of where said government derives its purported authority to do so.
If B&N's decisions about which books to stock are protected free speech . . . emphasis on "speech" . . . well, then it's B&N's speech (or expression).
It's complicated -- I'll be excerpting the article on that very subject starting in a few days. Short version: Some compelled hosting rules are more like the mandates upheld in PruneYard, Turner Broadcasting, and Rumsfeld v. FAIR; other rules, for instance requiring platforms to include material in their "sites you might like" features, are more like the mandates struck down in Miami Herald, Hurley, and similar cases.
Looking forward to reading it.
Prediction: 'Twitter and Facebook declining to be associated with bigoted and false statements is a problem. The Volokh Conspiracy's viewpoint-driven censorship is no problem.'
Can you fix the link for the PDF? It's not working.
Whoops, fixed, thanks!
Even if ejusdem generis means "otherwise objectionable" is limited to the examples, the examples are also modified by the phrase "the provider or user considers to be". If a provider says "I find political speech I disagree with obscene" what are courts to do about it? If they inquire into the provider's subjective state of mind, they risk turning "considers to be" into mere surplusage, too.
It's almost like federal statutory schemes attempting to swallow up whole areas of law such as defamation are a bad idea.
The law of defamation never should have applied to social media in the first place (except for when they publish speech). Reason.com should no more be liable for me defaming you in the comments, than FB should be liable for its members defaming each other.
At that point they probably haul out a "reasonable man" test, and declare that claiming to find the political views of half the population literally "obscene" would be an example of bad faith. I think it's pretty clear that metaphorical obscenity doesn't cut it here.
"At that point they probably haul out a “reasonable man” test..."
That's the point. The statute contains a subjective requirement. ("the provider or user considers to be..."). If a court transformed that into an objective test, it would render the statute's subjective requirement mere surplusage.
"Good faith" also implies a subjective test. So an objective test would throw that language out, too.
What a person finds subjectively obscene is not "metaphorical".
Well, yes, I agree that 'reasonable man' tests don't render subjective requirements objective. They just imply that having eccentric subjective reactions isn't a 'get out of jail free' card.
We're talking about a form of subjectivity which is endemic in law; the implications of striking down a law for this sort of subjectivity are pretty big.
"What a person finds subjectively obscene is not “metaphorical”."
'I'm a perv who's turned on by discussing sin, that makes a video discussing the Decalogue literally, not metaphorically, obscene! So I'm not exhibiting bad faith by banning 10 commandments videos as 'obscenity'!'
Yeah, I don't think that rescues them from bad faith.
No, you don't agree, because he's saying exactly the opposite: "reasonable man" tests do render subjective requirements objective. That's literally what a reasonable man test is: it asks not what your actual view was, but what a hypothetical objective person's view would be.
That's because you don't understand the legal terms you're using.
"No, you don’t agree, because he’s saying exactly the opposite: “reasonable man” tests do render subjective requirements objective. That’s literally what a reasonable man test is: it asks not what your actual view was, but what a hypothetical objective person’s view would be."
Asking for somebody else's subjective idea of what a hypothetical objective person would think isn't actual objectivity, even if it does tend to remove the element of self-interest that distorts one's subjective impressions.
I take your point, though.
But the "in good faith" language suggests that the person doing the moderating doesn't get the last word on whether their moderation is reasonable.
"But the “in good faith” language suggests that the person doing to moderating doesn’t get the last word on whether their moderation is reasonable."
Maybe it does, maybe it doesn't. I think you have to incorporate a subjective test or you're rendering "good faith" surplusage. "Good faith" speaks to the actor's sincerity and actual intent. There are dozens of federal cases applying subjective factors to a "good faith" requirement. See, e.g., Picariello v. Fenton, 491 F. Supp. 1020 (M.D. Pa. 1980).
Like I said, you don't understand the legal terms you're using. "Good faith" and "reasonable" are orthogonal concepts. "Good faith" is subjective; reasonable is objective.
Let's say that you sue me, claiming I fired you because you were Methodist. I say, "No, I fired him because he broke a company rule." Whether my decision was reasonable depends on what my basis was for thinking that, and whether the firing was proportionate to the offense.
Whether my decision was in good faith depends on whether I truly believed that you had broken a rule, or whether that was just a pretext to purge the Methodists. It doesn't matter whether I was right about what you had done. Nor does it matter if my decision was rational, whether I gave you a chance to explain yourself, whether I jumped to conclusions, whether I relied on unreliable evidence, etc. All those speak to whether I was reasonable, but are irrelevant to my good faith; if the standard is the latter, a neutral arbiter doesn't get to consider reasonableness at all.
"Well, yes, I agree that ‘reasonable man’ tests don’t render subjective requirements objective."
This is exactly what a "reasonable man" test does. If you use a "reasonable man" test rather than actual, subjective knowledge, the standard is objective and not subjective. If courts did that with 230, they would be disregarding the subjective tests included in the statute, rendering them surplusage. Courts are not supposed to do that, see the OP at Section D.
I'd be interested in hearing how the OP would resolve this issue. I see a tension between the OP's application of ejusdem generis and the subjective test.
As I said above, substituting somebody else's subjective opinion doesn't render the test objective. At best it renders it disinterested.
"subjective" and "objective" are terms of art. For example, a statute that requires the defendant to pay damages if he had actual knowledge of the plaintiff's injuries would be subject to a subjective test. Whether he should have known (objective test) is irrelevant. A statute that requires the defendant to pay damages if he should have known or reasonably should have known is an objective test. Whether the defendant actually knew of the injuries is irrelevant.
The very purpose of the objective test is to render subjective intent irrelevant. The same goes, in reverse, for the subjective test. (Some statutes, rules, etc. require both.)
Eugene makes a strong case. However, I suspect the ultimate fate of his hypothetical state law does not depend on the statutory interpretation of 230, but instead on whether social media platforms are common carriers.
Assuming Eugene is correct, the state law still must pass First Amendment scrutiny which I think will require courts to view social media platforms as common carriers. Otherwise, the state law likely fails the compelled speech doctrine.
On the other hand if Eugene is incorrect and 230 preempts the state law, the state will argue 230 as applied violates the First Amendment because 230 reduces protections for speech. But that conclusion assumes the speech that needs protection is user speech, which in turn likely requires viewing social media platforms as common carriers.
How does 230 "reduce[] protections for speech"?
The hypothetical state law prohibits viewpoint-based censorship of user speech. If Section 230 preempts that law, more user speech will be censored, and thus 230 reduces protection for user speech.
1) Private removal of speech is not "censorship".
2) Even under that warped definition of censorship, there is no guarantee that the hypothetical state law would lead to "more user speech". If providers/platforms have to choose between permitting all speech (including, for example, Stormfront) regardless of viewpoint, or just not hosting speech anymore, many will stop hosting speech entirely. As an example, fewer people would engage in speech on reddit if state law prohibited enforcement of subreddit standards.
Eugene has detailed how a state law could be more protective of speech.
Seeing as how the very thing we're debating is whether social media's First Amendment rights would be violated by a must-carry provision, the following assumption in Professor Volokh's post tells me it's not germane to what we're talking about here.
"Assume also that such an Act wouldn't itself violate the social media network's First Amendment rights..."
I'm not debating whether social media sites have First Amendment rights that would be violated by a must-carry provision (leave that for when Eugene posts on that topic in a few days).
"Private removal of speech is not “censorship”."
No, it's not government censorship. But it's still censorship.
So your refusal to post my comments in your living room is an act of censorship?
If I'd set up my living room as a public forum where people could post and read comments without any initiative on my part being necessary? (People have gotten funding for stupider 'art installations'.)
Yes, that would be an act of censorship.
If you broke into my living room to stick Post-it notes on the wall, that would be different.
How would it be different in terms of whether it's censorship by your definition above? It would involve "private removal of speech."
And how does the "without any initiative on my part being necessary" come into play, by the way? If I set up a forum and said that no posted comments will appear until I review them and approve them, and then I systematically only approve anti-Trump ones, all the usual suspects would say that I was censoring pro-Trump speech.
It's different in that I wouldn't be censoring speech; I'd be cleaning up the results of a break-in. Presumably I'd take down Post-it notes left by any burglar, not just you.
This is an argument that your censorship (by your definition of the term: "private removal of speech") is justified, not that it isn't the private removal of speech.
But I will say that matters would be somewhat different if Facebook or YouTube just came out and publicly announced, "We are not a public forum, we are only open to left-wing political views. Conservatives, go away!"
They haven't done that because they'd lose too many of their users if they did. So they pretend that they're non-partisan, and lie about the basis for their censorship. That's the bad faith element.
FB has already said that it is not a public forum and that it reserves the right to delete any content for any reason.
And I reiterate that this would be a drastic rewrite of doctrine. Social media sites do not have the characteristics of common carriers.
I hope we will get Eugene's typically fair overview of existing law (knowing he is now leaning towards treating social media sites as common carriers) in the next article he alludes to in his above reply.
In his comment he referenced PruneYard, Turner Broadcasting, and Rumsfeld v. FAIR.
PruneYard has nothing to do with common carriers. It had to do with common areas under California's unique state constitutional grant of an affirmative right of free speech, and has since been narrowed by the California Supreme Court.
Turner Broadcasting is wrong, but did not hold anything re: common carriers. The only hypothetical reference to common carriers was in a concurrence by O'Connor. In any event, the Court held that a must-carry provision was content-neutral and so subject to intermediate (rather than strict) scrutiny, because the rules distinguished between the types of speakers based on how they transmitted messages, and not the messages they carried. The state law hypothesized in the OP is not content-neutral in that manner. (There are other differences between the must-carry rule wielded against cable operators and any law used against social media. As an example, social media companies do not have a legislatively granted "monopoly over cable service" under the 1992 Cable Act.)
I don't see any reason to explain why Rumsfeld had nothing to do with common carriers.
>> "Nor did the terms appear in the CDA by happenstance; rather, they all referred to material that had long been seen by Congress as of 1996 as objectionable and regulable within telecommunications media"
Exactly! Many of the folks involved in drafting § 230(c)(2) are still alive... and I'd bet that their personal position would be that the intent was to change nothing; that is, at the time, the thinking was that "new media" was simply a new wrapping and that "tried-and-true old ways" continued to be appropriate.
>> "Those findings didn't discuss encouraging broader blocking (by online providers, as opposed to by users) of offensive or dangerous political ideas."
True again. Probably wasn't even a thought.
>> "every legislator who spoke substantively about § 230 focused on freeing platforms to block material that was seen as not 'family-friendly'."
Yup. With memories of Ronald Reagan.
>> "It provides immunity for platforms' restricting, say, 'excessively violent' or 'lewd' material, but not for their restricting political opinions or factual assertions about elections or epidemics."
That likely was indeed the intent, even if such intent is ultimately found to be unconstitutional. Legislators, particularly young ones, aren't constitutional scholars -- and do not (and perhaps should never) attempt or pretend to be.
You can listen to the authors of 230 talk about it here: https://www.techdirt.com/articles/20210302/13091746348/techdirt-podcast-episode-272-section-230-matters-with-ron-wyden-chris-cox.shtml
The ejusdem generis interpretation makes sense, and I think most websites would still function well. The censorship/free speech fights would shift from "I'm banned because of my viewpoint" to "I'm not harassing, they banned me for disagreeing".
OTOH, I find it annoyingly pedantic when jurists and scholars dissect the legal sausage in great detail: "This bit of gristle is next to that mysterious tube, so it was surely placed there for a reason." While section 230 was carefully and thoughtfully written, from what I understand about the part of the CDA that was struck down (everything else), it was a lot more sausage than steak. I don't find it quite as convincing to interpret 230 by referring to those sections. IIRC, Cox & Wyden have said that they expected Exon's stuff to be struck down right away, but there was too much "think of the children" angst for anyone to take it out of the bill.
Hear, hear! Go get 'em, Eugene.
The key aspects are these:
1. Typically a publisher had the right to say whatever they wanted. They also had responsibility for what was said. So, if something was libelous, they could be sued. If the publisher was large enough to have decent circulation, and they published something libelous, they would be sued.
2. Individuals also have the right to say whatever they want. They also have responsibility. Often however, it doesn't make sense to sue them for libel, as they don't have the funds to cover court costs.
3. With a large "publisher" like Twitter, for example, there's a paradigm shift. Because of section 230, they cannot be liable for items people write. So there can be libelous accusations on Twitter, but Twitter can't be sued. Only the individual can, if they can be identified. And often it's not worth it. This creates a skew, but it's still even-handed.
4. With the selective censorship on Twitter however, Twitter can ensure that libelous or misleading content they don't like is eliminated, while libelous or misleading content they DO like is kept. This creates a skew where Twitter can effectively "lie" using the power of a large publisher to reach a large audience. Meanwhile lies Twitter doesn't like are eliminated. This is the issue.
I'd say that's a decent summation.
I'd only add that traditional publishing outlets such as newspapers have long used a variation of this: If a newspaper wants people to believe a lie, they locate somebody telling that lie, and report on their telling it, without disclosing that they know it to be a lie.
Then they defend having done so as accurate, on the basis that they were simply accurately recounting what somebody else had said. They can usually get away with this by sticking to matters that are "public". It was pretty unusual that Sandmann actually won against CNN and the WaPo; it took a lot of factors lining up, including the news outlets being really egregious about sticking with their false accounts even after they had evidence they were false.
You can draw up the most complicated Venn diagram ever, but Twitter is fundamentally not a newspaper, not a publisher in any meaningful sense.
A newspaper's reporters are employees of the newspaper and are writing on behalf of the newspaper. Twitter does not write tweets, period.
If you tweet something libelous, you can be sued, but Twitter cannot. That is expressly the intent of 230(c) and absolutely the correct policy, regardless of whatever curating Twitter does.
So your complaint is, "They need to let me defame other people in order to be evenhanded"?
Professor,
For the OP, I also think the specific statutory language does not lend itself to ejusdem generis in the first place. A textbook example of ejusdem generis: "lions, tigers, and other animals." Ejusdem generis may inform us that "other animals" means four-legged animals, wild cats, wild animals, cats, mammals, etc. It probably doesn't mean pet lizards.
But 230(c)(2) doesn't use "or other objectionable material". It says "otherwise objectionable". The reason ejusdem generis works with "other" is that one definition of "other" is "additional or further". There is no adverb (or adjective for that matter) definition of "otherwise" that contemplates additional or further limitation. All definitions of "otherwise" refer to "other circumstances" or "in another manner; differently" or "in other respects".
Put differently, in a traditional ejusdem generis application the term "other" is a term of enlargement subject to the class listed. However, the term "otherwise" doesn't operate like this; it's a term of enlargement that specifically distinguishes itself from what is previously stated. "otherwise objectionable" cannot be read as being modified by the previous list, because it excludes itself from the preceding list. If something is objectionable because it is obscene, it cannot be "otherwise objectionable" without being objectionable on different grounds.
For an example, see U.S. v. Baranski, 484 F.2d 556 (7th Cir. 1973) ("[W]e conclude that the doctrine [of ejusdem generis] is not applicable here" in part because "The preceding terms substantially embrace the characteristic which they both describe, thereby indicating that the phrase 'or otherwise' would take a meaning beyond the class."). See also Gooch v. United States, 297 U.S. 124 (1936) (holding that even if the enumerated category of "reward" was not broad enough to cover the defendant's conduct, "the broad term, 'otherwise'" was sufficient to include that conduct).
"obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable"
The "otherwise" was used because the list included two categories of objectionable content: the obscene category and the violent category. Each still illustrates a sort of "objectionable." Eugene points out that the larger statute, the Communications Decency Act, focused on two forms of objectionable content: the obscene, and the violent/harassing.
It really is not a plausible reading that they were authorized to moderate obscenity, harassment, and anything they damned well pleased. If the catch-all really did catch all here, it would make the rest of the list redundant; if they'd meant that, they could have just said "anything found objectionable" and given no examples.
That doesn't resolve the problem. If the statute said "obscene, violent, or otherwise objectionable" content, the problem would remain. The use of "otherwise" (as opposed to "other") suggests an intent to mean a wholly different class from the enumerated list. Looking to the broader CDA doesn't help, either, because "otherwise" would still operate to except "otherwise objectionable" from the broader act, too.
"The catch-all really does catch all here, it makes the rest of the list redundant..."
There's going to be a redundancy in your interpretation, too. "Otherwise objectionable" cannot be given its plain meaning if it is used as an associative rather than a distinctive term. The plain meaning demonstrates that the statute applies to good-faith removal of category 1, category 2, or anything otherwise objectionable to the provider. And since there's a subjective element, that places a necessary limit on "otherwise objectionable" that does not require redundancy.
Put differently: a provider cannot be held liable if they remove "harassing" speech even if the provider does not personally find it objectionable. The "otherwise objectionable" therefore operates to distinguish a different class of things that they are not liable for regulating.
This is the correct interpretation. Thank you.
Or to put it another way, the ejusdem generis reading treats 230(c)(2) as saying, "obscene, lewd, lascivious, filthy, excessively violent, harassing, or similarly objectionable." But "otherwise" is not synonymous with "similarly"; in fact, they mean pretty different things.
Indeed.
People keep grasping for meaning in "otherwise" that just isn't there.
At most, it can be read to limit all of the covered content to that which is considered in good faith to be "objectionable." The listing of "obscene, lewd, lascivious, filthy, excessively violent, and harassing" might be superfluous today, as those things are all widely considered objectionable, but Congress wrote the statute to apply forever.
Why is nobody interested in the "voluntary" element?
Does a covered action taken in good faith or otherwise (see what I did there?) become subject to liability because it was compelled by state law?
I have a question for the authors. I am not used to legal arguments, but I was wondering about your view that Section 230 not only doesn't cover false statements by users (leaving states free to forbid such censorship), but specifically implies that false statements are not part of the exemption. In the context of libel law, an exemption allowing censorship of lewd material without incurring liability for libel in the hosted content NOT censored seems like it would necessarily exclude false statements from Section 230; if it did not, the purpose of the law would be moot, and the exception might as well be the entire law: "Content hosts cannot be held liable for libelous content." Is that a fair reading?