The Volokh Conspiracy

Interpreting 47 U.S.C. § 230(c)(2)

The statute immunizes computer services for "action voluntarily taken in good faith to restrict ... availability of material that the provider ... considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected"—but what exactly does that mean?


In a few weeks, the Journal of Free Speech Law will publish its inaugural symposium, on free speech and social media platforms. Prof. Adam Candeub (Michigan State) and I will be writing our own articles for the symposium, as will several other scholars. But Adam and I will also have a joint piece on one specific question—how 47 U.S.C. § 230(c)(2) should be interpreted. Here's a very rough draft (not yet cite-checked or proofread), which you can also read in PDF; I'd love to hear people's views on it.

I should note that my initial reading of the statute (which I had expressed at some conferences, though not in any articles) was different from what it is now; it roughly matched what we discuss below in Part I, which I now think isn't correct. Many thanks to Adam for talking me around on the subject.

[* * *]

Title 47 U.S.C. § 230(c)(2) states:

No provider or user of an interactive computer service shall be held liable on account of … any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.

Say that a state law mandates that platforms not discriminate among their users' content based on viewpoint. Set aside for now whether such a mandate is good policy, whether it's consistent with the First Amendment, and whether it's consistent with the Dormant Commerce Clause.[1] Is such a state law preempted by § 230(c)(2)?

We think the answer is "no." Section 230(c)(2) was enacted as sec. 509 of the Communications Decency Act of 1996 (CDA),[2] and all the terms before "otherwise objectionable"—"obscene, lewd, lascivious, filthy, excessively violent, harassing"—refer to speech that had been regulated by the rest of the CDA, and indeed that had historically been seen by Congress as particularly regulable when distributed via electronic communications. Applying the ejusdem generis canon, "otherwise objectionable" should be read as limited to material that is likewise covered by the CDA.

Restrictions on, for instance, speech that entices children into sexual conduct (discussed in sec. 508 of the CDA) could be seen as restrictions on "otherwise objectionable" speech. The same may be true of restrictions on anonymous speech said with intent to threaten (discussed in sec. 502 of the CDA). But restrictions on speech that is outside the CDA's scope should not be seen as immunized by § 230(c)(2). That is particularly true of restrictions on speech on "the basis of its political or religious content"—restrictions expressly eschewed by sec. 551 of the CDA, which distinguished them from regulations of "sexual, violent, or other indecent material."

Naturally, nothing in our reading would itself prohibit platforms from blocking material. By default, they are free to do so, absent some affirmative hosting obligation that some state or federal law imposes on them; § 230(c)(2) is not such an affirmative obligation. But if a state wants to ban viewpoint discrimination by platforms, § 230(c)(2) does not preempt that choice.

[I.] One Interpretation: "Otherwise Objectionable" as a Catch-All

To begin with, we want to acknowledge the alternative interpretation: that "otherwise objectionable" is basically a catch-all phrase that should be read broadly and "in the abstract,"[3] referring to anything that the platform sincerely objects to.

Deleting posts that the service views as hateful, false, or dangerous—or banning users who put up such posts—would then be immunized by § 230(c)(2): The service would be "in good faith" "restrict[ing] access to … material that" it "considers to be … otherwise objectionable." Ideologically objectionable speech, the argument goes, remains "objectionable"; and the question is whether the provider "considers [it] to be" objectionable, not whether it's objectionable in some objective sense.[4]

Perhaps deleting a post just because it comes from a competitor, and using insincere claims of ideological objection to cover for such anticompetitive behavior, might be "bad faith."[5] Similarly, perhaps a pattern of dishonest explanation of the basis for removal—for instance, referring to facially neutral terms of service while covertly applying them in a viewpoint-discriminatory way—might be inconsistent with "good faith," which is often defined as requiring an honest explanation of one's position.[6] But there is no absence of "good faith," the argument would go, in sincerely objecting to particular ideas.

[II.] Ejusdem Generis

[A.] "Similar in Nature"

We think, though, that the better approach is to apply the ejusdem generis interpretive canon:

Where general words follow specific words in a statutory enumeration, the general words are construed to embrace only objects similar in nature to those objects enumerated by the preceding specific words.[7]

This is a commonly applied rule. Consider, for instance, the Federal Arbitration Act's exemption for "contracts of employment of seamen, railroad employees, or any other class of workers engaged in foreign or interstate commerce." That could be read, if one is interpreting the words "any other" in the abstract, as covering "all [employment] contracts within the Congress' commerce power,"[8] or at least any workers engaged more directly in foreign or interstate commerce, such as workers at hotels, people who do telephone sales, and the like. But the Court instead applied ejusdem generis to read "any other class of workers" as covering only employment contracts of transportation workers, by analogy to the preceding terms ("seamen" and "railroad employees"):

The wording of [the statute] calls for the application of the maxim ejusdem generis …. Under this rule of construction the residual clause should be read to give effect to the terms "seamen" and "railroad employees," and should itself be controlled and defined by reference to the enumerated categories of workers which are recited just before it; the interpretation of the clause pressed by respondent [as a catch-all covering all employees engaged in interstate or foreign commerce writ large] fails to produce these results.[9]

Likewise, consider Washington State Dep't of Soc. & Health Servs. v. Guardianship Estate of Keffeler, which interpreted a statute protecting Social Security benefits from "execution, levy, attachment, garnishment, or other legal process."[10] The Court reasoned,

[T]he case boils down to whether the department's manner of gaining control of the federal funds involves "other legal process," as the statute uses that term. That restriction to the statutory usage of "other legal process" is important here, for in the abstract the department does use legal process as the avenue to reimbursement: by a federal legal process the Commissioner appoints the department a representative payee, and by a state legal process the department makes claims against the accounts kept by the state treasurer.

The statute, however, uses the term "other legal process" far more restrictively, for under the established interpretative canons of noscitur a sociis and ejusdem generis, "'[w]here general words follow specific words in a statutory enumeration, the general words are construed to embrace only objects similar in nature to those objects enumerated by the preceding specific words.'" Thus, "other legal process" should be understood to be process much like the processes of execution, levy, attachment, and garnishment, and at a minimum, would seem to require utilization of some judicial or quasi-judicial mechanism, though not necessarily an elaborate one, by which control over property passes from one person to another in order to discharge or secure discharge of an allegedly existing or anticipated liability.[11]

"[O]therwise objectionable" in § 230(c)(2), then, should not be read "in the abstract" as simply referring to anything that an entity views as in some way objectionable. Rather, it should be read as objectionable in ways "similar in nature" to the ways that the preceding terms are objectionable.[12]

[B.] The Common Link Between the § 230(c)(2) Terms

And the "nature" of the terms is revealed by the nature of the Act that included them. The provision codified at 47 U.S.C. § 230 wasn't a standalone statute: It was section 509 of the Communications Decency Act, the Act that in turn formed Title V of Telecommunications Act of 1996.[13] True to its name, the Telecommunications Act dealt with a wide range of telecommunications technology, mostly the familiar media of telephone communications, broadcast television, and cable television, but also the then-new medium of Internet technology. The Communications Decency Act likewise dealt with the same range of telecommunications media. And the table of contents of the CDA[14] is particularly telling:

TITLE V—OBSCENITY AND VIOLENCE

Subtitle A—Obscene, Harassing, and Wrongful Utilization of Telecommunications Facilities
Sec. 501. Short title.
Sec. 502. Obscene or harassing use of telecommunications facilities under the Communications Act of 1934 [the text of this also covered "obscene, lewd, lascivious, [and] filthy" speech].
Sec. 503. Obscene programming on cable television.
Sec. 504. Scrambling of cable channels for nonsubscribers.
Sec. 505. Scrambling of sexually explicit adult video service programming.
Sec. 506. Cable operator refusal to carry certain programs [containing obscenity, indecency, or nudity].
Sec. 507. Clarification of current laws regarding communication of obscene materials through the use of computers.
Sec. 508. Coercion and enticement of minors.
Sec. 509. Online family empowerment [this became § 230].

Subtitle B—Violence
Sec. 551. Parental choice in television programming [mostly focused on "violent" and sexually themed programming].
Sec. 552. Technology fund [focused on empowering parents to block programming].[15]

(Two parts of one section, sec. 502, were struck down in Reno v. ACLU (1997),[16] but they too reflect what Congress in 1996 viewed as objectionable, and tried to regulate, even if unsuccessfully.) The similarity among "obscene, lewd, lascivious, filthy, excessively violent, [and] harassing" thus becomes clear: All refer to speech regulated in the very same Title of the Act, because they all had historically been seen by Congress as regulable when distributed via electronic communications.

Nor did the terms appear in the CDA by happenstance; rather, they all referred to material that had long been seen by Congress as of 1996 as objectionable and regulable within telecommunications media (even if some of them were "constitutionally protected" in the abstract, outside the telecommunications context):

[1.] "Obscene, lewd, lascivious, and filthy" speech had been regulated on cable television and in telephone calls.[17]

[2.] "Harassing" material telephone calls had also long been seen by Congress as regulable.[18]

[3.] The reference to "excessively violent" speech was part of a longer tradition of yoking "[e]xcessively violent" and indecent or "obscene" speech in discussions of regulating over-the-air broadcasting. This tradition goes back at least to the FCC's 1975 Report on the Broadcast of Violent, Indecent, and Obscene Material.[19] It endured at least until 2007, when the FCC concluded that, though "violent content is a protected form of speech under the First Amendment," "the government interests at stake, such as protecting children from excessively violent television programming, are similar to those which have been found to justify other content-based regulations."[20]

Likewise, the Television Program Improvement Act of 1990 exempts from antitrust laws any discussions or agreements related to "voluntary guidelines designed to alleviate the negative impact of violence in telecast material."[21] Throughout the 1990s, other bills in Congress singled out violent material, for instance by proposing a "Television Violence Report Card."[22] Such restrictions were ultimately rejected by Brown v. Entertainment Merchants Ass'n (2011),[23] but that case was still 15 years in the future when § 230 was enacted.

[C.] "Political or Religious Content"

Section 230(c)(2) is thus best read as immunizing Internet companies' private enforcement of rules analogous to restrictions on "obscene, lewd, lascivious, filthy, excessively violent, [or] harassing" communications—not enforcement of completely different restrictions that the companies might make up. On this understanding, "otherwise objectionable" might cover other materials discussed elsewhere in the CDA, for instance anonymous threats (sec. 502), unwanted repeated communications (sec. 502), nonlewd nudity (sec. 506), or speech aimed at "persuad[ing], induc[ing], entic[ing], or coerc[ing]" minors into criminal sexual acts (sec. 508).

But "otherwise objectionable" would not cover speech that is objectionable based on its political content, which Congress did not view in 1996 as more subject to telecommunications regulation, and didn't try to regulate elsewhere in the CDA. And this fits the logic of the rest of § 230. The subsection, which was titled "Online Family Empowerment" within the Act, is focused on increasing user control and encouraging providers to create environments free from overly sexual, violent, or harassing material. The policy findings in § 230(b) expressly mentioned

  • user self-help technologies that would "maximize user control" over what they receive,
  • "blocking and filtering technologies that empower parents to restrict their children's access to objectionable or inappropriate online material" (which fits the statutory subsection title, "online family empowerment"), and
  • "vigorous enforcement of Federal criminal laws to deter and punish trafficking in obscenity, stalking, and harassment by means of computer."

Those findings didn't discuss encouraging broader blocking (by online providers, as opposed to by users) of offensive or dangerous political ideas.

Indeed, the one reference in § 230 outside § 230(c)(2) to "objectionable" came in the policy recital supporting parental control via "blocking and filtering technologies"—and the other provision of the CDA that facilitated parental control via blocking and filtering technologies was the provision for violence and sex ratings of television programs (sec. 551), which expressly rejected attempts to restrict "objectionable" political speech. Sec. 551 said that the FCC should

[p]rescribe … guidelines and recommended procedures for the identification and rating of video programming that contains sexual, violent, or other indecent material about which parents should be informed before it is displayed to children: Provided, That nothing in this paragraph shall be construed to authorize any rating of video programming on the basis of its political or religious content …. [Emphasis added.]

And this in turn fits the Supreme Court's approach to regulation of broadcast communications: Consider FCC v. Pacifica Foundation, where Justice Stevens' lead opinion approved of the regulation of "indecent" speech, but only because such a regulation wasn't seen as targeting political content:

[I]f it is the speaker's opinion that gives offense, that consequence is a reason for according it constitutional protection. For it is a central tenet of the First Amendment that the government must remain neutral in the marketplace of ideas. If there were any reason to believe that the Commission's characterization of the Carlin monologue as offensive could be traced to its political content—or even to the fact that it satirized contemporary attitudes about four-letter words—First Amendment protection might be required.[24]

It's thus unsurprising that, when Congress gave specific examples in § 230(c)(2) of "objectionable" material that platforms could block with immunity, it offered examples of material that was objectionable for reasons unrelated to "political … content." And § 230(a)(3)'s extolling the Internet as "offer[ing] a forum for a true diversity of political discourse" is consistent with sec. 551's distinction between filtering of "sexual" or "violent" material (which Congress sought to encourage) and filtering of "political or religious content" (which Congress expressly renounced an intent to encourage).

The record of the Congressional floor debates on § 230 supports this reading. Congress passed Section 230 to overrule a New York state case, Stratton Oakmont v. Prodigy,[25] which held that an online forum's decision to engage in content moderation (aimed at providing a "family-oriented" environment) made the forum liable as a publisher for defamatory material posted by users. That holding naturally deterred any such content moderation.

In the brief legislative history, every legislator who spoke substantively about § 230 focused on freeing platforms to block material that was seen as not "family-friendly." For instance, Representative Cox, one of the bill's sponsors, explained that section 230 would give parents the ability to shield their children from "offensive material … that our children ought not to see…. I want to make sure that my children have access to this future and that I do not have to worry about what they might be running into online. I would like to keep that out of my house and off of my computer. How should we do this?"[26] "We want to encourage [internet services] … to do everything possible for us, the customer, to help us control, at the portals of our computer, at the front door of our house, what comes in and what our children see."[27] Other legislators took the same view.[28]

[D.] Avoiding "Misleading Surplusage"

This ejusdem-generis-based reading of § 230(c)(2) also explains why Congress listed a specific set of blocking decisions for which it provided immunity, rather than just categorically immunizing all "action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be objectionable, whether or not such material is constitutionally protected."

Justice Ginsburg's plurality opinion in Yates v. United States offers a helpful analogy here. The Court in Yates was interpreting 18 U.S.C. § 1519's ban on (among other things) altering, destroying, concealing, or covering up "any record, document, or tangible object with the intent to impede, obstruct, or influence" a federal investigation. The question was whether a fisherman's throwing overboard an illegally caught fish—in an attempt to keep inspectors from seeing it—qualified. Read in the abstract, the statute should have covered this: a fish is about as tangible an object as you can get. But the Court disagreed:

In Begay v. United States, 553 U.S. 137, 142-143 (2008), for example, we relied on this principle to determine what crimes were covered by the statutory phrase "any crime … that … is burglary, arson, or extortion, involves use of explosives, or otherwise involves conduct that presents a serious potential risk of physical injury to another." The enumeration of specific crimes, we explained, indicates that the "otherwise involves" provision covers "only similar crimes, rather than every crime that 'presents a serious potential risk of physical injury to another.'" Had Congress intended the latter "all encompassing" meaning, we observed, "it is hard to see why it would have needed to include the examples at all." See also CSX Transp., Inc. v. Alabama Dept. of Revenue, 562 U.S. 277, 295 (2011) ("We typically use ejusdem generis to ensure that a general word will not render specific words meaningless.").

Just so here. Had Congress intended "tangible object" in § 1519 to be interpreted so generically as to capture physical objects as dissimilar as documents and fish, Congress would have had no reason to refer specifically to "record" or "document." The Government's unbounded reading of "tangible object" would render those words misleading surplusage.[29]

Likewise, reading "otherwise objectionable" "generically," as covering anything to which someone objects, would "render [obscene, lewd, lascivious, filthy, excessively violent, and harassing] misleading surplusage."

[E.] Comparing § 230(c)(2) with § 230(c)(1)

And the ejusdem-generis-based reading fits the difference between § 230(c)(1) and § 230(c)(2):

(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

(2) Civil liability
No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

Section 230(c)(1) is broad, and lacks any enumeration of specific kinds of speech or specific torts. It doesn't, for instance, say that no provider shall be treated as publisher or speaker "for purposes of libel law, invasion of privacy law, negligence law, or other causes of action." But § 230(c)(2) deliberately enumerates a list, suggesting that Congress understood it as immunizing only certain kinds of platform-imposed speech restrictions.[30]

The Ninth Circuit expressed doubt about the application of ejusdem generis to § 230(c)(2), on the theory that "the specific categories listed in § 230(c)(2) vary greatly: Material that is lewd or lascivious is not necessarily similar to material that is violent, or material that is harassing. If the enumerated categories are not similar, they provide little or no assistance in interpreting the more general category."[31]

But we think the court missed the link we describe above: violent, harassing, and lewd material is indeed similar, in that it had long been seen—including in the rest of the Communications Decency Act, in which § 230(c)(2) was located—as regulable when distributed via telecommunications technologies. The court was correct in concluding that "decisions recognizing limitations in the scope of [§ 230(c)(2)] immunity [are] persuasive," and in declining to "interpret[] the statute to give providers unbridled discretion."[32] Recognizing the link between § 230(c)(2) immunity and the "objectionable" speech discussed in the rest of the CDA can provide the bridling principle that the Ninth Circuit sought.

To be sure, this reading would recognize that § 230(c)(2) is content-based: It provides immunity for platforms' restricting, say, "excessively violent" or "lewd" material, but not for their restricting political opinions or factual assertions about elections or epidemics. For reasons discussed elsewhere, one of us thinks such a viewpoint-neutral though content-based speech protection is likely constitutional,[33] though the other is skeptical.[34]

Finally, note again that this reading is consistent with broad platform power to restrict other unwanted speech, such as spam. As we noted, by itself § 230(c)(2) doesn't limit such platform power; it only preempts state laws that would limit such power. We doubt that states would ban spam filtering, or that courts would conclude that spam filtering offends common-law tort principles. But if states choose to protect platform users against discrimination based on ideological viewpoint, § 230(c)(2) does not stand in the way.

* * *

"[O]bscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable" in § 230(c)(2), properly read, doesn't just mean "objectionable." Rather, it refers to material that Congress itself found objectionable in the Communications Decency Act of 1996, within which § 230(c)(2) resided. And whatever that might include, it doesn't include material that is objectionable on "the basis of its political or religious content."

[1] See Adam Candeub, Reading Section 230 as Written: Content Moderation and the Beggar's Democracy, 1 J. Free Speech L. __ (2021); Eugene Volokh, Social Media Platforms as Common Carriers?, 1 J. Free Speech L. __ (2021).

[2] Pub. L. No. 104-104 (1996).

[3] Washington State Dep't of Soc. & Health Servs. v. Guardianship Estate of Keffeler, 537 U.S. 371, 383-84 (2003) (distinguishing reading such closing phrases "in the abstract" and thus broadly, as opposed to reading them to "embrace only objects similar in nature to those objects enumerated by the preceding specific words," and thus more narrowly (cleaned up)).

[4] Cf. Smith v. Trusted Universal Standards in Elec. Transactions, Inc., No. 09-cv-4567, 2010 WL 1799456, at *6 (D.N.J. May 4, 2010); Langdon v. Google, Inc., 474 F. Supp. 2d 622 (D. Del. 2007); Pallorium, Inc. v. Jared, No. G036124, 2007 WL 80955 (Cal. Ct. App. Jan. 11, 2007); Eric Goldman, Online User Account Termination and 47 U.S.C. § 230(c)(2), 2 UC Irvine L. Rev. 659, 667 (2012).

[5] Enigma Software Grp. USA, LLC v. Malwarebytes, Inc., 946 F.3d 1040, 1052 (9th Cir. 2019).

[6] See, e.g., Am. Fed'n of Teachers v. Ledbetter, 387 S.W.3d 360, 367 (Mo. 2012) ("act[ing] openly, honestly, sincerely" (cleaned up)); S. Indus., Inc. v. Jeremias, 66 A.D.2d 178, 183 (N.Y. App. Div. 1978) ("deal[ing] honestly, fairly, and openly"); Gas Nat., Inc. v. Iberdrola, S.A., 33 F. Supp. 3d 373, 382 (S.D.N.Y. 2014) ("honest[] articulation of interests, positions, or understandings").

[7] Circuit City Stores, Inc. v. Adams, 532 U.S. 105, 115 (2001) (cleaned up); see also Norfolk & W. Ry. Co. v. Am. Train Dispatchers Ass'n, 499 U.S. 117, 129 (1991).

[8] Circuit City, 532 U.S. at 114.

[9] Id. at 109, 114-15.

[10] 537 U.S. 371, 375 (2003).

[11] Id. at 383-84 (citations omitted, paragraph break added).

[12] See, e.g., Song fi Inc. v. Google, Inc., 108 F. Supp. 3d 876, 883 (N.D. Cal. 2015) ("Given the list preceding 'otherwise objectionable,'—'obscene, lewd, lascivious, filthy, excessively violent, [and] harassing …'—it is hard to imagine that the phrase includes, as YouTube urges, the allegedly artificially inflated view count associated with 'Luv ya.' On the contrary, even if the Court can 'see why artificially inflated view counts would be a problem for … YouTube and its users,' MTD Reply at 3, the terms preceding 'otherwise objectionable' suggest Congress did not intend to immunize YouTube from liability for removing materials from its website simply because those materials pose a 'problem' for YouTube."); National Numismatic Certification, LLC v. eBay, Inc., No. 6:08-cv-42-Orl-19GJK, 2008 WL 2704404, at *25 (M.D. Fla. July 8, 2008) ("It is difficult to accept, as eBay argues, that Congress intended the general term 'objectionable' to encompass an auction of potentially-counterfeit coins when the word is preceded by seven other words that describe pornography, graphic violence, obscenity, and harassment. When a general term follows specific terms, courts presume that the general term is limited by the preceding terms."); Goddard v. Google, Inc., No. C 08-2738JF(PVT), 2008 WL 5245490 (N.D. Cal. Dec. 17, 2008) (relying on National Numismatic to conclude that Google rules requiring various advertisers to "provide pricing and cancellation information regarding their services" "relate to business norms of fair play and transparency and are beyond the scope of § 230(c)(2)"); Google, Inc. v. myTriggers.com, 2011-2 Trade Cases ¶ 77,662 (Ohio Ct. Com. Pl.) ("The examples preceding the phrase 'otherwise objectionable' clearly demonstrate the policy behind the enactment of the statute and provide guidance as to what Congress intended to be 'objectionable' content."); Annemarie Bridy, Remediating Social Media: A Layer-Conscious Approach, 24 B.U. J. Sci. & Tech. L. 193, 209-10 (2018).

[13] Pub. L. No. 104-104 (1996).

[14] See id. sec. 501.

[15] Pub. L. No. 104-104 (1996) (emphasis added). We omit a procedural section, sec. 561.

[16] 521 U.S. 844 (1997).

[17] 47 U.S.C. § 532(h) (enacted 1984) (cable television); 47 U.S.C. § 223(a) (enacted 1968) (telephone calls).

[18] See Pub. L. No. 90-229 (1968) (enacting 47 U.S.C. § 223).

[19] 51 F.C.C.2d 418 (1975). For an alternative proposed common link, see Nicholas Conlon, Freedom to Filter Versus User Control: Limiting the Scope of § 230(c)(2) Immunity, U. Ill. J.L. Tech. & Pol'y, Spring 2014, at 105 ("courts should require that material be similar to the preceding terms in the respect that all of the preceding terms are characteristics that degrade the quality of the [interactive computer service] for users," whether because many parents view the described speech as making the service harmful for children, or because the described speech is "harass[ing]" and thus unduly intrudes on unwilling users).

[20] In the Matter of Violent Television Programming and Its Impact on Children, Report, MB Docket No. 04-261, at para. 5 (Apr. 25, 2007).

[21] 47 U.S.C. § 303c(c).

[22] Television Violence Report Card Act of 1996, S. Rep. No. 104-234, 104th Cong. (1996).

[23] [Cite.]

[24] 438 U.S. 726, 745-46 (1978) (emphasis added).

[25] Fair Hous. Council of San Fernando Valley v. Roommates.Com, LLC, 521 F.3d 1157, 1170 (9th Cir. 2008) (discussing "Stratton Oakmont, the case Congress sought to reverse through passage of section 230"); Hassell v. Bird, 5 Cal. 5th 522, 532 (2018) (likewise).

[26] See 141 Cong. Rec. H8469 (daily ed. Aug. 4, 1995) (statement of Rep. Cox).

[27] Id.

[28] 141 Cong. Rec. H8470 (daily ed. Aug. 4, 1995) (statement of Rep. Wyden) (arguing that filtering technology is the best solution to protecting children from "smut and pornography"); id. (statement of Rep. Barton) ("There is no question that we are having an explosion of information on the emerging superhighway. Unfortunately part of that information is of a nature that we do not think would be suitable for our children to see on our PC screens in our homes."); id. (statement of Rep. Danner) ("I strongly support … address[ing] the problem of children having untraceable access through on-line computer services to inappropriate and obscene pornography materials available on the Internet"); id. (statement of Rep. White) ("I have got small children at home…. I want to be sure I can protect them from the wrong influences on the Internet"); id. (statement of Rep. Lofgren) (arguing against the Senate approach to restricting Internet pornography); id. (statement of Rep. Goodlatte) ("Congress has a responsibility to help encourage the private sector to protect our children from being exposed to obscene and indecent material on the Internet"); id. (statement of Rep. Markey) (supporting the amendment because it "dealt with the content concerns which the gentlemen from Oregon and California [Reps. Wyden and Cox] have raised," and arguing that it was superior to the Senate approach to restricting Internet pornography). The only representative who didn't discuss material that was seen as unsuitable for children was Rep. Fields, who simply congratulated his colleagues "for this fine work." Id.

[29] Yates v. United States, 574 U.S. 528, 545-46 (2015) (paragraph break added). Justice Alito's concurrence in the judgment agreed on this score. Id. at 550. See also Circuit City Stores, Inc. v. Adams, 532 U.S. 105, 114 (2001) ("Construing the residual phrase to exclude all employment contracts fails to give independent effect to the statute's enumeration of the specific categories of workers which precedes it; there would be no need for Congress to use the phrases 'seamen' and 'railroad employees' if those same classes of workers were subsumed within the meaning of the 'engaged in … commerce' residual clause.").

[30] Some have argued that § 230(c)(1) itself protects platforms' rights to exclude whatever they wish, regardless of how § 230(c)(2) is interpreted. [Cite.] But we think that is not correct. Treating a platform as a common carrier or a place of public accommodation, or otherwise forbidding it from discriminating based on political viewpoint or other bases, isn't treating it as "the publisher or speaker" of others' information—indeed, it is the opposite, since publishers and speakers generally may exclude others' speech. See Edward Lee, Moderating Content Moderation: A Framework for Nonpartisanship in Online Governance, 70 Am. U. L. Rev. 101, pt. I.C (forthcoming 2021).

[31] Enigma Software Grp. USA, LLC v. Malwarebytes, Inc., 946 F.3d 1040, 1051-52 (9th Cir. 2019).

[32] [Cite.]

[33] [Cite.]

[34] [Cite.]