
§ 230 and the Preserving Constitutionally Protected Speech Act

My testimony today before a House Subcommittee on Communications & Technology hearing on proposed revisions to § 230.


You can see the PDF of my testimony (and the other witnesses' testimony as well), but I thought I'd also blog the text; I commented separately on five different proposals, so I thought I'd break this down accordingly. As I noted, my plan was mostly to offer an evenhanded analysis of these proposals, focusing (in the interests of brevity) on possible nonobvious effects. I also included my personal views on some of the proposals, but I will try to keep them separate from the objective analysis.

[II.] Preserving Constitutionally Protected Speech Act

This bill contains several different provisions.

[A.] Enabling State Civil Rights Laws That Ban Political Discrimination

The bill would change § 230(c)(2) to provide (in proposed new § 230A(a)(2)) that,

No provider of an interactive computer service that is a covered company [basically, a large social media platform] shall be held liable on account of … any action voluntarily taken in good faith to restrict access to or availability of material

that is not constitutionally protected or that the provider has an objectively reasonable belief is obscene, lewd, lascivious, filthy, excessively violent, or harassing.

The current version of (c)(2), on the other hand, closes with:

that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.

To oversimplify, the bill would make clear that platforms have no federal immunity when they block constitutionally protected material that isn't sexual, violent, or harassing. States could then, if they choose, limit platforms' ability to remove posts and users based on the posts' and users' political ideas (or religious, scientific, and other ideas). The bill wouldn't itself ban such political discrimination, but it would clearly allow states to do so.[1]

Whether such bans on political discrimination by social media platforms are constitutional under the First Amendment, and whether they are a good idea, are difficult questions, which I canvass in a recent article.[2] But the bill would make clear that § 230 doesn't preclude such bans.

[B.] Requirement That Users "Knowingly and Willfully Select[] … Algorithm[s]" for Displaying Content

The bill would strip large platforms of immunity when they "utilize[] an algorithm to amplify, promote, or suggest content to a user unless a user knowingly and willfully selects an algorithm to display such content" (proposed § 230A(c)(3)). Yet everything that computers do, they do via "algorithm[s]."

This means that any platform that amplifies, promotes, or suggests content to a user will have to make sure that the user "knowingly and willfully selects" that platform's "algorithm." This might simply mean that the platform will have to prompt each of its users with a "Click here to select our algorithm for suggesting material to you," and refrain from "amplify[ing], promot[ing], or suggest[ing]" any content to a user until the user clicks. If so, then that should be easy enough for the platform to do—though it's hard to see how it would help anyone.

On the other hand, if such a click isn't enough to count as "a user knowingly and willfully select[ing] an algorithm," then it's hard to know what platforms could do by way of suggesting content. Would they have to provide a choice of at least two different algorithms, so the user's action counts as truly "select[ing]"? Would they have to explain in detail each of the algorithms, so that it counts as "knowingly and willfully select[ing]"? Would they have to do something else? And what benefit would that provide to the user? It's hard to know given the current language.

[C.] Requirement That Platforms Provide Explanations and Appeals in Case of Removal

The bill would also provide (sec. 201) that

Each covered company shall implement and maintain reasonable and user-friendly appeals processes for decisions about content on such covered company's platforms….

For any content a covered company edits, alters, blocks, or removes, the covered company shall— …

clearly state why such content was edited, altered, blocked, or removed, including by citing the specific provisions of such covered company's content policies on which the decision was based ….

Sec. 201 seems to require only transparency and an appeal process, without any substantive criteria for what platforms may or may not remove; in that respect, these requirements would presumably be quite limited in scope. But the bill doesn't explain what counts as "reasonable" appeals, "user-friendly" appeals, or "clearly stat[ing]." For instance, say a platform says "we removed the material because it was pornographic / hateful / misleading / supportive of violence." Is that clear enough, or would the platform have to provide more details on where it draws the line between pornography and art? Would the platform have to explain why it views a statement as "hateful" or "supportive of violence," when the statement also has other possible meanings? Would the platform have to explain why it viewed certain controversial material as "misleading"?

Likewise, the bill states that any appeal must "provide an opportunity for [the] user to present reasons why the covered company's action should not have been taken, including demonstrating inconsistent application of such company's specific content policy at issue." Would the platform then need to "clearly state" why it views this material as deserving removal when it didn't remove past material, as to which the rules were supposedly "inconsistent[ly] appli[ed]"? How much expense, litigation, or deterrence to removal the proposal would yield depends heavily on how terms such as "clearly state" end up being interpreted.

The provisions would apparently be enforced only by the FTC (sec. 203(a)) or by state attorneys general or other executive officials (sec. 203(b)), and not by private litigants. But, as noted in Part II.A, the bill would free states to (1) ban political discrimination by social media platforms and (2) let private litigants sue over such discrimination. If some states do that, then the transparency requirements would help the private litigants marshal evidence that they were indeed discriminated against based on their political views.

[D.] "Conservative"/"Liberal" Accounts

The provisions in sec. 202(a)(4)-(5) requiring that platforms disclose "the number of [content enforcement] decisions related to conservative content and conservative accounts" and "to liberal content and liberal accounts" are likely unconstitutionally vague. There is no established definition of "conservative" and "liberal," and it's hard to imagine how such a definition could be developed in a way that is clear enough for a legal rule.[3]

[1] It's possible that even the existing 47 U.S.C. § 230(c) doesn't stop states from banning platforms from removing posts based on the posts' political views, see Adam Candeub & Eugene Volokh, Interpreting 47 U.S.C. § 230(c)(2), 1 J. Free Speech L. 175 (2021), https://www.journaloffreespeechlaw.org/candeubvolokh.pdf. But right now that's just a possibility, on which courts are divided.

[2] Eugene Volokh, Treating Social Media Platforms as Common Carriers?, 1 J. Free Speech L. 377 (2021), http://www.law.ucla.edu/volokh/pubaccom.pdf.

[3] Cf. Hynes v. Mayor & Council of Oradell, 425 U.S. 610 (1976) (striking down as unconstitutionally vague a requirement that door-to-door political solicitors register with the city before soliciting "for a Federal, State, County or Municipal political . . . . cause," because "it is not clear what is meant by" that phrase).