Social Media Platforms' Recommendation and Conversation Functions
Another excerpt from the First Amendment section of my Social Media as Common Carriers? article (see also this thread); recall that the key First Amendment arguments are in this post, which relies on the PruneYard, Turner, and Rumsfeld precedents, and in this one, which explains why Miami Herald, Hurley, and the various other "common theme" precedents don't apply.
[* * *]
[A.] The General Unconstitutionality of Compelled Recommendations
Social media platforms, of course, provide more than just a way for readers to read speakers whom they choose to read (whether by going to a page or subscribing to a feed). They also, in effect, recommend new material to readers, for instance under "What's happening" or "Who to follow" in the right sidebar on Twitter; under "People You May Know" on Facebook; on the front page of YouTube; and much more. This is even clearer for Facebook's news feed or Google News.[255]
Now this, I think, is indeed the platforms' own speech, and the government may not tell the platforms how to compose it.[256] If they want to recommend "Who to follow" based on who expresses views that the platforms like, they have to be free to do that—just as the shopping malls in PruneYard, the cable operators in Turner, and the universities in Rumsfeld remained free to choose which speakers or cable channels or recruiters to specially promote. (Likewise, the publishers' own speech in yellow pages directories is protected by the First Amendment, even if the directories are printed under contract with common-carrier phone companies.[257]) And that remains so even if the decisions are made in part by algorithms; I have made that argument as to search engine search results, and I think the same is true for social media platform recommendations.[258]
[B.] The Conversation Function
The closest call, when it comes to the compelled speech doctrine, has to do with the decisions platforms make in managing conversations, for instance Facebook's, Twitter's, or YouTube's choices about which comments to allow on other people's posts or pages.
This sort of management is often protected by the First Amendment. In Hurley, the parade case, the Court noted that,
Rather like a composer, the Council selects the expressive units of the parade from potential participants, and though the score may not produce a particularized message, each contingent's expression in the Council's eyes comports with what merits celebration on that day.[259]
The same is true for many offline conversations. Teachers shape conversations not just through the questions they ask, but through the rules they set for student participation, and through occasionally declining to call on a student who has violated the rules (or has just talked too much). Conference moderators shape Q&A exchanges by cutting off questioners who are rude, who go on too long, or who orate instead of asking questions. They may not be trying to promote particular messages of their own, but "a narrow, succinctly articulable message is not a condition of constitutional protection."[260]
My Twitter feed or my Facebook page is pretty much "individual, unrelated segments that happen to be transmitted together for individual selection by members of the audience."[261] But my comments on your Twitter feed or Facebook page don't just "happen to be transmitted together" with others' comments on the same material; they are consumed by readers (including both you and your other readers) together, as part of a conversation. Platform editorial judgments can help influence whether it's a polite conversation or a rude one—and if the conversation is polite, that will usually encourage more people to participate in it.
Barring platforms from editing will thus "interfere[] with [the platform's] desired message" by "alter[ing] the expressive content" of the conversations that they are seeking to create.[262] A curated conversation will no longer be a form of speech that the platform can legally provide.
Now of course some degree of conversation can and does take place even among separate feeds—just as the speech in PruneYard or Rumsfeld could have created a conversation. Passersby who receive a leaflet or who are asked to sign a petition could start a discussion or an argument with the speaker, and others could join in. Indeed, such speech could lead to organized counterspeech.
Likewise, as noted above, military recruiting on university campuses led to plenty of "responsive speech":
[S]peech with which the law schools disagree [has] resulted in, according to the record, hundreds (if not thousands) of instances of responsive speech by members of the law school communities (administrators, faculty, and students), including various broadcast e-mails by law school administrators to their communities, posters in protest of military recruiter visits, and open fora held to "ameliorate" the effects of forced on-campus speech by military recruiters.[263]
Nonetheless, the Court has sharply distinguished the hosting cases, such as PruneYard and Rumsfeld, from the coherent speech product cases, such as Hurley. Platforms' decisions not to host speech strike me as quite close to the decisions not to host in PruneYard and Rumsfeld. Platforms' decisions about what to allow in a comment thread seem closer to parade organizers' decisions about what to allow in a parade.
[255] See Langvardt, supra note 23, at 7.
[256] See supra Part II.F.1. This is one of the reasons a federal court struck down Florida's social media access mandates: "[T]he statutes compel the platforms to change their own speech in other respects, including, for example, by dictating how the platforms may arrange speech on their sites. This is a far greater burden on the platforms' own speech than was involved in FAIR or PruneYard." No. 4:21CV220-RH-MAF, 2021 WL 2690876, *9 (N.D. Fla. June 30, 2021).
[257] Dex Media West, Inc. v. City of Seattle, 696 F.3d 952, 957 (9th Cir. 2012).
[258] Eugene Volokh & Donald M. Falk, First Amendment Protection for Search Engine Results, 8 J.L. Econ. & Pol'y 883 (2012). I cowrote that article as a paper commissioned by Google, but I endorse those views wearing my academic hat and not just wearing my lawyer hat. But see Langvardt, supra note 23, at 6 (suggesting that even the recommendation function might be properly regulated); James Grimmelmann, Speech Engines, 98 Minn. L. Rev. 868, 950 (2014) (arguing for a model that focuses on whether the search engine is loyally advising its users).
[259] Hurley, 515 U.S. at 574.
[260] Id. at 569; see generally Volokh, Freedom of Speech in Cyberspace from the Listener's Perspective, supra note 89, at 385–98.
[261] Hurley, 515 U.S. at 576.
[262] Rumsfeld, 547 U.S. at 63–64.
[263] 390 F.3d 219, 239 (3d Cir. 2004), rev'd, 547 U.S. 47 (2006).