The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Can the State Regulate Content Moderation?
It's hard to argue that providing a pipe constitutes a speech act.
(This post is part of a five-part series on regulating online content moderation.)
Before we dream up policy recommendations for how the state might intervene when private content moderation runs amok, we should probably figure out whether the state can intervene at all. The First Amendment limits the power of governments to regulate online speech, either directly or indirectly by regulating intermediaries that host others' speech. Since most online content constitutes speech, does the First Amendment completely bar the state from regulating in this space?
That indeed is the view most academics seem to take. In support of that view—let's call it the "strong editorial rights" position—adherents often point to two cases. First, in Miami Herald v. Tornillo, the Court unanimously struck down a Florida law that required newspapers to print responses from political candidates who were criticized within their pages. Noting that a newspaper is more than a "passive receptacle for news, comment, and advertising," the Court explained that the choice of what "material [should] go into a newspaper … constitute[s] the exercise of editorial control and judgment." Interfering with that judgment, therefore, violated the First Amendment's guarantee of a free press. Nor could it be assumed that newspapers retained their editorial judgment just because the right-of-reply statute required them merely to append a small amount of additional material. Since newspapers offer only a limited number of physical pages, printing mandatory rebuttals would "tak[e] up space that could be devoted to other material the newspaper may have preferred to print." Let's call this the "space constraints" principle.
Second, in Hurley v. Irish-Am. Gay, Lesbian & Bisexual Group, the Court held that private organizers could not be compelled to include groups or messages in a public parade that they disapproved of since doing so would alter the overall message the organizers wished to convey. Eugene refers to this as the "coherent speech product" doctrine. If an entity hosts or distributes a collection of third-party content that, together, conveys an overall theme or message, then that hosting or distribution itself becomes a speech act, and the entity becomes a speech participant. The state, therefore, cannot compel the entity to host or distribute additional content if doing so would alter the overall message the entity seeks to express.
The strong editorial rights camp believes that forced carriage laws aimed at social media companies are constitutionally infirm for the same reasons. Social media companies exercise "editorial control and judgment" in deciding which user content to allow and which content to remove. Moreover, in deciding to permit certain viewpoints on their platforms (e.g., "trans women are women") while proscribing other viewpoints (e.g., "A man cannot get pregnant"), these companies are expressing an overall theme or message (e.g., trans pride or pro-LGBT sentiment) and are therefore creating a coherent speech product. Requiring them to permit trans-critical speech would effectively prevent them from communicating their preferred message and, thus, would violate their First Amendment rights.
By contrast, critics of the strong editorial rights position—let's call them the "weak editorial rights" camp—believe that certain forms of forced carriage regulation may indeed be constitutional. It isn't that these critics necessarily disagree with Tornillo or Hurley, but they aren't convinced that those precedents apply to social media. Eugene's article, Treating Social Media Platforms Like Common Carriers?, makes the skeptics' case in far greater detail, so I'll highlight just a couple distinctions here.
First, unlike a newspaper, which might be able to publish a mandatory rebuttal only by dropping another piece it would prefer to print, social media lacks any comparable space constraints. With the possible exception of spam or bot-generated content (which providers would presumably block for viewpoint-neutral reasons), laws requiring social media companies to host all lawfully expressed viewpoints would not force providers to make difficult decisions about which other posts to cut in order to make everything fit.
Second, because of their capacity to host an effectively infinite amount of user content, social media companies don't appear to put out anything approximating a coherent speech product. Not only would it be impossible to view or consume the entire universe of user content made available on, say, Reddit in order to discern an overall theme or message, but it's hard to argue that any overall theme or message currently exists on such platforms. Sure, providers may clearly prohibit certain viewpoints while permitting others, but does a collection of unrelated "can't say this" rules communicate any particular message? While platforms could argue that they present overarching themes like "decency," "dignity," or simply being on the "right side of history," skeptics say such themes are far too diffuse to constitute a concrete message under Hurley.
The weak editorial rights camp also points to Pruneyard Shopping Center v. Robins, in which the Supreme Court upheld an interpretation of the California Constitution requiring shopping centers to allow members of the public to distribute leaflets and gather signatures on their property. Although a shopping center might have disagreed with the messages it was effectively forced to host, that disagreement did not constitute compelled speech for First Amendment purposes. Nor was the shopping center's desire to provide a generically appealing environment for patrons (e.g., a non-political or family-friendly space) sufficiently concrete to constitute an overall message or speech act.
Likewise, in Turner Broad. Sys., Inc. v. FCC ("Turner II"), the Supreme Court upheld federal must-carry rules requiring cable television providers to carry local broadcast television channels, even though such providers might not agree with the content of those channels. And in Rumsfeld v. FAIR, the Court found no First Amendment violation in the Solomon Amendment, which required universities to host military recruiters despite those universities' moral objection to the military's then-existing policy on homosexuals' serving in the armed forces.
Taken together, and as cabined by Tornillo and Hurley, these cases can be said to stand for the proposition that the state can force a private company to carry third-party speech it dislikes as long as (1) the speech medium does not qualify as a coherent speech product, (2) the carried speech is not likely to be attributed to the regulated provider, and (3) the provider is not prevented from disavowing or distancing itself from the speech it is forced to carry. Given that today's large social media platforms arguably meet each of these requirements, skeptics of the strong editorial rights position believe that the state could constitutionally compel such platforms to moderate user content in a viewpoint-neutral manner, perhaps even through laws like those enacted by Texas and Florida.
So, which side is right in this debate? Unfortunately, the current caselaw doesn't unambiguously boil down to either position. Social media companies would seem to have some editorial interests in what content they choose to allow on their platforms (Tornillo) and might have some expressive interests in permitting only user speech that corresponds with their values (Hurley). But I agree with Eugene that these companies' ad hoc, automated, and ex post removal of only a fraction of content that violates their acceptable use policies seems like a far cry from the kind of careful, ex ante curation that a newspaper or a parade organizer typically undertakes. Likewise, Pruneyard, Turner, and Rumsfeld do indeed establish that the state may sometimes require a provider to host another party's speech. But I agree with Ashutosh Bhagwat that Pruneyard and Rumsfeld did not involve communications platforms, and although Turner did, the Court ultimately upheld the must-carry rules only after concluding that the regulated cable companies had certain editorial rights.
In the end, I think Jane Bambauer has the best take on the issue when she says that "there are no close analogies in First Amendment precedent for internet platforms." Instead, "online platforms are their own free speech beast." And even if there are decent arguments for applying the Pruneyard line of cases to social media, it seems more likely than not that the Court will ultimately side with the strong editorial rights camp if and when it reviews the Texas and Florida laws. Kavanaugh showed his hand when he argued that the FCC's net neutrality rules violated ISPs' First Amendment rights. And Breyer, who was famously warm to forced carriage, has yielded his seat to Jackson, who I think is unlikely to vote to allow red states to force social media companies to carry racist, sexist, or homophobic speech.
Fortunately, for my purposes, I don't need to take a position on whether social media companies possess strong or weak editorial rights over their users' content. When it comes to viewpoint foreclosure—the ability to boot a person, viewpoint, or group from the internet entirely—the entities that have the power to effect such a result operate in a very different space than social media companies or any other website operators, for that matter.
As I'll explain in more detail in tomorrow's post, viewpoint foreclosure can occur only when those entities that operate the internet's core resources—the resources that make internet speech possible in the first place—start to delve into content moderation. Those resources include IP addresses, domain names, and the actual wires and airwaves that carry internet communications. Such providers operate at a far greater distance from actual expression on the internet, functioning more like phonebooks, telephone switches, and conduits. Neither courts nor the public are likely to attribute an internet troll's racist screed to the operator of subsea cables that carry his packets across the ocean any more than they would attribute the views expressed in an Antifa pamphlet to the utility companies that provide water and electricity to the premises where the pamphlet was printed. And it would seem that the operator of the .com registry has as much editorial interest in suspending a domain name associated with a controversial website as AT&T has in revoking a phone number used by a pro-Israel (or pro-Palestine) interest group. Which is to say, not much.
Imposing forced carriage on the entities that operate the internet's core functions would seem to satisfy both the strong editorial rights camp and the weak editorial rights camp. It would satisfy the weak rights camp because the greater includes the lesser. If the state can regulate websites like social media platforms, which operate close to actual user expression, then it can obviously regulate core intermediaries, which operate much further from that expression. And it would likely satisfy the strong rights camp because even those who take a broad view of editorial discretion draw a line when it comes to ISPs. Their embrace of net neutrality rules, which prevent ISPs from blocking access to websites or services they dislike, shows they don't believe that editorial rights should extend to all communications media, especially not to those that operate as mere "passive receptacle[s]."
Thus, whether or not the state can regulate content moderation where most moderation happens—namely, on websites like social media platforms—the First Amendment likely doesn't prevent the state from regulating "content moderation" at the hands of the internet's core intermediaries. Administering foundational resources like IP addresses, domain names, and fiber lines seems far more akin to publishing a phonebook, providing a utility, or maintaining a pipe. And it's hard to argue that providing a pipe constitutes a speech act.
They should create a section 230 for newspapers and cable news, let them get used to living lawsuit free, then threaten to take it away unless they censor things Congress wants, like certain statements by political opponents before an election.
Given there seems to be no problem with this tried-and-true model.
I know you're convinced this is what happened, but your evidence is that you're convinced it happened, not that, you know, anyone mentioned it was a thing.
You have the persistence of an academic but none of the rigor.
This take is especially dumb since there are people on both sides of the political divide threatening 230 immunity with basically diametrically opposed speech outcomes suggested. Why are we to believe that the platforms are being coerced by the government when they end up doing what Democrats want but not what Republicans want?
Plus any company worth its salt knows how little you need to listen to a single member's bloviations.
The professional lobbyists know the true lay of the land.
"Neither courts nor the public are likely to attribute an internet troll's racist screed to the operator of subsea cables that carry his packets across the ocean any more than they would attribute the views expressed in an Antifa pamphlet to the utility companies that provide water and electricity to the premises where the pamphlet was printed."
This intuition that neither subsea cable operators nor electrical utility companies should care what people ultimately do with their services is correct for several technical reasons.
At the level where data or electrons are flowing through wires, the end-use is completely unknown (and irrelevant) to the operator. The undersea cable operator sees only light in pulses. The utility sees only electromagnetic energy in waves. Neither operator knows how these are going to be used.
At the level where the transported data or electrons get routed to their final destination, the end-use is also completely unknown to the operator. The operator only sees the addresses of senders and receivers, and routes the data or the electricity accordingly.
It is only at the very last level that the data is assembled into intelligible information and the electricity is used to do a human-directed task. That is the only point at which data or electricity usage becomes "content" -- and thus can be moderated.
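The layering described above can be made concrete with a small sketch. The following is a hypothetical illustration (toy packet, documentation-range IP addresses, not any real operator's code) of what a router or cable operator can actually observe at the network layer: it can parse the IPv4 header's addressing fields, but the payload is just an opaque run of bytes whose meaning only the endpoints can reconstruct.

```python
import struct

def routing_view(packet: bytes) -> dict:
    """What an intermediary sees at the network layer: header fields
    only. The payload is an opaque byte string -- its 'content' (a
    tweet, a pamphlet, anything) is invisible at this level."""
    # Unpack the fixed 20-byte IPv4 header (RFC 791 field layout).
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", packet[:20])
    return {
        "src": ".".join(str(b) for b in src),   # where it came from
        "dst": ".".join(str(b) for b in dst),   # where it is going
        "ttl": ttl,
        # All the operator knows about the content is its size.
        "payload_bytes": len(packet) - 20,
    }

# Toy packet: a header claiming 192.0.2.1 -> 198.51.100.7, payload "hello".
header = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 25, 0, 0, 64, 6, 0,
                     bytes([192, 0, 2, 1]), bytes([198, 51, 100, 7]))
print(routing_view(header + b"hello"))
```

At this level the operator can route on `src` and `dst`, but nothing in the header says whether those five payload bytes are a greeting or a manifesto, which is the commenter's point about where "content" first becomes visible.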
So what makes the collusion of Apple (app host), AWS (platform provider), CloudFlare (firewall service), and LACNIC (Internet address registrar) to collectively ban Parler so dangerous is they had to corrupt the very technical architecture of the Internet to do it. They literally had to insert malevolent code and practices into their own operations to subvert the normal operations.
All to silence just one voice.
As a very long-time AWS customer, when I heard about this, I immediately sent off a complaint to Amazon. My question was simple: should I be worried about being banned, too? Maybe some of my content will one day no longer be acceptable. Will they strip my access to my decades of data? Will they cut off my carefully developed customer relationships? Will they, in short, put me out of business?
What I got in response was a form letter assuring me that they "take things seriously." Well. Yes. We all are taking things seriously.
I absolutely would like to see some legal protections around the networked components of the Internet. Your typical social media site is very likely not a common carrier, but your ISP sure is. As is your domain name registrar, IP address admin, hosting platform, and security firewall.
And to this list I would most definitely add the entire financing network for online payments, but that is a subject that deserves its own, separate discussion.
Whatever one thinks of Apple's actions wrt Parler, they did not need to "insert malevolent code and practices" and did not "subvert [] normal operations." Apple's App Store has always been heavily curated by Apple.
That covers Apple, but what about AWS?
I don't know what AWS's usual policies are, but they certainly didn't have to insert malevolent code anywhere to take Parler (or anyone else) down. They just need to turn off their account, just like the electricity company wouldn't need to put magic electron viruses in your dishwasher to turn it off when instead they can just turn off power to your house (to keep with the utility analogy).
I do tend to agree that the lower-level service providers can (and maybe should) be treated differently than social media platforms. I'm not even sure an entity like AWS would be so mad if they were forced to host something like Parler if they could tell people complaining to them about it "not our fault--the government says we have no choice". (This is true of tech companies and content moderation generally--they'd rather not be doing it, but get forced/pressured into it for various reasons.)
I was a bit surprised to hear that LACNIC might be taking sides in the Parler debate because especially at the level of doling out IP addresses, I agree the administration should be content neutral. This article has a pretty interesting (and neutral-seeming) explanation of the situation:
https://krebsonsecurity.com/2021/01/ddos-guard-to-forfeit-internet-space-occupied-by-parler/
tl;dr: There's a guy trying to make trouble for these conservative sites, looking for rules violations that can be used against them. In the particular case of LACNIC, it seems like their DDOS provider was just pretending to be in Latin America in order to get IP address space from LACNIC illegitimately, so they decided to take it back when he pointed this out.
“Malevolent code and practices” was my charge, and I still stand by that even in Apple’s case. Parler was the top app in their app store. They refused to let it continue to gain customers because they disagreed with its politics. You will not find “must agree with Apple’s politics” anywhere in the requirements for an app to be hosted in their Store. They had to break their own rules to take them out.
Assuming the word "and" means the same thing to you as everyone else, there must surely be some examples of malevolent code used by the companies you listed to take down Parler as well?
That's factually wrong in multiple ways. No, Parler was not the top app in their App Store. What a loony comment. Maybe for one short period it was the top social media app? In fact, they did not disagree with Parler's "politics"; they disagreed with Parler's practices. (And in fact you will indeed find "must agree with Apple's politics" in there. They have rules requiring, inter alia, content moderation for objectionable content. (There are no porn apps in Apple's App Store.) They did not "break their own rules," which don't promise anyone placement in the App Store.)
Never wrestle over the precise meanings of words with a lawyer or a theologian if you ain’t one.
I withdraw my complaint re:Apple. You have the better argument.
Another disappointing foray from the baby professor.
One of the core problems with Eugene’s analysis of social media platforms – and so, replicated here, since you’ve added nothing to the discussion other than another repetitious walk through the First Amendment casebook – is that it doesn’t take social media platform products as they are.
What do you see, when you go to your Facebook or Twitter feeds? It’s not just the content uploaded by people you’ve followed, arranged in chronological order, subject to some light filtration pursuant to content moderation policies. What you see is a selection of uploaded content, arranged based on your interaction with the site; recommended posts based on your history and engagement, from people you don’t yet follow; ads from vendors you may or may not have used before or know about; cross-posts from other affiliated services; if you’re on Twitter, you’ll see blue check fascists promoted into your feed; and so on. It’s a “coherent speech product,” not expressing a single view, but delivering curated content designed to engage you and keep you engaged. Failing to acknowledge this is, if not actually fatal, a serious oversight in your argument.
At the same time, the distinction of social media platform content from physical newspapers both plays up what newspapers provide and ignores modern innovation. A newspaper’s opinion pages actually do not present a “coherent speech product” that can be taken to express a single, coherent editorial viewpoint. Apart from the traditional editorial, these sections typically include diverse viewpoints from various contributors expressing their own personal views. The WaPo ed board, for instance, can in no reasonable sense be said to approve of or endorse the garbage it prints from Hugh Hewitt.
The emphasis on the "space constraints" principle from Tornillo also requires serious attention before it's simplistically used to distinguish social media platforms from newspapers. For, if it is true that social media platforms have "unlimited" space to publish viewpoints from anyone they like, it is no less true for modern newspapers with online portals and online-only media organizations. If "space constraints" have continued relevance in modern online media – like Reason – part of that argument would seem to require acknowledging that modern online media have "space constraints" on screens and eyeballs, and in users' attention spans. But if Tornillo continues to apply to online media presences for this reason, it must similarly apply to social media platforms, which are no less actively managing screen space and user attention.
Social media platforms are not mere “pipes,” and this amateurish take achieves the admirable feat of replicating Eugene’s errors without even bothering to add anything original to them.
As for more fundamental service providers in the internet infrastructure – one can agree that their “free speech” interests in IP registries and cloud computing server space is significantly less compelling than is the case for social media platforms. But that point is something of a red herring, since the question we should be asking is not, “Does the First Amendment prohibit the state from requiring these providers to enable speech with which they disagree,” but rather, “On what basis can we validly compel these providers to engage in business with speakers whose message they find abhorrent?” We do not start, for instance, the discussion on public accommodations law by defining into existence a “right to hospitality” and then limit hotel owners’ rights accordingly – while arguing there is no First Amendment right not to “associate” with certain potential customers. We identify a distinct public good that we must argue outweighs private property owners’ interests in discriminating, and we make the argument from there.
These are some excellent points and you've saved me from having to make them. I want to add two observations, though, one of which synthesizes two of yours and one in addition:
First, it's precisely because the platforms do not just provide an unsorted list of all possible content that the "space constraints" limitation can't be ignored on social media platforms. Sure, if users were just presented with a randomly selected (or even chronologically ordered) set of all possible posts, then there would be unlimited space. But this would also not be very helpful to users as there would be vastly more content than could possibly be consumed. So the platforms are necessarily curating what content should go into the relatively higher results, and that space is very limited. Indeed, much of the controversy over the platforms' content moderation policies comes not from them removing content from the platform entirely, but changing its ranking, adding text to indicate that it may not be correct, or demonetizing it (more about this in a minute). None of these actions matches a world in which we are just debating whether or not to include a piece of content in an unlimited catalog, and even if they were forced to include the content we'd immediately end up in discussion about ranking because being included in the index but not ranked highly is not going to make the people complaining about the platforms any happier.
Second, and you've hinted at it with your mention of advertising in the feed, but the reason that platforms moderate content in the first place is because of their ad-based revenue models. There are many types of opinions that advertisers simply don't want their ads to show up next to, and therefore the platforms are moderating content in order to make their business work. Viewed through this lens, there's absolutely an expressive and opinionated perspective being made through these content moderation decisions, and that is to create a set of content that fits an overall model/community that they think comports with their advertisers' desires.
Wrong on both counts.
Users, not the medium, ultimately decide the type of content they want to see. The medium uses an algorithm to make that happen. By employing an algorithm to promote what a user sees, it is not deleting or hiding the unpromoted content. Google is not "curating" the internet by putting certain results on page one and others on page eight.
Second, the idea that advertisers care about what content they show up near on a medium where the content is exclusively user-specific is absurd. It's not as if the ad is appearing on a website with a dedicated message; it's showing up next to content that the individual user specifically sought out. It makes no sense, for instance, that a person who genuinely decides to search for racist content would be mad at a brand because its advertisement appeared next to content they were interested in. And even if this idea of brand protection made logical sense, the solution would not be to ban all "bad" content entirely, but to allow advertisers to decide what content they appear next to by taking advantage of the same algorithms that highlight that content to begin with.
My main point was that space on page 1 is limited even if it's possible to have seven billion pages. I'm actually not sure what yours is.
LOL. Tell me you've never worked in advertising without telling me you've never worked in advertising.
Twitter's loss of 50% of its advertising revenue since Elon took over is a pretty obvious data point to show just how incorrect you are. Or just read up on this long list of advertiser boycotts against content they don't want to be associated with:
https://www.forbes.com/sites/bradadgate/2020/06/17/do-advertiser-boycotts-work-it-depends/
You said that social media companies are curating their front pages/top results. That's untrue because both the searches and the results are being selected and created by users. An algorithm matches them. What can realistically be read on a screen at a given time is nothing like space constraints in a physical newspaper.
Twitter's advertising tanked because the service to advertisers turned to shit. I know this because it is literally impossible to see what ads are associated with anything without looking up that thing yourself. I know this because even the most dense person understands that online advertising on social media sites is tied to the user, not the content, because the content on either side of an ad can be dramatically different, depending on what content I decide to look for. I also know this because I have bought consumer goods for decades and I have not once stopped to consider whether the values statement on the Pop Tarts website is sufficiently woke/unwoke, any more than I have worried about whether my plumber supported Trump or Biden in the last election. It's all 100 percent performative nonsense by advertisers, driven in no small part by their too-online marketing employees in their 20s and 30s.
Wrong enough to qualify as stupid.
Recognizing that Professor Nugent is focusing this post on core internet infrastructure companies, not social media companies, I also agree with Professor Nugent’s basic argument as applied to social media companies.
Social media companies are primarily providing a technology platform for facilitating user communications, and only secondarily conducting a publishing business on that platform (if at all).
I agree that government is entitled to regulate this core business differently from any publishing activities they conduct on that platform. And just as government is entitled to break up other vertically integrated economic activities, and has done so in the past, it can provide by law, if it wants, that any publishing activities must be provided by an entity separate from and independent of the entity operating the communications technology platform. Government can cut off and foreclose any claims that a large communications technology company can somehow obtain the protections due publishing businesses for its entire operation merely by also running a publishing business on top of its platform, simply by prohibiting such a vertically integrated combination.
Not only can government regulate de facto common carriers as de jure common carriers if it wants to, it can also prohibit common carriers from vertically integrating themselves with businesses that would make regulation as a common carrier problematic.
Repeating it won't make it true.
Exactly. Calling it a newspaper doesn’t make it so.
It is solely the users’ purposes and perspective, not the company’s, that determines what it legally is. And users use it primarily, overwhelmingly, as a communications platform to facilitate their communications with other users, rarely if at all as a newspaper.
This part of the original post resonates with me: "there are no close analogies in First Amendment precedent for internet platforms. … online platforms are their own free speech beast."
So just like social media platforms aren't newspapers, they're also not telephone companies. They're something different and need to be taken on their own terms. If you're trying to treat them primarily as something that existed forty years ago, you're doing it wrong.
ReaderY, no. Internet platforms practice the same business models as newspapers and broadcasters, except they forego most news gathering, and surveil their readers to better curate ad sales.
Consider also that the success of internet platforms has come at the expense of only one class of businesses. Platforms have thrived on the basis of a government-provided competitive privilege, which empowered platforms to divert revenue previously available in the marketplace to support conventional publishers. Note that common carriers have not suffered comparably.
It is self-evident that the industry in which a business practices its activities will be the industry in which its competitive effects will be felt. For platforms, that has overwhelmingly been the conventional publishing industry, both print and broadcast. Platforms are not only publishers, they number among them the largest and most successful publishers the world has ever seen.
In a previous comment I bullet-pointed specific points of comparison between platforms and conventional publishers. You ignored that list of specifics, and now simply contradict the evidence provided with ipse dixit negation.
Also, your assertions about users' purposes would be incoherent even if you had accurately understood the analogy you imply. But you don't understand it. When you characterize platform content contributors as, "users," you neglect to notice that almost none of them pays even a penny to support the platform. They are not users, they are the product for sale, to advertisers, which are the true users of the platforms.
No, they are users. Because they use the service. Whether they pay has nothing to do with that.
My argument is that the business models internet platforms practice are completely irrelevant to their status. All that matters is user practice.
I would draw an analogy to trademark law. Once in the market, a mark is out of the company's hands, what it intended to happen is irrelevant, and cases get decided by consumer surveys. If, by a company's dedicated and costly efforts, a mark has become so famous that people use it as a generic term for the category and not to identify the source company, the source company is just out of luck.
So here. Legislators and legal theorists should look at how the market (that is, consumers) uses and regards the service, not at how the business makes its money, not at its business model, not at what the business intends or intended.
The fundamental concern of any business regulation is a business's effect on the public. And it is the public, not the business, that determines what that effect is. It is the public, not the business, that should be heard from in reaching that decision.
My argument is that the business models internet platforms practice are completely irrelevant to their status. All that matters is user practice.
That is as succinct an expression of internet utopianism as anyone is likely to come up with. Do you suppose that any court would ever rule that the business models of newspapers are irrelevant to their status as publishers protected by 1A press freedom—that all that mattered was reader demands for content tailored to their preferences?
That, and the rest of your bushwah, is just you thumbing your nose at 1A press freedom. Internet utopians always want social media platforms to be something completely new and unprecedented, so that there is no impediment to getting government to step in, regulate expressive content, and make it come out according to the utopians' own (competing) preferences.
No, that's not even a little bit true.
Nor is that. (Though it's harder to address because we're not talking about an "it" here, but a vague category of "its.")
See my comment above. Business regulation exists for the benefit of the public, and is primarily concerned with the business’ effect on the public. To determine a business’ effect on the public, government should primarily look to the public, not the business, for its information.
Of course the lawyers for the business are going to say otherwise.
And if you say this isn't true, and instead that government regulation exists for the benefit of the business regulated and not the public, and that government should look to the business concerned to see how things should be perceived, at least you'll have honestly articulated a consistent position.
All regulation at least theoretically exists for the benefit of the public. That's an uninteresting observation and has nothing to do with what regulation is permissible. If the public¹ decides that print newspapers are primarily useful as flyswatters, that would not allow the government to regulate the content of newspapers.
What you keep ignoring (well, one of many things) is that we are talking about first amendment issues. Whether the government can regulate, say, Uber as a common carrier² poses an entirely different question than whether it can regulate Facebook as one.
¹And of course "public" is a euphemism in this discussion; you really mean the government.
²Not picking on them; just trying to think of a prominent online service that isn't about speech.
ReaderY, congratulations. You have induced Nieporent to offer you a polite and informative response. You should make the most of such a rare gift, and pay attention to what he is telling you.
Unless I missed one, this post considers the thinking of three law professors -- Eugene Volokh, Ashutosh Bhagwat, and Jane Bambauer.
What might those three have in common?
Carry on . . .
McCarthy much? You made a list, congratulations!
Are you now, or have you ever been, a member of the Federalist Society?
I know you can hear yourself, because you're a partisan bad-faith troll. Again, you remain free not to read this blog if such associations bother you. Can't for the life of me understand why people who ostensibly believe in the preciousness of American democracy (no insurrection coups!) object so strongly to others exercising their fundamental liberties, including what in particular those others choose to say.
Nothing. Are you too stupid to know that being listed just means that they appeared at a FedSoc event, not that they are FedSoc members or supporters? Unlike (e.g.) ACS, FedSoc events always have liberal participants for balance.
I attend Federalist Society events and periodically encounter a speaker at such an event who is not a conservative, perhaps even far from it.
In this context, referring to "a Federalist Society event" is silly. For these three professors, Federalist Society membership seems a part-time job. Also, someone who contributes Federalist Society "commentary" -- next tab over, after events -- is not someone invited to provide a mainstream counterpoint to movement conservatism.
The three relevant professors are certified Federalist Society wingnuts. Some people who have appeared at "a Federalist Society event" do not deserve to be labeled as clingers because they are not hardcore right-wingers. But in this context that point is irrelevant and railing about it is unpersuasive and worthless.
This fledgling professor, surveying the academic terrain, covered the gamut from wingnut to wingnuttier. As was predictable, because he chose to express his views at a white, male, disaffected right-wing website.
Orin Kerr, Ilya Somin and Will Baude are all listed in the Fed Society's Commentary search results. Not exactly mouthpieces for movement conservatism. Curious, I checked further, and so are Larry Tribe, Neil Katyal, and Erwin Chemerinsky. That denotation doesn’t seem to mean what you think it does.
Profs. Kerr and Baude are committed right-wingers or they would no longer be (1) associated with this blog and (2) silent about its disgusting attributes. Somin is a strange case, but I agree he is no movement conservative. He seems more a drifting misfit searching for a home.
The others you mention are engaged by the Federalist Society to provide mainstream counterpoint. I doubt they are asked to provide "commentary" contributions to the Federalist Society unless in a strange, one-off, against-the-grain circumstance.
Volokh, Bhagwat, and Bambauer are Federalist Society-certified, Leo-class clingers. The author chose three flavors of partisan conservatism when purporting to survey the field. I suppose that is why he was picked to post here, why he was willing to post here, and why he is unlikely to ever be found in the mainstream or strong sections of modern American legal academia.
Carry on, clingers. So far as your betters permit. Not a step beyond.
Your litmus test for committed right-wingers is both question begging and ipse dixit. Also, I'm a Barack Obama liberal, so calling me a clinger is, among other things, ironic.
But note Kirkland's talent for stylish invective, when you can get him out of his rut. See how succinct he was above, and the precision with which he meted out critiques.
I don't completely agree with Kirkland's targeting, but few other commenters here even aim.
Come on Kirkland, don't hide your light under a bushel. Try harder more often.
I believe his argument here, stylish or not, is wrong. But I do agree that replacing his usual, tiresomely repetitive shtick with more such substantive arguments, right or wrong, would be a big improvement.
There are some notably good points raised by SimonP above.
A curiosity -- rather than rhetorical -- question: leaving any notion of "education" [whatever that word currently means] aside, what differentiates the speech of a [public or private] university and its faculty from that of an online social media platform and its users?
Well, your question is vague, but obviously one difference is that the faculty of a university are employees.
That the description of this author still refers to "UT" suggests an intent to mislead readers into believing the author is associated with a strong school -- University of Texas at Austin -- rather than a weaker, backwater school (Tennessee).
The declaratory sections (a & b) of 47 U.S. Code § 230 declare the Internet to be a public form, and the definition (f)(1) of the Internet explicitly tells us that the Internet is a network of federal and non-federal networks.
The Internet is substantially composed of federal networks and technology that a social medium platform intrinsically uses in creating its forum within the public forum of the Internet. A social medium platform is inextricably intertwined with the government if it operates within the Internet. It is the situation of Burton v. Wilmington Parking Authority at both the state and the federal level.
A social medium platform does not have to host unprotected speech like obscenity, and it could restrict its forum to children (but not specifically to white children).
SCOTUS needs to sort out the public forum and state action issues correctly.
As far as I can discern, every party and every judge in § 230 litigation has believed the Internet operates by magic. According to Nick Nugent’s CV, he should have a better understanding of the non-magical technological issues.
Even correcting for the typo, this is false. (In fact, the word "public" appears nowhere in the statute.)
This is gibberish and wrong. (Social media platforms do not "operate within the Internet," for instance; that's a phrase with no meaning.) Every retail store in the country relies upon public roadways, both to obtain its products and to receive its customers. Not to mention relying on other government services. That does not mean that all stores are "inextricably intertwined with the government."
It has. And it — like Congress — has squarely rejected your notions.
(a) (1) "our citizens"
(a) (3) "forum for a true diversity of political discourse"
(a) (4) "benefit of all Americans"
(a) (1), (3), and (4) refer to the Internet and seem to designate the Internet to be a forum for the American public, to wit, a public forum.
I am applying the ordinary definitions of English words. The public forum issue of the Internet and the state action issue relative to the Internet need to be correctly argued before SCOTUS. In which decision has SCOTUS addressed either question?
The Internet is a big machine. Every device connected to the Internet is part of the big machine.
The analogy to the phone network should be obvious. The US phone network is a big machine. Every device connected to the phone network is part of the big machine including a customer premises handset.
The road, which passes by the supermarket, is not part of the supermarket. These issues were all settled decades ago -- in some cases over a century ago.
The Internet is not a big machine, except in some metaphorical sense, and of course devices connected to the Internet are not "part of" the Internet. The Internet, which isn't actually a discrete thing, which passes by Twitter, is not part of Twitter. (Or vice versa.)
The Internet is a giant composite completely connected device or machine just as the phone network is a giant composite completely connected device or machine. There is only one major difference. The phone network uses circuit-switching while the Internet does not. The Internet uses packet-switching pervasively while only certain parts of the phone network use packet-switching.
I could probably explain more succinctly. Without an established connection between the user's computing device and the social medium platform's backend server (the Internet is a giant device that creates a transitive, fully connected network), there is no way to transport information back and forth between the user's computing device and the server. In the phone network, voice communication is not possible without establishment of an end-to-end circuit by means of a plethora of devices connected within the giant device of the phone network.
David Nieporent appears to believe that the phone network and the Internet function by magic.
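The end-to-end connection model that the comment above describes can be sketched with a short Python example using the standard `socket` module. This is a minimal illustration, not anything from the thread: the `echo_once` function, loopback address, and message are invented for demonstration. It shows that TCP supplies a logical, established connection over packet-switched infrastructure (in contrast to the phone network's physical circuits), and that no data moves between client and server until `connect()` succeeds.

```python
import socket
import threading

def echo_once(message, host="127.0.0.1"):
    """Demonstrate that transport requires an established connection:
    a TCP client can send bytes only after connect() succeeds."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, 0))              # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def serve():
        conn, _ = srv.accept()       # blocks until the client connects
        conn.sendall(conn.recv(1024))  # echo the received bytes back
        conn.close()

    t = threading.Thread(target=serve)
    t.start()

    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect((host, port))        # the logical "connection" TCP provides
    cli.sendall(message)             # carried over packet-switched links
    reply = cli.recv(1024)
    cli.close()
    t.join()
    srv.close()
    return reply

# echo_once(b"hello") returns b"hello"
```

Note that the "connection" here is purely logical state at the two endpoints; the packets in between may take any route, which is the key difference from a circuit-switched phone call.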
You said that social media companies are curating top results and therefore space constraints are real. That's demonstrably untrue when the top results are selected by the user's own preferences.
Twitter's advertising tanked because the advertiser service went to shit, not because of brand association. I know this because the only way to see whether one's ad is appearing next to, say, a post glorifying Nazis is to willingly and voluntarily look up Nazi content. And for all the activist hall monitors who did so deliberately just to make a story, they did not come close to plumbing the sheer depths of bizarre shit that is on Twitter (from puppy-stomping animations to used-underwear sales and diaper fetishes) deeply enough to really understand what is appearing next to what.
I also know this because I have common sense as a consumer, and I have never stopped to wonder what type of posts Pop Tarts ads have appeared next to at some point on a platform, any more than I have interrogated my plumber to discern his opinions on abortion before letting him into my house. It is all, 100 percent, performative garbage with no grounding in how people actually think.
"I don't pay attention to this as a consumer so therefore advertisers don't pay attention to this as advertisers" is certainly a thought, but not a very intelligent or informed one. It's not like this is something unique to Twitter; advertisers in newspapers have long insisted that their ads not be next to certain type of content. (Not Nazi content, obviously, but they don't want to be, e.g., near war coverage.)
Grimes, publishers get feedback from advertisers after advertisers get complaints from customers who hate the content they see an advertiser supporting.
It isn't what you see (or any other single person sees) that counts. It is the advertisers' impression of their own feedback from everyone that counts. And that can count a lot, or only somewhat.
Publishers have to manage that, and make decisions on a case-by-case basis, weighing competing interests (often worthy content vs. troubled advertisers) as they do it. Cases differ. Publishers differ. Some publishers have a talent for persuading advertisers to financially support a publication with which the advertisers disagree. They can publish more freely than other publishers who are less well positioned, or less persuasive.
Leaving publishers at liberty to choose content as they please is the only response adequate to answer such a wide-ranging public policy need. Press freedom depends critically on policies that enable private publishers to continue in business, paying their bills with no more support than the revenue they can raise from their publishing activities. Take that away, and press freedom is doomed.