The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Social Media, Freedom of Speech, and Common Carriers: Response to Adam Candeub
If adopted by the Supreme Court, Prof. Candeub's approach would be a grave menace to freedom of speech.

In a recent guest post at this site, Prof. Adam Candeub has put up a thoughtful critique of my argument that government cannot use "common carrier" status to severely restrict the rights of social media firms to engage in content moderation on their sites. While I appreciate Candeub's effort, I remain unrepentant. Indeed, one valuable aspect of his argument is that it highlights the dangerous implications of the common carrier theory currently being advanced by Texas and Florida in their attempts to defend their social media laws before the Supreme Court.
Candeub argues that state governments can impose common carrier status on social media firms on the basis that they have "market power" or simply because they must be compelled to "stay in their lane." Either theory would have drastic implications for freedom of speech.
As I have pointed out previously, social media firms certainly don't have anything like monopoly power in the sense of being able to prevent widespread dissemination of speech they refuse to host on their sites. All of the examples of "censored" speech Candeub and others cite - revelations about Hunter Biden's laptop, anti-vaxxer speech, critiques of Covid policy, speech supporting Trump's claims that the 2020 election was stolen from him, and so forth - received wide circulation elsewhere, particularly in major right-wing media outlets, such as Fox News.
If the argument is that posting this speech on social media sites such as Twitter or Facebook would have enabled it to reach a bigger audience or a different group of people from that reachable through other sites, that argument can be used to justify abrogating the speech rights of a wide range of media outlets and other organizations. I explained why here:
Even if Twitter and Facebook don't actually monopolize the market for political information, it's certainly true they reach various potential audiences that are difficult or impossible to reach in other ways. But, if that justifies forcing them to abjure restrictions on content, the same theory would rationalize imposing the same requirements on other types of media. Fox News, the New York Times, the Wall Street Journal, and a variety of other major broadcast and print media outlets also reach large audiences that can't always be easily reached in other ways. By that rationale, they too can be forced to be common carriers!
Candeub is right on one point. It does feel as if he and I "live in different worlds." I live in a world where there is extensive right-wing media ready, willing, and able to broadly disseminate speech that left-wing social media outlets may prefer to exclude - and vice versa. And I live in the world where all of the viewpoints discussed above do in fact enjoy widespread dissemination. Don't take my word for it! Search for them using Google (or any other search engine), and you will quickly see how easy it is to access them - including on sites with large audiences.
There is no monopoly power here. And if mere disproportionate influence - defined as "market power" - is enough to justify government coercion of social media firms to post material they disapprove of, it can justify similar measures against any major media outlet. Fox News could be forced to air left-wing speech it would otherwise reject, the New York Times could be forced to publish more material by MAGA types, and so on.
The "stay in their lane" argument has similar awful implications. The "lane" occupied by social media firms has never been limited to completely "neutral" dissemination of material regardless of viewpoint. They have always exercised editorial judgment, and most consumers want it that way.
Candeub argues this issue can be dealt with by distinguishing "content" moderation from viewpoint restrictions, thus potentially allowing social media firms to still exclude material that constitutes "harassment" or is otherwise "unpleasant." But content and viewpoint are often closely linked. For example, obscene content or nasty - "unpleasant" - language is often used to underscore a point. Moreover, many users might prefer an experience free of viewpoints they consider offensive or wasteful of their time, such as Holocaust denial or "flat earth" advocacy. Such substance-based curation is a standard feature of social media firms. Every major social media site - including Twitter/X under Elon Musk - engages in it.
The advantage of free-market competition and choice is that people who dislike one firm's content moderation have other options. Candeub notes that many left-wingers remain on Twitter despite Musk's takeover and introduction of rules they dislike, and suggests that this proves the firm has "market power." But, in fact, many Twitter users have left since he took over - a 23% decline in US usage since Musk took control in November 2022. Presumably, those leaving include many of those who dislike his policies the most.
The "common carrier" policies imposed by Texas and Florida and defended by Candeub would eliminate most such choice. They would impose a single mandatory system of content regulation on all major social media firms. That kind of coercion is an obvious menace to freedom of speech.
Candeub also brings up the by now familiar analogy between social media firms and enterprises like phone companies and mail carriers. In a previous post on this subject, I criticized that analogy as follows:
With rare exceptions, phone calls and letters only reach a small, specifically intended audience. Unless they are illegally tapping the line, the general public does not and should not have access to your phone conversations. Ditto for your mail. By contrast, the whole point of most political discourse on social media is the ability to reach a large audience all at once. But an information product that reaches a large audience simultaneously usually works better if it has at least some moderation rules, and other constraints that enable consumers to find the material they want, while avoiding harassment, offense, and other things that make the experience annoying, unpleasant, or simply a waste of time.
Candeub protests that "common carriers carried newspapers and magazines and other material that was political discourse meant for a large audience." This overlooks the obvious reality that any individual package transported by such a carrier was in fact directed at a specific individual or small group. It was not posted on a site seen by millions of people at once. That, of course, is even more true of phone calls. On social media sites (and other websites with large audiences), the content is simultaneously visible to thousands or even millions of people. The latter scenario requires more extensive content control than the former.
Candeub suggests that social media users can control what they see by techniques like blocking content. But such tools are imperfect. Regular users of sites like Twitter and Facebook often encounter content they find annoying, time-wasting, or objectionable. Moreover, many users might find it tiresome to have to constantly block material.
Others, by contrast, might prefer to have little or no content moderation. And that's fine, too! The existence of such divergent preferences is an important consideration against letting government mandate a one-size-fits-all content moderation policy for all platforms.
In the last part of his post, Prof. Candeub laments various efforts by governments to "silence critics" by coercing social media firms into taking down content. I too oppose coercion. But it doesn't follow that "Only Texas's H.B. 20 stands against" such dangers. Such claims overlook the obvious alternative of banning coercion across the board: with extremely rare exceptions, government should be equally barred from forcing social media firms to take down content (as the Biden Administration apparently sought to do in some cases) and forcing them to put it up (as Texas and Florida seek to do). Rather than fighting one type of speech coercion with another, we can enforce the First Amendment, and prevent both.
Is Professor Somin claiming that when people send mass mailings to e.g. everybody in the state, the Post Office or other carrier gets to apply editorial judgment about what to carry and/or gets to alter the content? Such mailings certainly happen. I don’t see the distinction from the way social media companies operate. The social media company sends each and every instance of a post to an individual internet address, just like the Post Office does with physical addresses.
The. Post. Office. Is. The. Government.
So it would be true if done through FedEx? Or if the post office was privatized as has been periodically proposed?
So what justifies common carrier status for telephone companies?
How do you distinguish verizon from Facebook?
Did telephone companies in the 1950's and 1960's really have a First Amendment right to cut off service for the purpose of stopping the spread of Misinformation®™ about racial equality?
Facebook doesn't hold itself out as serving all comers.
You invert cause and effect here, David. Verizon holds itself out as serving all comers because it is compelled to. It is compelled to because it was designated as a common carrier long ago.
So back to Michael's question - if the legislature could designate Verizon as a common carrier and compel it to serve all comers, what is the legal basis for why they can't do the same to Facebook?
Why do you think the legislature can do that? You're assuming that the legislature is imposing an unwanted status on Verizon. But Verizon may be happy to be designated a common carrier, since that doesn't clash with Verizon's business model.
I do not buy into the forced common carrier approach of Prof. Candeub and Texas. But I also do not think the states are powerless in this area.
If one acts as a publisher, one is liable for defamatory statements. If one acts as a common carrier, one is not held liable for the content of comments posted by others. Now comes sec230 which, read properly, allows ISPs to remove objectionable materials without being treated as a publisher. But sec230 has been abused to allow platforms to curate content based on viewpoint without being treated as publishers, thus avoiding liability.
Those who argue for the current broad reading of sec230 claim that disallowing viewpoint discrimination is a violation of the first amendment. However, that is a non sequitur, as the first amendment does not protect defamatory speech, and no one is forced to be a publisher. There is a choice to be made by the ISP: either accept all comers as would a common carrier, or curate content based on viewpoint and accept the responsibility that comes with being a publisher.
1)Government cannot force publishers to refrain from viewpoint discrimination.
2)Holding publishers liable for defamation does not violate the first amendment.
3)Government can treat platforms that curate based on viewpoint as publishers.
Of the three statements above, the first two are undoubtedly true, while the third is at the heart of the Texas Social Media law. Does sec230 preempt states from treating platforms that curate based on viewpoint as publishers? If the present overly broad view of the protection from liability resulting from sec230 is maintained, then state laws treating such platforms as publishers will very likely be struck down as preempted.
But what is to stop any publisher from claiming they are merely a platform that curates content based on viewpoint? What would stop the owner of an ISP from claiming to operate a platform that, finding all other viewpoints objectionable, only hosts viewpoints that perfectly match his own while claiming sec230 protection from liability? The court should consider that a rose by any other name is still a rose and roll back sec230 to maintain the distinction between a publisher and a platform/common carrier.
Lastly, taking account of statements 1 and 2 above and the First Amendment, the penalty a state might inflict on a platform that refuses to carry all viewpoints cannot go further than removal of liability protection. Thus, an ISP that refused to pay the penalties provided in the Texas statute could be declared by Texas to be a publisher and so face liability.
If you don't want § 230 to exist, then, well, that's a bad idea from a policy perspective, but it's a defensible position. But what's not a defensible position is calling that "abuse" of § 230. That was the entire point of § 230.
It is not. HB20 says that covered platforms cannot "curate based on viewpoint." It doesn't say, "If they do they will be liable for their content."
Yes, of course. It explicitly says so.
That is a legislative choice. Congress can "roll back § 230"; the Court cannot. (And that is not "rolling back" § 230; it is repealing it.)
"But what’s not a defensible position is calling that “abuse” of § 230. That was the entire point of § 230."
That is of course a highly disputed contention, notwithstanding your constant refrains that it isn't.
I don't think it is that disputed; in fact, it's pretty much the only way social media is a viable business model.
The only reason it's being "disputed" is that trolls are mad that social media companies find it's in their interest not to carry their content.
Incorrect. The issue is not what makes social media a viable business model, nor whether they should be held liable for any content or anything in particular.
Instead, the issue is precisely what does section 230 mean, and what specifically is the scope of its grant of immunity and preemption of common law and state law. The conclusion from the article I linked below:
"[O]bscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable” in § 230(c)(2), properly read, doesn’t just mean “objectionable.” Rather, it refers to material that Congress itself found objectionable in the Communications Decency Act of 1996, within which § 230(c)(2) resided. And whatever that might include, it doesn’t include material that is objectionable on “the basis of its political or religious content.”
Ok, looking at the article, I think it's fair to say that platforms can't curate on viewpoint.
Though I think it's also fair to say that Facebook, YouTube, and old Twitter weren't actually curating on viewpoint. Their moderation decisions were generally based on abuse and harassment. The reason Conservatives claimed it was viewpoint dependent is that there were a lot of accounts very popular among Conservatives that were really abusive.
The services actually violating 230? Truth Social, Parler, the alt-right networks that popped up because of perceived bias on the mainstream networks. These networks most definitely moderate based on viewpoint.
"Yes, of course. It explicitly says so."
"the Court cannot"
Again, others disagree, e.g. https://www.journaloffreespeechlaw.org/candeubvolokh.pdf
Hello Prof Somin,
You often make very stimulating posts, but I have one quibble. In the future, would you mind posting your articles using the "read more" feature? That way we don't have to scroll and scroll just to see what other posts there are. See Prof Volokh's posts. It just makes it easier to navigate the site.
Thank you!
Jeremy
What about Internet Service?
I currently have multiple ways to reach the Internet - multiple mobile networks, fixed-line broadband, and wi-fi hot spots at coffee shops, etc. - so none are monopolies.
I can use the Internet to publicly broadcast to millions of people.
Does that mean that Internet providers are also not common carriers and should be allowed to restrict what information I can upload and the web sites I interact with?
I do not think there is any requirement that a common carrier be a monopoly.
I think it is perfectly reasonable to designate a company a common carrier when:
1. The vast majority of the content it transmits is produced by others and sent with little or no editorial review by the company
2. It transmits a vast amount of information (much more than one person can consume) to large numbers of people, each of whom consumes only a tiny fraction of the information it provides.
3. It is one of the companies with top market share in its industry. (So I think it is reasonable to regulate Tiktok and Youtube as common carriers but not the comments section of Reason)
I do not think that Facebook is indulging in speech as defined by the First Amendment when it restricts vaccine misinformation, for example.
To the extent that people want to be protected from some kinds of offensive content, I think it is perfectly reasonable for a social network to provide an optional content filter - for example, "Only see content that passes Facebook's automated content filters."
Issues like content demonetization and whether people can be coerced to use the automated content filter by schools, employers, etc. are open issues that would need to be resolved.
I like your logic but your rule 3 is unworkable. It doesn't put companies on notice beforehand of what behavior will impose the obligation to act as a common carrier. Market share is not something a company can control. How, for example, would Tiktok even know that it is approaching whatever threshold you set, much less make its users stop using it enough to stay below that threshold? Tiktok's market share seems obvious in hindsight, but for the companies to be on prior notice, the rule has to be workable in foresight. I don't see it here.
Why does it have to be workable in foresight? Modern antitrust law certainly is not - companies often do not know if a merger will be approved.
Certainly the FCC could create more detailed regulations on what triggers common carrier status.
What does it mean to designate a company or a class of telecommunications/internet companies as common carriers?
1) Does it mean to regulate them with some type of public accommodations law or by prohibiting viewpoint discrimination, for example, or 2) does it just represent a constitutional legal conclusion that they could be regulated thusly without running afoul of the First Amendment?
I certainly think states should be free to do things like Texas is doing here. Whether it’s actually a good idea in all of its details, I don’t know, but that’s the beauty of letting states do things and seeing what works for different political preferences.
Overall, social media companies are much more like email and telephone than the pro-big tech side wants people to believe. Most people just use it to communicate with their friends. Somin argues that there's no monopoly power if the information Facebook censors is available elsewhere. But that's not the issue. Facebook was preventing people from communicating certain disapproved information with their friends and family.
The analogy of social media to telephone and mail companies doesn't work because there is a higher level of carrier that people ignore. The ISPs, I think, are indeed common carriers connecting people to the internet, but individual sites are then what is sent. That is, all internet sites are the communications, not the carriers.
The irony is that the same Republicans pushing these Facebook/Twitter nationalization laws under the guise of regulating "common carriers" had a tantrum when the FCC tried to regulate ISPs as common carriers under the name "net neutrality."
I'm ambivalent about treating social media companies as common carriers, but Somin's arguments comparing Facebook and Twitter to Fox and the NYT are not good.
If you ask people if they think Social Media is more like a newspaper/cable news station or a group text service from your phone provider, I think they'd say the latter.
If they’re not a common carrier, then they should be responsible for libel when they give a false and defamatory reason for censoring content.
They are responsible for libel if they give a defamatory reason for "censoring" content. "We blocked your post because of X" is a statement of the company, not a third party, and thus isn't covered by § 230.
But… they don't give defamatory reasons for blocking content. They say, "It doesn't meet our standards." Which as a matter of law is not defamatory.
Oh, really? E.g., the reasons given for censoring the Great Barrington Declaration and its authors?
You can't make a call through a phone company without using a phone, either theirs or a customer owned phone, and can't make many calls without using backhaul infrastructure.
ISP's provide connection to the internet, but to actually use it you have to have a means of communicating that is common to the people you are trying to communicate with. The social media companies provide these means. They also have settings for visibility of your communications. While their reasoning may apply to posts that are visible to everyone, they should have no ability to censor posts with visibility limited to friends or individuals.
My difficulty with the “curated content” argument is that, while the executives and lawyers of these companies may think this feature is essential to their business model, I do not think this is at all the case from the point of view of the users. Curated content primarily serves the interests of the companies by increasing their advertising revenue. From the user’s point of view it is mostly a burden. “Curated content” is not entirely, but in practice is largely, targeted advertising and pay-for-audience posts, features more analogous to an advertising firm cum vanity press than a genuine traditional publisher.
I think most users would be perfectly happy to have posts come in in the order of receipt, with features to order and search themselves and do their own curation. And if users had the power to do that, I suspect most of these advertisements and vanity posts that are such drivers of social media companies’ revenue would end up getting filtered out.
It is the point of view of the business’ users, not its revenue model, that determines susceptibility to common carrier status and discrimination law.
I think most of this “curated content” is bullshit. Social media companies are using their market power to take advantage of their captive audience – an audience mostly there for entirely different reasons and that mostly wants to do entirely different things – to bombard them with targeted advertising and vanity posts most users wouldn’t actually want if they had the power to avoid them. The actual use most users want to make of social media is not the source of the revenue. Most users find the revenue-generating stuff a damn nuisance, a distraction from their experience and needs, that they are forced to put up with to be able to access it for what they actually need it for.
And that’s what this is really all about.
It is the way users use social media, not the way the businesses currently make money, that determines susceptibility to common carrier and discrimination law status. It’s not the plans the owners currently have for a wharf that determine its susceptibility to state regulation to ensure public use. It’s the use the PUBLIC wants to make of it.
The public use element, use from the point of view of the public and not the business owners and their lawyers, has never even been acknowledged by opponents of common carrier status. They act as if the state has to blindly accept whatever they claim their business is.
If the owner of the only wharf in town used the wharf solely to publish a paper, the town could still require the owner to give berth to any boats coming in. The owner of the town’s only wharf doesn’t get to decide what business he’s in. If he has a wharf, the town can force him to be in the wharfing business, and take all comers, whether he wants to or not. It just doesn’t matter what business he’s in. What matters is whether he has assets the public uses or needs to use as utilities. That’s the critical point.
If the wharf owner wants to publish a newspaper on top of running a wharf, he of course can. If he doesn’t want to run a wharf, he can split the businesses, sell the wharf to someone else, and publish his newspaper elsewhere or in a limited space on top of the wharf. But publishing a newspaper on top of a wharf, and calling the entire business a newspaper business, doesn’t shield the owner from the town’s needs or its power to regulate the wharf, as a wharf, to meet its needs.
Same here. Social media companies run their “curated content” on top of critical public utility infrastructure the public uses for ordinary communications, same as a wharf owner running a newspaper on top of a wharf. The state can require social media companies to open up the underlying infrastructure to public use.
Counterpoint: no, it isn't. ReaderY is just making shit up again.
It figures that ReaderY is a Communist who thinks it's appropriate for the government to eliminate core aspects of a private company's activities because he figures they're mistaken about whether those activities are essential to their business model.
What Prof. Candeub doesn't seem to understand is that the content moderation and curation is what makes or breaks a social media site, and without some control on the part of the site it will degenerate into a mess that nobody wants to use.
Anybody who remembers usenet understands that phenomenon all too well.
So, while perhaps his and EV's arguments that the government *could* regulate social media as a "common carrier" are legally sound (I'm skeptical, but will stipulate to that for the moment), imposing common carrier status will make the sites useless for most people. Resulting in *less* exchange of ideas, not more.
Of course, the above assumes that any common carrier rules imposed would not be the self-contradictory impossible-to-follow mess that is the poorly crafted Florida and Texas laws.
Suppose Congress made a law that said no provider of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable. Would that suffice to prevent a site from degenerating into a mess?
The choice is not simply between having sec230 or not having it. The middle ground is to read sec230 to allow a platform to moderate comments in good faith while not being held liable as a publisher would be. But engaging in viewpoint discrimination where no obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable language is present would result in a platform being treated as a publisher.
Why would a company be moderating speech that it didn't find to be objectionable? Just arbitrarily, for funsies?
(Throws dart at dictionary; it hits the word "tree." "Today we're going to delete all pro-tree posts.")
So any basis for determining content to be objectionable is “good faith” in terms of sec230? That is what you are hanging your hat on?
It is no wonder that the proponents of a broad reading of sec230 engage in so much misdirection - look, a first amendment squirrel!
Obviously not! In the comment to which you're responding, I just gave an example of a basis that wasn't! Random, arbitrary determinations.
Not a realistic one, of course. Because — as I said — why would a company want to moderate content that it didn't find objectionable?
Essentially, this all boils down to "Do you want the government telling private individuals what they can't say and also telling them what they *must* say?"
If your answer is "yes", please turn in your libertarian membership card.
Ok. So do you or don’t you want government telling the telephone company, a private corporation, what it must say over its wires?
Are you prepared to turn in your libertarian card?
Sigh. The phone company is a classic example of a "common carrier". Websites are not.
Mike Masnick (coiner of the phrase "Streisand Effect") does a nice job of explaining why it makes no sense for websites to be considered "common carriers":
https://www.techdirt.com/2022/02/25/why-it-makes-no-sense-to-call-websites-common-carriers/
"... social media does not meet any of the core components of a common carrier. It is hosting content perpetually, not merely transporting data from one point to another in a transient fashion. It is not a commodity service, but often highly differentiated in a world with many different competitors offering very differentiated services. It is not a natural monopoly, in which the high cost of infrastructure buildout would be inefficient for other entrants in the market. And, finally, even if, somehow, you ignored all of that, declaring a social media site a common carrier wouldn’t change that they are allowed to ban or otherwise moderate users who fail to abide by the terms of service for the site. "
I don't think the article above is a fair summary of Prof Candeub's article, making much of this rebuttal a strawman. Furthermore, Prof Somin continues to fail to distinguish social media companies from transportation or hotel companies - if those can be arbitrarily classified by the legislature as common carriers (and clearly they were), why can't these be similarly classified?
I don't know whether these are good laws or not. (I'm inclined to think not, on Freedom of Association grounds.) But these hyperbolic claims that they will destroy Free Speech do the debate no good.
Because transportation and hotel companies aren't in the speech business. The 1A isn't implicated by regulating them.
Since the law specifically does not address anything that the company says in its own name, the only free speech claim concerns which customers they allow to buy their services. Choice of customer is indeed a 1A issue, but under freedom of association, not speech. And in that regard, transportation and hotel companies are indistinguishable from social media companies.
But yes, the better analogy is probably to telephone and telegraph companies who also distribute the speech of their customers and are also subject to common-carrier restrictions.
Again, no, it doesn’t work that way. When Barnes & Noble decides to stock a book — or not — it is engaging in protected speech by so doing. That is not a freedom of association claim; it’s a free speech claim. When the Miami Herald chooses not to print a statement from a political candidate, it is engaging in protected speech by so doing. That is not a freedom of association claim; it’s a free speech claim. And when Twitter chooses not to distribute anti-vaccine tweets, it is engaging in protected speech by so doing. That is not a freedom of association claim; it’s a free speech claim.
1. In the Miami Herald case, forcing them to carry some item would have “exacted a penalty” and displaced other speech because a newspaper has limited physical real estate. That’s not true of internet communications networks. And the speech it would displace is of course, the newspaper’s own speech, as their own speech is the primary product that a newspaper produces, unlike an internet communication service which primarily transmits others’ speech.
2. Your logic would apply equally to saying that if telephone providers choose not to carry speech they dislike, or email providers, or mail services, that it’s a free speech claim.
3. Fundamentally, there is a contradiction in saying that if the government makes you carry/transmit/say something, then it is compelled speech for 1A purposes but not even your speech at all for defamation purposes.
4. Relating to #3, maybe this difficulty in saying something is compelled speech, when no reasonable observer would interpret or mistake the speech as genuinely coming from the compelled party, explains why, IIRC, the Miami Herald case seemed to touch on the issue of chilling other speech rather than compelling speech.
"Even if a newspaper would face no additional costs to comply with a compulsory access law and would not be forced to forgo publication of news or opinion by the inclusion of a reply, the Florida statute fails to clear the barriers of the First Amendment because of its intrusion into the function of editors." Miami Herald v. Tornillo, 418 U. S. 241, 258 (1974)
Next line: “A newspaper is more than a passive receptacle or conduit for news, comment, and advertising. The choice of material to go into a newspaper, and the decisions made as to limitations on the size and content of the paper, and treatment of public issues and public officials — whether fair or unfair — constitute the exercise of editorial control and judgment.”
So, quite different from communications networks, including internet, telephones, email service, mail service, and, at least as to the bulk of their function and use, social media networks. Those things are much more like passive receptacles or conduits – as your own view regarding exactly “whose speech” is in a social media post squarely supports.
“the Florida statute fails to clear the barriers of the First Amendment because of its intrusion into the function of editors.”
A few lines up, the opinion explains this defect further: “Faced with the penalties that would accrue to any newspaper that published news or commentary arguably within the reach of the right-of-access statute, editors might well conclude that the safe course is to avoid controversy. Therefore, under the operation of the Florida statute, political and electoral coverage would be blunted or reduced.”
So the point is, even if they are not forced to forgo other speech and even if there were not additional print costs, it still chills speech by intruding into their function as editors, namely, the function of choosing their own speech that they want to say/publish.
There's no "contradiction." A legislature can always provide more protection from liability than the 1A requires. That's what § 230 is.
It's like saying that there's a "contradiction" between regulations treating an SUV as a truck for gas mileage purposes but as a car for crash-safety purposes. (I made up that example; it's a hypothetical.) One can argue that the law shouldn't do that, but there's nothing that says it can't do that.