Supreme Court Seems Likely to Strike Down Florida and Texas Social Media Laws
The laws violate the First Amendment because they require social media sites to abjure most content moderation and to host speech they disapprove of.
Yesterday, the Supreme Court heard oral arguments in NetChoice v. Paxton and Moody v. NetChoice, cases challenging Florida and Texas state laws barring major social media firms from using most types of content moderation, thereby requiring them to host content they disapprove of. The oral arguments suggest a clear majority of the justices believe these laws violate the First Amendment rights of social media providers. I agree with the assessment of my Cato Institute colleague Thomas Berry, who said "It appears that a majority of the Court is likely to find that the laws violate the First Amendment, at least when they force traditional social media sites like Facebook and X to change their moderation practices and disseminate speech they want to exclude."
Justice Elena Kagan summarized the issues best when she noted, in the Florida argument, that if social media firms have "content-based restrictions" on what kinds of speech they wish to host (e.g., by keeping out what they consider "misinformation"… [or] "hate speech or bullying"), "why isn't that… a classic First Amendment violation for the state to come in and say, we're not allowing… you to enforce those sorts of restrictions even though… it's like an editorial judgment, you're excluding particular kinds of speech?"
Chief Justice John Roberts similarly emphasized that "[t]he First Amendment restricts what the government can do, and what the government is doing here is saying, you must do this, you must carry these people; you've got to explain if you don't…. [T]hat's not the First Amendment." Liberal Justice Sonia Sotomayor added that the two states' laws are "so broad that they stifle speech just on their face."
If the New York Times or Fox News refuse to publish articles I submit to them because they disapprove of my views or even just because they think my writings will offend their audience, they surely have a First Amendment right to do so. If I don't like Fox's editorial policies, I can submit my content somewhere else. The same reasoning applies to Twitter or Facebook.
The states argue big social media companies have a special status because they reach so many people. But the same is true of major traditional media firms. If the New York Times rejects an op ed I submit, and I end up publishing it in The Hill or the Boston Globe (such things have actually happened to me!), I am likely to reach a much smaller audience than if the piece was accepted by the Times.
As with NYT or Fox News, social media firms seek to create a curated forum that caters to the interests of their audience, and avoids unnecessarily annoying or offending them. Few users actually want a completely unmoderated social media environment, or one that accepts all content that isn't illegal. Sites with right-wing owners, such as Elon Musk's Twitter/X or Donald Trump's Truth Social, nonetheless have content-based restrictions in their terms of service.
Justices Samuel Alito and Clarence Thomas—the two justices most sympathetic to the states—repeatedly characterized social media content moderation as "censorship." Justice Brett Kavanaugh effectively responded to this trope:
When the government censors, when the government excludes speech from the public square, that is obviously a violation of the First Amendment. When a private individual or private entity makes decisions about what to include and what to exclude, that's protected generally editorial discretion, even though you could view the private entity's decision to exclude something as "private censorship."
I think that's exactly right. If Fox News or the New York Times reject my content because they don't like my views, that is not censorship, but the exercise of their own First Amendment rights. The same goes if Elon Musk bars me from posting on his site. And that's true even if Fox, NYT, or Musk object to my content for dubious reasons, or even downright stupid ones. Ditto if they treat right-wing speech more favorably than the left-wing kind, or vice versa.
I think it's clear there are at least five or six justices who accept the distinctions made by Roberts and Kavanaugh, and therefore are inclined to rule against Florida and Texas on that basis.
In the Florida case, several justices suggested they might not be able to uphold the lower-court ruling against the law, because that state's legislation is so broad that it may cover websites that aren't expressive in nature at all, such as Uber or Etsy. The social media firm plaintiffs brought a facial challenge to the law, which may require them to prove that the law is unconstitutional in all or nearly all of its applications. If the Court vacates the lower court decision on this basis, the case could be remanded, and the plaintiffs might have to amend their complaint to turn it into an "as applied" challenge focused on social media firms that exercise editorial discretion. Justice Sotomayor suggested they might remand the case, but also leave the preliminary injunction against the Florida law in place in the meantime.
Fortunately, these kinds of procedural issues are much less significant in the Texas case, where the law in question is more clearly focused on big social media firms. In oral argument, Texas Solicitor General Aaron Nielson conceded his state's law does not cover firms like Uber and Etsy.
Thus, the Supreme Court could potentially vacate and remand the Florida decision, but rule against Texas. The precedent set by the latter ruling would govern any future litigation in the Florida case, and challenges to similar laws that might be enacted by other states.
The justices also discussed the states' argument that they can bar content moderation because social media firms are "common carriers." I think most of the Court did not find that theory persuasive, and rightly so. I criticized the badly flawed common carrier theory in some detail here.
Finally, there was much discussion of whether the tech firm plaintiffs' arguments that they are exercising editorial discretion somehow undermine their exemption from liability for posting user content under Section 230 of the Communications Decency Act. To my mind, this issue isn't really before the Court. And in any case, there is no real contradiction between holding that the tech firms are engaging in First Amendment-protected speech when they moderate content, and also holding that such speech is exempt from certain types of liability under Section 230. But I am no Section 230 expert, and I will leave this issue to commentators with greater relevant expertise.
In sum, I am guardedly optimistic that free speech will prevail in these cases, though procedural issues might lead to a remand in the Florida litigation.
In previous posts, I have explained why the Texas law is a threat to freedom of speech, and argued that these laws violate the Takings Clause of the Fifth Amendment, as well as the Free Speech Clause of the First Amendment (the takings issue is not before the Supreme Court).
For those keeping score on matters of ideological and jurisprudential consistency, I refer you to the relevant part of my September 2023 post about these cases:
I consistently opposed the Texas and Florida laws both before and after Elon Musk acquired Twitter (now called X). I didn't much like the content moderation policies of the pre-Musk management, and I like Musk's policies even less. But they nonetheless both have a First Amendment right to decide which speech they wish to host, and which they don't….
I am also one of the relatively few people who simultaneously support the Fifth Circuit's recent decision to bar the White House and other federal officials from coercing social media firms to take down content they deem "misinformation" and oppose that same court's decision (with a different panel of judges) upholding the Texas social media law. The First Amendment bars government from both forcing social media firms to take down content the state disapproves of and forcing them to put up content the firms themselves object to.
I found it very puzzling that some justices had so much trouble with that (lots of people do), but Clement explained it very well: there would be a contradiction if Section 230 didn’t exist. But the entire point of Section 230 was to create the legal regime in which Internet companies are treated differently for liability purposes than they would’ve been treated at common law.
There was a bit of desperation coming from Gorsuch and Alito, feeling around for any sort of foothold. That’s what that 230 discussion felt like to me. They get noticeably upset when they’re not “winning.” Thomas at least knows how to take it in stride / doesn’t care.
And Alito’s obsession with “censorship,” which earned him the day’s most humiliating moment of audience giggling after the General’s takedown, left him all flustered. That was my favorite bit.
really? I thought that was Thomas with the “you kids these days and your words, back in my day censorship was censorship!” quip
No it was dumb Alito. Thomas only barely got comfortable asking questions recently, he’s not really attempting to score rhetorical points yet.
Trying to keep this short, but this comes right after she finally figures out how superficial his “censorship” question really is and finishes pointing out how the website designer in 303 Creative could be said to be “censoring” gay couples…
GENERAL PRELOGAR: You know, I think that the particular word you use doesn’t matter. What you have to look at is whether what’s being regulated by the government is something that’s expressive by a private party, and, here, we think you have that.
JUSTICE ALITO: Well, I mean, the particular word that you use matters only to the extent that some may want to resist the Orwellian temptation to recategorize offensive conduct in seemingly bland terms. But, anyway, thank you.
(Laughter.)
Randal, the laughter is at your naivete.
In oral argument there are no quotes. So you put ‘censoring’ in quotes to suggest that the Court had ruled about censoring gay couples in 303 Creative — and of course the only censorship — for the 3rd time — was SCOTUS censoring the anti-religious bad behavior of the Colorado judges!!!!
Are you gay, Randal? That is my bet.
I think the real complaint here isn’t a denial that this is true, but instead that this is over-stated.
Section 230 was absolutely intended to permit platforms to get away with at least SOME forms of moderation that were legally perilous under the common law regime, where the surest way to avoid liability was to not engage in moderation at all. It was Section 230 of the Communications Decency Act; allowing SOME censorship was the point! Just not the sort the platforms are engaged in today…
The problem is that catch-all, “or otherwise objectionable” got interpreted, not according to ejusdem generis, but as just licensing complete editorial control. So a law intended to permit taking down snuff films and pornography ended up creating a regime where the pornography remained, and selected political speech got taken down instead.
Now roll in threats to clobber 230 unless they started censoring “harassment”, which would crush their business model.
The Democratic candidates even had a discussion during a 2020 debate on how to best punch corporations to force them to do this, under penalty of 230 deletion, or something else, or both.
Of course the Republicans later responded with threats to delete it if the companies continued to obey that mandate. None of this is free speech.
You continue to be way overtuned as to what counts as a threat to Silicon Valley.
Regulation from Washington is a mortal threat to most large companies today, monopolies and near-monopolies not excepted.
Federal relations people know the lay of the legal land, and section 230 is not in danger.
Revoking Section 230 isn’t the only threat from Washington, as someone more perceptive than you might have gleaned from my mention of monopolies.
But Joe Biden wants to revoke Section 230 anyway: https://www.politico.com/news/2022/09/08/white-house-renews-call-to-remove-section-230-liability-shield-00055771
Yes, this seems a very serious political push by the President.
So, you’re saying, essentially, that he IS pushing it, but it doesn’t count because he’s not pushing it seriously?
He’s disguising an admission of fact as an attempt at sarcasm.
No, actually, I was being sarcastic.
We all know what a political push by a President, especially this President with his deep connections to Congress, looks like.
This is not that.
You’re savvy enough to see that, you just don’t wanna.
It seems like one of the very few things he can remember for 30 minutes, much less 30 months: https://www.theverge.com/2020/1/17/21070403/joe-biden-president-election-section-230-communications-decency-act-revoke
During the election, now?
Your timeline and source of threats makes no sense.
Your ever-shifting excuses and conditions make no sense.
I’m telling you over and over again you’re wrong about the threat to 230, and tech companies agree with me. And none of the sources you bring establish either an actual threat to 230, nor that tech companies are being blackmailed by said threat.
It’s kinda funny you think my argument is shifting or I’m making excuses; it’s not changing, you just keep coming up with new stories that come nowhere near establishing an actual likelihood 230 is going down.
Joe was bottom 10 in his law class, and we are talking Syracuse!!!
As lazy and dumb a man as I’ve seen in high office in my lifetime.
And as Krayt observed below, these companies regularly knuckle under to foreign demands for censorship. You are grossly misrepresenting the threshold of threat that is required for these companies to censor their users. There’s no magic change in perspective that makes them more resistant to demands from US politicians they like than from third world honchos.
No, Krayt and you see companies doing stuff you don’t like and you blame the government because the free market would never hurt you like that.
You think Google obeying Indian demands for censorship is the free market?
https://www.theguardian.com/world/2023/sep/25/a-tool-of-political-control-how-india-became-the-world-leader-in-internet-blackouts
You’re becoming sadder and sadder in your lies.
Do you think my thesis is that tech companies *never* give in to the demands of governments?
Because that’s not anything like what I said – the point is that there is no evidence the US government is blackmailing tech companies with threats against Section 230.
No evidence, except all the threats by the President to greatly limit Section 230, the attempts to do so in Congress, advocacy on how to do so via the courts, the DOJ trying to get the Supreme Court to limit the law…
https://www.nytimes.com/2023/02/20/opinion/facebook-section-230-supreme-court.html
https://thehill.com/policy/technology/3767300-doj-warns-supreme-court-against-overly-broad-section-230-reading-in-google-case/
We’ve seen the internal workings of tech companies, and they (correctly) don’t see the threat.
Sorry, the banning of Nazis is a consumer-facing act, not a secret government cabal.
There absolutely is a magic change: they knuckle under to foreign demands — in places where they have operations — because those foreign demands have legal force.
“The problem is that catch-all, “or otherwise objectionable” got interpreted, not according to ejusdem generis, but as just licensing complete editorial control. So a law intended to permit taking down snuff films and pornography ended up creating a regime where the pornography remained, and selected political speech got taken down instead.”
No.
Section 230 arose because of a particular case (Stratton Oakmont) that allowed Prodigy to be treated as a publisher of third-party information (defamation related to financial information) because it had a board that had civility guidelines as well as keeping it “on topic” that were enforced and also removed offensive language, like curse words.
Again, other than that … doing great with the analysis.
I think the issue is “publishing” didn’t keep up with the times and what was actually going on. The virtual forums were acting more akin to normal (real life) forums, because there was no pre-publishing editorial control anymore.
But like a public forum in real life, there could be post-“publishing” editorial control. If you’ve got a town square, and someone starts showing a porno in it, the cops can come and shut it down. Just like in a virtual forum, if someone put up a porno, Facebook could come and shut it down. But doing so before it went up was much more difficult. (Yes, with technology it’s not impossible, but there are always those borderline cases.)
It was that difference in the lack of pre-publishing editorial control that was major.
Not a comprehensive understanding, but a valid insight as far as it goes.
As a fundamental issue, most people completely misunderstand the history and the purpose of Section 230.
First, if you see anyone making an analogy to a “town square” or to any government function, they are doing it wrong. Section 230 should not, and cannot, be analogized to how the government operates- because it has nothing to do with that. Instead, it is a very specific form of liability protection for private parties.
Next, it is not some kind of “get out of jail free” card. Instead, it is stating that when someone commits an act that otherwise would allow liability (usually defamation, but other acts as well), then the party that did the act is liable, not the platform that hosted the speech. Put more simply: if someone defames you on Facebook, you can sue the person that defamed you, but not Facebook.
Finally, it allows this protection even if the platform engages in moderation. In other words, it was recognized early on that the internet was both like publishers (in the sense that people were publishing their own comments on these various websites and platforms), but also unlike publishers (in the sense publishers had a very limited space and therefore could pick and choose the content to a much more granular level). The internet (as well as on-line services) was, even then, growing at an incredible rate. And it was understood that if platforms could not engage in any moderation at all, they would simply shut down these places that allow the exchange of information. On the other hand, given the volume of speech, the idea that the platforms would be liable for any and all speech if they moderated was also intolerable and would throttle the industry in its infancy.
That’s why we have Section 230. It simply carves out a protection so that liability will only attach to the actor, not the platform.
Now, maybe there are tweaks or changes people want to make to this. Personally, I think that this would be a terrible idea, but it’s just legislation. If people want it to change, then go through the legislative process and change it. Going to the courts and trying to get them to change it is a terrible idea for a whole lot of reasons- not just the reliance value, but also because it is fairly nonsensical; after all, if you know anything about how courts are supposed to work, you know that they are actually supposed to maintain the same reading of statutes because, unlike constitutional readings, the LEGISLATURE CAN CHANGE THEM at any time.
Anyway, like most things, I recommend people actually do the work and try to understand the issues rather than default to whatever preferred partisan outcome they’ve been led to believe. Your final opinion might not change, but at least you’ll be better informed when making arguments.
That all seems quite sensible and I can’t fathom why people are still wrestling with town square/publisher/common carrier stuff. It overlaps, sure, but when it overlaps all three, you really are talking about something else.
The thing is, many people don’t have the first fucking clue what § 230 says. You will see, all over Internet forums/social media, people confidently stating that § 230 says such-and-such when it actually either says the opposite or just doesn’t say anything about that topic at all.
(Yeah, I know: “don’t have the first fucking clue” is basically the motto of social media, and is not in any way limited to § 230. Roughly 4 trillion people have posted that Trump needs to post a bond for $450 million to appeal, which is just totally wrong.)
My favorite, though, is when people say, “I think § 230 is okay; we just need to amend it slightly to do…” and then they say something which actually involves entirely repealing it. “Platforms should only get § 230 protections if they don’t do the thing that § 230 protects.”
“And it was understood that if platforms could not engage in any moderation at all, they would simply shut down these places that allow the exchange of information.”
I don’t think anything of the sort was widely ‘understood’.
It was Section 230 of the Communications Decency Act: The authors weren’t concerned about forums being shut down, they were concerned about them NOT being censored. (In the manner the authors wanted them to be…)
Section 230 was to take away the, “But if we moderate we’ll become liable!” excuse for NOT moderating. It was a different era, the platforms typically DIDN’T want to have to moderate!
Again, you show how completely you misunderstand history.
Laws have more than one component. The CDA was passed for a lot of reasons, and had more than one specific issue in mind. The particular provision was passed as a result of a court decision- not because of DECENCY, but because of Stratton Oakmont.
The idea that you can just simply make up stuff, which you have done multiple times, because you’ve suddenly decided you want to pontificate about something isn’t a good look. Seriously, if you want to have an informed opinion, you have to start by becoming informed.
Are you new to the Internet? Are you new to Brett Bellmore?
‘It was a different era, the platforms typically DIDN’T want to have to moderate!’
Of course they did. For one thing they were growing communities and wanted to keep out porn, scams, nazis and edgelords. *Young people* used Facebook back in the day, and they had some sense of responsibility for the online safety of their users.
It’s like saying that Germany didn’t want to have to invade Poland in 1939. It’s not just wrong; it’s the exact opposite of reality.
That’s why we have Section 230. It simply carves out a protection so that liability will only attach to the actor, not the platform.
Note, platforms are publishers. Every critical practice they use to conduct their operations is characteristic of publishing businesses—and less so of other business models.
With the remark quoted above, you exemplify what you condemn. In almost every case of liability for libel, a publisher is an actor, and typically an actor more effectual in inflicting damage than the author/contributor.
The author or contributor usually did nothing to pre-assemble the audience. The publisher did that, very deliberately, with intent to benefit the publisher and to make money by doing it.
The author or contributor would typically be powerless to command a broad audience unassisted, and thus enormously multiply the damage inflicted on an innocent third party by a libel. It is the publisher which does that, and for its own benefit. It is to enable monetization of audience attention by sales to would-be advertisers that the publisher organizes and bears the expense to distribute broadly the author/contributor’s often uncompensated submission.
The publisher typically is the party to the publication better placed to evaluate with at least rudimentary expertise whether a statement is libel. It is senseless to suppose that, compared to publishers, members of a cross section of the public are comparably skilled, or comparably interested, and least of all that they remain mindful of any material interest or consequence for libel. Publishers all but universally keep consequences of libel in mind—unless special government dispensation frees them from that responsibility.
It is typically the publisher which has more to lose than the contributor. And not just more to lose in terms of libel liability, but in all other terms relating to the credibility and value of the publisher’s offerings to the public—not to mention the ongoing value of the publisher’s business.
What happens in the typical case of an author/contributor who is judgment proof? If he commits a libel, he may suffer the inconvenience to change his pseudonym. His damaged victim will be out of luck trying to find a lawyer to pursue a case to redress actual damages, no matter how severe. It is of course savagely ironic that the publisher who did far more to inflict the damage, and to extend its scope geographically, got to do so under protection of Section 230.
What Section 230 actually did was to enact into law an attractive, well-meaning, and politically popular fraud. It amounts to legal insistence that liability for libel still exists, but mostly without anyone positioned to inflict libel damages, or subsequently to pay for them. Section 230 pretends continuing relevance under law for the concept of libel, but at the same time disappears as if by magic the only parties likely to be held accountable as libelers.
There are exceptions, but publishers—not author/contributors—remain almost always the principal actors who inflict damage in libel cases. That was so when I myself ran publishing businesses decades ago. It remains so now.
Shorter Lathrop (it’s impossible not to be!): “Websites are publishers. Except they’re totally different for all these reasons, but that just shows why they should be treated like publishers.”
Publishers do not pre-assemble audiences. They publish stuff. The audiences assemble themselves (or not). And of course assembling an audience isn’t a tortious act, so it’s irrelevant.
But of course that’s not the case in the context of UGC on the Internet, which is why platforms should not be treated as publishers.
Thanks for reading his post so we didn’t have to. As soon as I saw “publisher” and “Lathrop” my eyes glazed over.
Jmaie, good news!
You see better with your eyes glazed over, than you can while you keep them shut tight.
But even if all you intend is to use Nieporent as a seeing eye dog, you should not rely on him. See below.
Publishers do not pre-assemble audiences.
Nieporent offers up yet another ipse dixit. How would Nieporent even know? Does that insight come from Nieporent’s deep well of institutional publishing experience?
Here is a hint: try to sell advertising without a pre-existing audience already provably established, and see what happens. Alternatively, try to bootstrap a new publication without ability to sell advertising sufficient to defray production costs; see how much capital you have to shovel in before you break even, if you ever do.
Or, just look around. Make it a point to notice how much advertising gets done to inform the advertising community that a new publication is in the works—and how much more direct-to-the-public advertising it takes afterward, to convince the advertising community you really do intend to get an audience lined up, before an ad sales effort begins.
Nieporent knows nothing about any of that, so he says it doesn’t happen. He does that to convenience a point he hopes to make on the basis of . . . what? Nieporent, do you have any standard at all for your comments, other than, “Make it sound plausible?” Do you ever ask yourself, “What is driving me to publish these assertions I know so little about?”
Also it is peculiar—even with Section 230 in force—especially with a lawyer doing it—to insist that Joe Keyboard, the UGC guy on Facebook, is not less informed about libel than the legal staff at Facebook is. Probably, Nieporent did not even mean to say that, but only wanted to sound as if he had said it, in case sounding that way would convince anyone.
Nieporent, you are illustrating for me my point, that with Section 230 Congress enacted into law an attractive, well-meaning, and politically popular fraud. You were attracted, you continue well-meaning, your advocacy is politically popular, but alas . . .
Ignorance about practicalities necessary to keep an institutional press business going accounts for the blunder by Congress. Ignorance among internet fans—like Nieporent—accounts for why it has so far proved futile to focus enough public attention on what it will take to fix the blunder.
At least by now a small increment of progress has accomplished enough to get almost everyone to notice that the present internet publishing system dissatisfies too many people. Perhaps the next increment will be to convince those old enough to ask themselves whether the pre-Section 230 internet publishing system made them feel more hopeful, or less hopeful, than they feel now.
If that could happen, substantive discussion about a better future for internet publishing could begin in earnest.
So by “pre-assembling an audience” you just mean advertising a product, like Campbell’s does with soup?
How would the legal staff at Facebook know whether a particular statement by a random Facebook user is true or false?
So by “pre-assembling an audience” you just mean advertising a product, like Campbell’s does with soup?
Nope. I mean having the audience already in place, with a demonstrable record of reliable attention to your publication—a record which can be ascertained by a third party expert. Think Nielsen Ratings, extended to all media.
How would the legal staff at Facebook know whether a particular statement by a random Facebook user is true or false?
Not the first question a publisher wants answered. The first question is whether a statement is potentially libelous if false. If it comes from a source of unknown reliability, an affirmative answer to that question has a high probability to kill the would-be contribution. But the response can vary depending on many, many factors, including especially the results of attempted confirmation.
I assume that at some point there will be an AI application reliable enough to answer the first question to a standard high enough to make it legally trustworthy. That is not now, of course.
Note that only a tiny percentage of contributions from Joe Keyboard types will be both provably true or false, and potentially libelous if false. The vast preponderance will instead be either opinions, thus not subject to defamation liability at all, anodyne, or indecipherable gibberish.
An online publisher will typically have a commercial bias toward publishing as many contributions as possible. Claims are nonsense that libel liability for online publishers would result in almost every comment getting turned away. There would be no need to do it, and it would be business folly for most publishers to operate that way.
Thank you for the substantive engagement. So much better. You will discover that for most of what I write, I anticipate questions, and have substantive answers in mind. Your objections to the length of my posts are partly owing to my habit to preclude obvious questions to begin with, or at least to try. Other times, I admit I try to explain complicated stuff I would have done better to avoid. I have learned only stubbornly how unwise it is to invest effort to explain at length to an audience predisposed not to like the subject.
Sigh. As I’ve explained many times, “otherwise” is not a word of similarity, so ejusdem generis doesn’t work that way. “Otherwise” means “in any other way,” not “in any similar way.”
(Also, your perceived irony isn’t even true; virtually any website aimed at the mass market that contains UGC does remove pornography. There’s an infinite amount of porn on the Internet, but the point of § 230 wasn’t to eliminate it, but to allow specific sites to do so.)
TX & FL can ban it from all publicly-owned networks and ban its use as a means of governmental communication.
I confess that this issue still irks me. The whole argument justifying Section 230 was that social media companies were not editorializing, but now the script has flipped so as to justify content moderation policies that sound in censorship of viewpoints.
That was not in fact “the whole argument justifying Section 230.” It was not half of the argument justifying Section 230. It was not any part of the argument justifying Section 230.
People lied to you. That’s not what 230 is for.
People lied to you. That’s not what 230 is for.
What a hanging curveball that is. But I will take the strike, lest I become unpopular with the pitchers.
Ironically, I found the attorney for Texas to be the most effective advocate despite having the least to work with. Very folksy and on-brand for Texas. He’s gonna lose of course, but I liked him.
(Other than Prelogar of course. She’s always gonna stand out.)
We must acknowledge Eugene as the day’s big winner! The “function” framework was pretty widely adopted, with personal attribution to him at least a couple times.
Poor Josh.
Has Prof. Volokh expressed a preference regarding a decision in this context?
I sensed he has been torn between partisan preference and a desire to avoid being known as a principle-deprived hack.
Biggest surprise: Jackson. Although that’s so frequently true, it shouldn’t really be a surprise anymore.
Second biggest surprise, and a related one: geofencing. A shockingly extensive discussion of the feasibility of Facebook et al. simply abandoning the Texas market (and Florida by extension). This seemed to be aimed at making the justices feel better about scrapping the preliminary injunction while the (likely amended to be as-applied) challenges play out.
Technical, sure. But that virtue signal is strained when set against the half-assed excuse of operating in totalitarian regimes that they just “obey local laws”. Give up on that much of the population, and for what?
You corporations wanna grow balls? Tell Congress to FOAD when they mention changing 230 because you want your freedom to publish, or not, as you see fit, and not as they see fit.
The whole geofencing thing is idiotic. It’s a minor adjunct: growing balls only where government is permitting them to grow balls.
I think most local regulations require more moderation. No clowns in Brazil, or whatever. Ok fine.
To the extent Texas and Florida are requiring less moderation, that’s harder, if you think about it.
Tell Congress to FOAD when they mention changing 230 because you want your freedom to publish, or not, as you see fit, and not as they see fit.
Internet utopianism, succinctly put.
You demand publishing power greater than anyone in the world has ever enjoyed. It will not happen, because no one has power to deliver what you demand. Before failing, any attempt to deliver what you demand would dismantle the means necessary to publish at all.
More publishing power, and more expressive freedom, than folks enjoy now is possible, and potentially within practical reach. To get those will require discussing alternatives to Section 230.
Allowing the ISP oligopoly to censor whatever they want will destroy freedom on the internet but not allowing the platform oligopoly to censor whatever they want will also destroy freedom on the internet!
They lean on the banking system already to cut off unsavory but legal operations. Why not the power companies, too?
What’s with all the weasel ways to contro…oh, control. I get it.
I wouldn’t mind as much if they were at least consistent in either direction. Regulation up the wazoo or full Blade Runner free markets. It’s this ‘whatever the progs coincidentally happen to like in every particular instance’ policy that I object to.
“I think that’s exactly right. If Fox News or the New York Times reject my content because they don’t like my views, that is not censorship, but the exercise of their own First Amendment rights. The same goes if Elon Musk bars me from posting on his site.”
And the same goes if the telephone company starts curating your phone calls! [/sarc]
That’s really the argument, isn’t it? That Facebook or X aren’t a newspaper, aren’t a publisher at all, but instead are something like a common carrier. They certainly were acting in that manner during their growth phases. Perhaps because pre-Section 230, that was the safest way to avoid liability for content? Remember, Section 230 wasn’t intended to secure freedom of speech, it was expressly intended to contract it by allowing platforms to get away with at least some forms of moderation that were legally perilous.
I’m not sure it’s a persuasive argument, or rather, I’m not sure it doesn’t prove too much, but that really IS the argument.
Also, “I didn’t much like the content moderation policies of the pre-Musk management, and I like Musk’s policies even less.”
So, don’t leave us hanging: What do you like even less about Musk’s reduced ideological censorship?
The question is actually closer to something like Pruneyard.
It involves a private operation (a mall) that is acting as a public forum, and seeking to restrict certain types of speech on its grounds.
(https://en.wikipedia.org/wiki/Pruneyard_Shopping_Center_v._Robins)
I think that nicely fits what Facebook is. A private operation that is acting as a public forum.
Sure, but the role of the state (TX/FL/CA) is significantly different here: Pruneyard Shopping Center wanted to use state law to exclude people, and CA wanted to exclude them as a relatively narrow case. Internet goliaths aren’t trying to use state power in an analogous way here.
They are excluding “certain types of speech” in both cases.
For example, in a follow-up case to Pruneyard:
“In 2007, the Supreme Court of California confronted the Pruneyard decision once more, in the context of a complex labor dispute involving San Diego’s Fashion Valley Mall and the San Diego Union-Tribune. On December 24, 2007, a 4–3 majority of a sharply divided court once again refused to overrule Pruneyard, and instead, ruled that under the California Constitution, a union’s right of free speech in a shopping center includes the right to hand out leaflets urging patrons to boycott one of the shopping center’s tenants”
The union members aren’t being excluded. It’s the Union’s free speech that is attempted to be excluded.
Yes — but my point was that the shopping center was attempting to use state power (trespass law) to exclude those speakers, the state wanted to disallow that kind of application of its trespass laws, and the state won at the Supreme Court. I agree with your desired outcome, I just don’t think the Pruneyard precedent gets us there. (The students in Pruneyard prevailed before the California Supreme Court on a state constitution freedom-of-speech claim, not a First Amendment claim, and other states with similar clauses have chosen to go the other way. It’s not a result that is required under the federal Constitution.)
Using trespass law to evict a speaker is simply akin to blocking (or shadow banning) a poster.
If the union members above were not actually trying to put out those pamphlets, there would have been no attempted ejection.
But I agree with you, the question in regard to the California First Amendment (versus the Federal First Amendment) is a real one. It’s not “required” under the Federal First Amendment. But if Texas or Florida have freedom of speech provisions similar to California’s, the same logic of Pruneyard should apply.
Facebook, Meta, etc. are not using anything like state trespass laws to bar speakers. They’re using the inherent functions of their own platforms. That’s where I see the key difference: they’re not trying to use FL or TX laws to block speakers, so FL and TX cannot make a decision to limit the application of their laws that is really analogous to what CA did.
If you absolutely needed a law, you could use the computer trespass laws as an analogy.
The platforms don’t need a law to censor users. Your theory needs them to need a state law.
It needs the laws to enforce the censorship, ultimately. Just like trespass laws do.
No, that’s completely wrong. If I want to remove someone from my physical property because I claim he’s a trespasser, I have to call the police and get them to haul him away. If Facebook or Twitter want to remove a user, they just go into their user database and delete the account. (Or make it inactive or whatever.)
That’s only partially correct.
If you want to remove someone from your property, you can simply tell them to leave. It’s when they DON’T leave and stay illegally that you need to call the police. (Or you can remove them from the property via more forceful methods, potentially.) Alternatively, if they enter your property illegally (and then don’t leave), you need to call the police.
Likewise, Facebook tells the individual to leave. The individual may choose not to, and stay via illegal means. Then criminal (computer) trespass laws become necessary.
Michael P,
That’s an interesting, coherent argument distinguishing Pruneyard.
I agree with your result and much of your intuition that the virtual space is different from the physical space in Pruneyard, though I don’t sign on entirely.
PruneYard is a mall. A mall isn’t expressive. That was the problem PruneYard had. (And really, the conservatives are skeptical of PruneYard, which tells you how hypocritical they’re being.)
Facebook is expressive. You go to Facebook for content, not for panties.
The better analogy would be like an SCA event. It’s open to the public, but you have to be in character… and it has to be the right sort of character. They can kick you out if you come as like Spiderman or something. And definitely if you come as an anti-abortion protester or whatever was going on in PruneYard.
The parade in Hurley is a better match than the mall in PruneYard for this exact reason.
Ding ding ding.
Moreover, the mall owner in Pruneyard never argued that it objected to the leafletters’ message itself. (The mall was making more of a 5th amendment takings argument than a 1st amendment one, though it did raise the 1A also.)
Don’t read Pruneyard in 2024 terms.
Back in 1980, malls were where people went to see and be seen.
Concerts were held there: https://www.youtube.com/watch?v=w6Q3mHyzn78
And remember Tonya Harding? Her skating rink was at a mall.
Okay? Do you have an actual point that you’re trying to make? Or are you just proud of yourself for knowing what a mall is?
I think he’s proud of knowing who Tonya Harding is.
“Expressive” has never been a general legal concept for what constitutes a public forum, even a privately owned public forum.
As per Marsh v Alabama: "The more an owner, for his advantage, opens up his property for use by the public in general, the more do his rights become circumscribed by the statutory and constitutional rights of those who use it."
Indeed, the ultimate question is whether "private property that was open to the public in the same manner as public streets or parks could constitute a public forum for free expression."
That was further expanded to Train stations, as “a railway station is like a public street or park. Noise and commotion are characteristic of the normal operation of a railway station. The railroads seek neither privacy within nor exclusive possession of their station. They therefore cannot invoke the law of trespass against petitioners to protect those interests”
In this particular case, Facebook is holding itself open to the public as a virtual forum, in exactly the same way that a park is held open to the public. Anyone can come in. Anyone can express an opinion. There is little in the way of ticketing, or categoric exemptions.
That in itself distinguishes it quite strongly from the SCA example. The SCA example is quite definitive on the exemptions and classifications necessary for an individual to participate in its limited events (which are typically time limited, unlike an open park or a virtual forum). There are strictly enforced provisions on wear and so on.
By contrast, Facebook is more like a public park. Anyone can come in. Anyone can comment. There are not categorical exemptions, no time limitations on the events, no fees, nothing.
The concept that a forum being "expressive" would LIMIT it from freedom of speech is paradoxical.
Facebook is holding itself open to the public as a virtual forum, in exactly the same way that a park is held open to the public.
This is just dumb. You don’t go to the park — or mall — or train station — for content. To the extent content is foisted upon you, you probably resent it.
A better analogy for Facebook would be an open mic night at a bar. Open to all, but content-based. And the bar owners sure as shit can curate that content. While each individual’s routine is that person’s speech, the overall set and experience is the venue’s speech product.
Many people do go to parks and such for such open discussions.
But perhaps you’ll find a different example more appropriate. There was a debate about President Trump’s Twitter account, and the comments section there. The courts declared the account was a public forum. A “virtual” public forum, with the same legal issues. (For the record, that was the district court, then a unanimous second circuit court ruling. It was declared moot by the SCOTUS, as Trump left office before a final ruling).
If an individual’s social media account can be a public forum, surely the larger platform can be one as well.
Wow, you are reaching for the thin spaghetti.
When Trump becomes the full owner of Truth Social and wins the presidency and starts using it for official announcements and as the place for citizens to petition the government for the redress of grievances, then we can talk about whether a social media platform can be a public forum.
This was already decided in two different courts. This wasn’t about Truth Social, but Twitter.
I know. And the case turned on the account being the president’s account and him using it for official purposes.
So for a whole social media platform to be a public forum, it would have to be the president’s social media platform and he’d have to be using it for official purposes.
Not “an individual’s” social media account. A government official’s social media account. “Public forum” is a term that applies only to government-created spaces.
“Public forum” is a term that applies only to government-created spaces.
As Pruneyard and Marsh v. Alabama demonstrated, this clearly isn’t true. You can have privately owned, public forums. This is simple caselaw.
Neither Pruneyard nor Marsh used the term “public forum.” (Well, the words do appear in Pruneyard, but only in a concurring opinion denying that the decision turned malls into public forums.) “Public forum” is a term of art in first amendment law.
Exactly this.
“That’s really the argument, isn’t it? That Facebook or X aren’t a newspaper, aren’t a publisher at all, but instead are something like a common carrier. They certainly were acting in that manner during their growth phases. Perhaps because pre-Section 230, that was the safest way to avoid liability for content?”
So … just to point out the obvious. Section 230 was passed in 1996, in reaction to a court decision.
Facebook was founded in 2004.
Twitter was founded in 2006.
Both companies grew in reliance on Section 230.
In fact, the entire history of the modern, consumer-facing internet that we know … from Web 1.0 onward, was after Section 230.
Other than that, as usual, batting 1.000.
(It is truly amazing how many people have such strong opinions on the law and the history of Section 230, yet seem to know nothing about either the law, or the history, of Section 230. Jus’ sayin’.)
Just to point out the obvious, they nonetheless didn’t begin aggressive moderation until they had achieved substantial market power. After the communications decency act passed, there were actually complaints that they weren’t taking advantage of the new law!
So, it might be more accurate to say they grew ignoring it, rather than relying on it. To the extent they were relying on it, it was just that codification of the established common law:
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
That’s …. Absolutely not true and completely ahistorical.
You are welcome to your own opinions, but you can’t base them on facts made up out of whole cloth.
Facebook has ALWAYS moderated. Always.
Seriously, take the L. If you’re in a hole, stop digging.
Did you perhaps read Brett’s post too quickly to process that he said “aggressive moderation” didn’t start right away, or are you just ignoring it because it would interrupt your… aggressive series of posts?
Brett vs Brian, who’s the more retarded?
After the communications decency act passed, there were actually complaints that [Facebook / Twitter] weren’t taking advantage of the new law!
The CDA passed eight years before Facebook was even founded, and 10 years before Twitter. Which you should’ve known if you’re purporting to have an opinion worth expressing. And which you especially should have known if it’s called out in the comment you’re replying to!
To the extent they were relying on it, it was just that codification of the established common law:
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
What does this even mean? I can’t make sense of it. Are you under the impression that interactive computer services are common carriers at common law? That’s the best reading I can come up with but still insane.
Did you perhaps read Brett’s post too quickly to process that he said “aggressive moderation”...
[Emphasis retard’s.] Oh, of course, now it all makes sense. The long lost second page of the Magna Carta! Ye olde interactive computer services shall be carriers common whilst engaging in moderation that fall not aggressively upon the good people, though if aggressive it be, scant privileges nor immunities regarding communications decency shall survive, even so should up to two hundred and thirty immunities crumble at once in this manner.
Great take down. If this doesn’t humble them both for just making up facts to support their pre-existing opinions, nothing will. But I’m betting nothing will.
But this is really, really embarrassing for them both.
If anything Facebook started moderating LESS, particularly about the time all that Russian disinfo on behalf of Trump started flooding in. They’re *supposed* to be able to spot and act on bots and scams.
‘reduced ideological censorship?’
Mostly it’s led to the site being flooded with bots and Nazis. Pre-Musk moderation gave us fewer Nazis and only occasional bots and scams, and though at the time people complained bitterly about them, oh boy, they had no idea. Oh, also Musk bans, blocks and silences people all the time – journalists, critics, whatever. He tried to silence Navalny’s wife until everyone started shouting about it, just so we know where he’s coming from, ideologically. So your and Musk’s version of reduced ideological censorship involves way more censorship than before, but with extra bots, scams and nazis.
Conservatives are so laser-focused on their own hugely inflated sense of victimhood while not giving a shit about larger pictures, probably because most of their *valid* complaints aren’t actually a result of partisan targeting but systemic failures that affect everybody.
Being flooded with what YOU would call ‘NAZIS’ IS “reduced ideological censorship.” You just think half the population are ‘NAZIs’, because they don’t agree with you. But even if they WERE NAZIs, it would still be reduced censorship: Censorship doesn’t stop being censorship just because you approve of it.
No. Censorship stops being censorship when it’s not the government doing it.
If I kick a Nazi out of my dinner party, I’m not censoring them. I’m curating my dinner party by not having Nazis.
On the other hand, when the government keeps the Nazi from talking about how awesome being a Nazi is, they are censoring him.
If the Nazi isn’t allowed at one dinner party, then he should go to a dinner party where Nazis can talk about their Nazi stuff.
Clarence Thomas, puzzlingly, could not figure that out. “I demand that you call this censorship so I can say that it’s bad” was literally his position yesterday.
Clarence Thomas never said the word “censorship” yesterday.
You’re right; I unfairly maligned him from memory. It was Alito who gave us that idiotic line of questioning.
Nieporent, whoa, wo, wo, wo . . . ! Are you backing off now from your insistence that private editing prior to publication is censorship?
No, because I’ve never said that. What I’ve said is that when that “private editing” is done at the point of a gun held by the government, it is not “private editing” at all.
It’s I assume uncontroversial that if an FBI swat team showed up at NYT headquarters, pointed guns at the EIC, and said, “Don’t print this story about Biden or we’ll shoot you/haul you off to jail/seize the paper’s assets,” and the EIC immediately logged in to his computer and took the story out of tomorrow’s paper, it would be censorship, not mere “private editing.”
It’s no less the case that if the state of Alabama said to the NYT, “Don’t print a story saying that the police chief is violating the rights of civil rights protesters or we’ll fine you $500,000,” and the NYT said, “Okay,” and didn’t publish the story, that this would be censorship, not mere “private editing.”
And telling Twitter, “If you permit any of your users¹ to say that John Eastman is an insurrectionist who betrayed the country, then you will have to pay hundreds of thousands of dollars,” and Twitter prevents anyone from saying that to avoid this fine, would also be censorship, not mere “private editing.”
¹And of course by users I mean the normal human definition that everyone uses — people who post on the site — not the Lathropian definition of advertisers.
If private parties in the aggregate have enough power to affect one’s ability to effectively speak, it can be argued they are censoring and the government can lawfully stop that censorship. However, if the private party impacted is expressing their own viewpoint in that censorship, the government is censoring the censor. Things get messy when all the private parties gang up on one message.
“If private parties in the aggregate have enough power to affect one’s ability to effectively speak”
So, think back to the day when there was one newspaper in town. Now, that newspaper didn’t publish your letter to the editor. They had the power to affect your ability to speak … in a way. But that wasn’t censorship.
Same here. Yes, many social media platforms allow people to amplify their voices. But that doesn’t mean you still can’t speak. Just because you want to leverage a private publisher’s reach, doesn’t mean you can’t speak, or that they are censoring you.
It really isn’t that hard.
(Now, I will leave aside issues about whether or not the ability to completely remove someone entirely from the internet would be different, but that’s not the discussion we are having.)
I lean towards the side of the social media companies, but I nonetheless appreciate there is some censorship going on when you have no ability to amplify your message when private parties gang up on you.
So yes, there is censorship in your one-newspaper town, particularly when it is a large city and there was no alternative way for you to reach that large an audience. But certainly, the government cannot censor the censor.
Better yet, I’m with General Prelogar:
I mean, at a certain point I think that words matter.
When a parent tells his child not to curse, are they censoring him?
If you say yes, then great.
Me? I reserve the term for actual government (state) action. Because otherwise, we lose a useful distinction between state action, and the normal rough and tumble that exists between private parties.
Given that the traditional meaning is associated with the term for the state actor who would suppress politically charged speech, I prefer to keep that distinction.
Also, reach. Private editing in a one-newspaper town may inconvenience distribution of particular content. Private editing at the newspaper is powerless to prevent publication of particular content, if advocates promoting the content remain active and determined. Government, on the other hand, can in principle prohibit the content not just in that town, but nationwide. Very different animals.
A related consideration would point toward the problem of giantism among social media platforms, and the effect of that to over-empower private actors to control content. Advocacy in contrast would sensibly demand government policy to promote profusion and diversity among a myriad of smaller private publishers, so none enjoyed excessive power to control content, and would-be contributors frustrated by one publisher would readily turn to others more congenial.
However, there does seem to be a stubborn tendency among internet-based factions to gravitate toward the big quasi-monopolistic models. They do that apparently in the hope that government can be induced to order big platforms to cooperate with this or that set of factioneers.
Every employer potentially has the ability to wield power/influence over every employee’s speech. And that could be censorship. Just for one example. That doesn’t mean it is or should be illegal, of course.
No, there is all kinds of private censorship. It’s different than government censorship of course, but it’s still censorship. Censorship isn’t a legal term either.
Just because you have nazi-blindness doesn't mean anyone else does. Praising Hitler, anti-semitic memes, the Jews flooding the US with brown people to replace the whites, it's all there and so much more. Musk even occasionally retweets or comments approvingly.
'But even if they WERE NAZIs, it would still be censorship: Censorship doesn't stop being censorship just because you approve of it.'
Nobody wants fucking nazis hanging around their online community. Fucking nobody. If you own a social media site and you let nazis in, it will go to shit. As per the example of X. If you’re arguing for letting nazis in because you disapprove of censorship, you are just arguing in favour of nazis, not against censorship. Banning nazis is no more censorship than banning scams and bots. Only people who are in favour of nazis want nazis.
Brett, it IS possible for a telco to censor your speech.
They were never acting that way. They always always always held themselves out as creating a curated experience. There are plenty of services, like (to use something that came up yesterday) gmail, or instant messaging, that hold themselves out as private communications tools. Or the phone company, to pick up on your post. Mere conduits, to use a term that was bandied about yesterday. But not the social media platforms that people care about. They serve a different purpose and work in a different way.
Well, no, because none of the social media companies existed pre-Section 230. Not even MySpace.
That’s a tendentious way of describing it, but it is correct that Section 230 was not about “freedom of speech” per se; it was about promoting the growth of the Internet.
The big difference is phone calls are private, one-to-one, or occasionally, very small conference calls. Most usage of Twitter and Facebook is one-to-many (though one-to-one is supported, it’s not what anyone really cares about censoring).
Section 230 was from 1996 — far predating the existence of any social network. That legal regime has no bearing at all on their early practices. What likely does have bearing on their early practices is that sites early in their development have little ability to censor.
That’s really the argument, isn’t it? That Facebook or X aren’t a newspaper, aren’t a publisher at all, but instead are something like a common carrier. They certainly were acting in that manner during their growth phases. Perhaps because pre-Section 230, that was the safest way to avoid liability for content? Remember, Section 230 wasn’t intended to secure freedom of speech, it was expressly intended to contract it by allowing platforms to get away with at least some forms of moderation that were legally perilous.
Short of a legal fief, social media effectively is a common carrier in pretty much any important aspect you can list. It's clearly controlled by an oligopoly and has evolved into a vital basic service important to the livelihoods of an extremely large portion of the population. There's no honest way to argue otherwise. If the left didn't think it was important they wouldn't be losing their minds all day to this day over Elon and Twitter.
Short of a legal fief, social media effectively is a common carrier in pretty much any important aspect you can list.
No. Social media curates content, to tailor an audience, to monetize the audience, by sale of access to the audience to advertisers, who pay the bills for all that activity. Which is exactly and precisely a classic publishing business model, and unlike a common carrier business model in any particular.
They “curate” to the extent that they have an algorithm that takes my input of what I want to see and matches it with an input of what someone else has already said. They “tailor” to the extent that they have a trending page, have flagged keywords, and haphazardly decide what content is and isn’t acceptable.
It is not comparable to any publishing model because no such model has infinite space and practically no access restrictions. Twitter/X, for instance, has no idea what the vast majority of its content even is and so the notion that they somehow present a controlled, coherent message like a newspaper is absurd.
Jacob Grimes, Twitter/X was a bad choice of example. Its mis-curation is famously pushing it toward failure.
Also, social media companies do not, “have,” algorithms. They create and tune algorithms, purposefully, to pay the bills for your online amusement. You are not the user of these platforms. As with other vendors, the users are the customers who buy what the vendor has for sale. In the case of social media platforms, the users are the advertisers who buy access to the attention of the audience the publisher curates. You have the honor to be the product for sale.
We can also speak more generally, in terms of expressive freedom. In the public square, free speech comes free of cost. Nobody bills you per syllable to open your yap. Press freedom is different. It is inherently expensive. Facebook, for instance, incurs annual operating costs more than an order of magnitude greater than those of the NYT. Thus, press freedom does not come free, it comes at considerable expense. Yet social media devotees, who mistakenly style themselves users, pay not a nickel for their entertainment. They do not need to pay, because their attention is the product for sale.
All of that closely resembles the business models relied upon for decades prior to the internet by America’s institutional press. None of it resembles the business models relied upon by common carriers, either now or previously.
Never forget, if the business models which support press freedom do not work, press freedom will go away. Trying to adjudicate actual press freedom cases without close attention to business models is unwise. Section 230 was an enormously consequential policy blunder by Congress. Now the Supreme Court gets its opportunity. Let’s hope they do better.
A very long-winded way of saying that Twitter, etc. has the same business model as a newspaper because it has readers and advertisers. At least that’s what I think you were trying to say, because saying a Facebook user isn’t a Facebook user doesn’t make a lick of sense.
Not a lick of sense to you maybe. Facebook fans are important as the product for sale. The users who buy the product are the ones paying the bills for the ones you like to think of as the users. The reason you should keep that in mind is the one I mentioned in a previous comment. Speech freedom comes free. Press freedom is different; it is inherently expensive.
Without business models enabling publishers to pay their costs, press freedom disappears. That is a very real and too-little-noticed constraint on what law or public policy can do to manage the press. Potential for a policy blunder or a legal blunder which would inflict damage on press freedom is thus ever present. Arguably it already happened with passage of Section 230—accounting for why so many folks are so angry about social media today.
They should be doing a damn sight more than just letting an algorithm run. They should be dealing with reports of threats, harassment, porn, deepfakes, hacked or stolen data usually in the form of photographs, bots, scams, actionable statements, and so on. THAT is what is meant by 'curated' experience – having to put up with a minimum of all of that. People get *mad* when they keep pushing the algorithm on them instead of letting them control their own feed.
Short of a legal fief, social media effectively is a common carrier in pretty much any important aspect you can list.
This is so dumb, not even the conservative justices were buying it.
There’s a really obvious difference between social media and common carriers which users of social media are totally aware of. When you send a tweet or post a video or a picture of your cat on social media to the public at large, you’re not sending it “to” anyone in particular. You’re sending it to “everyone.” But not literally everyone, because that doesn’t make any sense at all. Everyone isn’t going to see every tweet or video or cat picture. So really you’re sending it to… all the people that the social media platform thinks should see it. Built into the very action of communicating on social media is the understanding that the social media platform is going to be making moderation decisions about who gets to see your thing. Nobody is remotely confused about that.
There’s another word for sending content to everybody: publishing. Publishing has always had an editorial component, exactly because sending something to everyone (aka publishing) is a qualitatively different activity than sending something directly to one or more recipients (aka common carriage).
That’s Lathrop’s favorite formulation, but I think “distributing” is a better framework than publishing. (Though Zeran treats distribution as a subset of publishing for liability purposes.) Publishers typically screen in advance, while distributors (bookstores, newsstands) typically do so after the fact. It doesn’t change your point that social media companies are not common carriers, though.
"…on social media to the public at large, you're not sending it 'to' anyone in particular."
almost all social media consists partially or entirely of direct messaging to people you explicitly choose.
"Built into the very action of communicating on social media is the understanding that the social media platform is going to be making moderation decisions about who gets to see your thing. Nobody is remotely confused about that."
Okay so if any given provider decides to curate messages, that automatically gives them a blank check to censor anything they please? So if ISPs wanted to censor all socialists, they could say 'Hey we're going to censor socialists from now on' and they would now be free to do so, and if you're a socialist who wanted an internet presence you're SOL?
If this is the way things should be why do giant international megacorp oligopolies get to enjoy this totally necessary freedom but not tiny small business cake shops? Is SSM cake from every single cake shop in the country a more crucial resource than access to internet communication forums?
‘and they would now be free to do so and if you’re a socialist who wanted an internet presence you’re SOL?’
You can complain, you can object, you can spread the word, look for support, get more people to join you in objecting and complaining. You can write articles, make videos, send out media packages. Suddenly the internet is abuzz about it. If the PR doesn't make them change their policies, then there really is nothing that can force them to. Yes, the fact that corporations, or just individual billionaires, can exert that kind of power is disturbing. Welcome to 2024.
Alternatively, folks could pressure Congress to repeal Section 230, which would solve the (so hypothetical no solution yet needed) problem.
This would solve exactly no problem, let alone the hypothetical one he postulates.
It would solve the problem of this not being the 1980s when Lathrop was last relevant.
I don’t think that would do anything to solve the problem. That would make the problem worse.
Nige, do I understand you to say that a myriad of smaller independent publishers, competing mutually for every viable niche of the opinion market, would not improve the problem of an opinion market dominated by a few giantistic platforms—with publishers at liberty to pick or reject contents at pleasure in either case?
almost all social media consists partially or entirely of direct messaging to people you explicitly choose.
You’re obviously a fart of a certain age.
Yes, there’s direct messaging functionality out there. But Texas and Florida were super clear that that’s not what they were worried about (except to the extent it might prevent the preliminary injunction from being upheld).
So if ISPs wanted to censor all socialists.
Where did ISPs ride in from? We were talking about social media, in particular the non-DM features of social media. And yes, they could censor all socialists. Truth Social already does.
If this is the way things should be why do giant international megacorp oligopolies get to enjoy this totally necessary freedom but not tiny small business cake shops?
Tiny small business websites do get to enjoy this totally necessary freedom as we found out in 303 Creative.
Tiny small business cake shops aren’t in the speech business. They’re in the cake business. Duh.
If you think Twitter is a “vital basic service”, you haven’t looked at how many people actually use Twitter.
It gets disproportionate attention because celebrities and politicians use it disproportionately, but most people don’t use it and only hear about it second or third hand.
Section 230 provides:
"No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
In this case, however, the social media companies explicitly ask the Court to treat them "as the publisher or speaker" of third-party content on their sites. They simultaneously argue that third-party content is their speech and that it is not their speech, depending on which characterization suits their position in a given case.
If the Court does rule for the social media companies that third-party speech on their sites is in fact their speech, to censor as they please, then traditional publishers should sue the government on Equal Protection grounds over the special immunity that social media companies receive under Section 230 but traditional publishers do not.
Section 230 also says:
"No provider or user of an interactive computer service shall be held liable on account of … any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected."
The problems with how courts have applied this are that “or otherwise objectionable” is improperly treated as infinitely flexible (as Brett mentioned above) and also that “in good faith” is read out of the requirements for immunity. Companies acting in bad faith have been held immune to liability.
What company was found to have acted in bad faith but was immune to liability anyway?
You do know that SCOTUS tossed most of the CDA, don't you?
What company was found to have acted in bad faith but was immune to liability anyway?
Another bit of accurate insight into what is actually going on. These bits are beginning to accumulate. No sign that they have yet had any effect on internet utopian thinking, however.
In this case, however, the social media companies explicitly ask the Court to treat them "as the publisher or speaker" of third-party content on their sites. They simultaneously argue that third-party content is their speech and that it is not their speech, depending on which characterization suits their position in a given case.
If the Court does rule for the social media companies that third-party speech on their sites is in fact their speech, to censor as they please, then traditional publishers should sue the government on Equal Protection grounds over the special immunity that social media companies receive under Section 230 but traditional publishers do not.
Disagree.
It’s their site, they have the right to control what appears on it, but practically speaking they can’t be expected to be liable for every word that appears on it.
There’s only two reasons this is remotely controversial:
1) On the left, there was a perception that harassment and extremism were getting out of hand, and there was a question whether the sites should be forced to do more w.r.t. moderation.
2) On the right, a number of prominent voices engaged in harassment and extremism until they got banned, so there's a question whether sites should be allowed to moderate/ban at all.
If rational-basis review applied, how could the traditional publishers win their case?
Social media companies do not receive special immunity under Section 230. Every Internet site with user-generated content gets exactly the same protections. Social media companies, Reason’s website and the Volokh Conspiracy in particular, your aunt’s knitting blog (if it has comments), “traditional publishers” with respect to the user content they host (i.e., comment sections), etc.
The oral hearing was interesting but unsatisfying. No participant addressed the operative statutes of Title 47 (Telecommunications).
The FCC does not define a service to be a common carriage service. The common law definition of common carriage determines whether a service is common carriage. The FCC decides whether a communications-related common carriage service is a telecommunications service that the FCC should regulate. When the FCC makes a determination of its regulatory authority, it applies the following definitions: § 153 (11) Common carrier, § 153 (24) Information service, § 153 (50) Telecommunications, § 153 (51) Telecommunications carrier, § 153 (52) Telecommunications equipment, and § 153 (53) Telecommunications service. When the definitions are applied together, they state that: (a) § 153 (11) Common carrier defines a communications common carrier, (b) § 153 (51) Telecommunications carrier defines a telecommunications carrier, and (c) if a telecommunications carrier is a common carrier, it is a communications common carrier, (d) but not every communications common carrier is a telecommunications carrier.
Every social medium platform provides a service of common carriage of messages. For that service, the backend server is like a letter satchel of a letter carrier. A message in the backend server of a social medium platform is bailment of the social medium platform, not speech of the social medium platform, just as a letter in the satchel is bailment of the USPS, not speech of the USPS. The USPS has only limited legal ability to deny common carriage of a letter. A social medium platform should have only limited legal ability to deny common carriage of a user's message.
A social medium platform must obey the following statute.
47 U.S. Code § 202 – Discriminations and preferences
(a) Charges, services, etc.
It shall be unlawful for any common carrier to make any unjust or unreasonable discrimination in charges, practices, classifications, regulations, facilities, or services for or in connection with like communication service, directly or indirectly, by any means or device, or to make or give any undue or unreasonable preference or advantage to any particular person, class of persons, or locality, or to subject any particular person, class of persons, or locality to any undue or unreasonable prejudice or disadvantage.
(b) Charges or services included
Charges or services, whenever referred to in this chapter, include charges for, or services in connection with, the use of common carrier lines of communication, whether derived from wire or radio facilities, in chain broadcasting or incidental to radio communication of any kind.
(c) Penalty
Any carrier who knowingly violates the provisions of this section shall forfeit to the United States the sum of $6,000 for each such offense and $300 for each and every day of the continuance of such offense.
47 U.S. Code § 202 forbids discrimination by locality, and no party to the oral hearing seemed to realize that the US Internet belongs mostly to the government and to the public.
A social medium platform is highly subsidized by the public and by the government.
So let’s tax them, 50% tax on advertising.
Her name is Laken Riley
His name is Ricky Shiffer
If I were the Justices here, I would use Pruneyard as a model, because that most clearly fits the facts that are going on currently. (https://en.wikipedia.org/wiki/Pruneyard_Shopping_Center_v._Robins)
-A mall can act as a privately owned, public forum.
-Just like Facebook is acting as a privately owned, public forum.
-If you have individuals screaming racist threats in a mall, mall security can (and will) remove you.
-Just like Facebook can remove racist messages that it finds violate the terms of service.
-If someone says something libelous in a mall, the mall isn't liable for it. The speaker is.
-Just like if someone says something libelous in a Facebook forum, Facebook isn't liable, the speaker is.
-And under Pruneyard, certain types of speech cannot be prohibited by a mall.
-Just like certain types of speech should not be able to be prohibited on virtual public forums.
The analogies between a privately owned, public forum, and a privately owned, “virtual” public forum are pretty clear here.
I’m not a fan of Pruneyard, but I do think your analogy is appealing.
And the US S. Ct. upheld a CA decision allowing speech in a public forum (mall) hosted by a private company because, like the TX constitution, the CA constitution contains an AFFIRMATIVE command to free speech (vs the negative command of the 1st Amendment). TX: '….Every person shall be at liberty to speak, write or publish his opinions on any subject, being responsible for the abuse of that privilege; and no law shall ever be passed curtailing the liberty of speech or of the press. …' TX and CA are both allowed to provide their citizens with greater protections than the 1A. It's totally BS to think private internet companies hosting public forums can prohibit speech at their whim. That's the law of the land.
This might not affect the liability question, but one thing online forums like Facebook do that a mall doesn't is boost speech, even libelous speech (and sometimes especially libelous speech) as part of their business model, so it seems disingenuous to say that they are neutral about speech, even though it also doesn't seem to qualify as editorial as in a traditional publisher. There's also a difference in primary purpose: a mall's primary purpose is to serve as a marketplace. Speech is incidental. The primary purpose of Facebook is to encourage speech, select speech that gets traction, and boost it in order to encourage engagement, which increases ad revenue.
Again, it doesn’t seem that the distinction makes a difference to this case or changes anything, but the analogy has to me a significant difference.
You’re wrong: https://reason.com/volokh/2024/02/27/supreme-court-seems-likely-to-strike-down-florida-and-texas-social-media-laws/?comments=true#comment-10464571
I’m Right. You’re wrong. :p
All this discussion proves is that current efforts to regulate social media require coming up with a workable classification or model of what it is for legal purposes, which mostly involves trying to smash a square peg into three or four round holes. It's absolutely futile. The EU doesn't seem to be doing much better fwiw.
Which is remarkable, Nige, because if you present social media to a publisher-shaped hole, it slides in without difficulty.
One part of it certainly does. Then it gets stuck when everything else tries to get through.
Nige, can you say more about what gets stuck? This could be a productive discussion.
Communities. Communities get stuck. They’re not writing books or articles to each other, they’re writing letters.
Nige, are you talking about private letters, person-to-person, or published letters, one-to-many? Are you talking narrow geographic scope, just in my neighborhood, or world-wide scope? Are you talking about ephemera, or are you talking about durable records, recoverable for years by online search?
Will Republicans now support Net Neutrality? It is much easier to say that Comcast is a common carrier than Facebook.
Check out Kavanaugh’s final impromptu question to General Prelogar to find your answer.
Hint: no.
I agree with the fairly obvious public vs. private distinction identified by the apparent majority. It isn’t the legality of the private censorship that disturbs me, it is the providers’ lying about it.
In prior posts, I suggested that Section 230 be amended to require disclosure of a platform's criteria for removal, and that if it is shown that the platform failed to follow its own rules, it would lose Section 230 immunity.
One poster here thought it would be too easy to get around by making vague rules. Perhaps specific disclosures (answers to specific questions) could be required.
But I agree with you, there is a smell of bait-and-switch and hidden agendas in much of how the platforms operate de facto.
The Volokh Conspiracy does not like the way you think.
Says you. Bet more agree with my posts than your juvenile rantings.
More what?
More of the disaffected, antisocial right-wingers at this white, male, bigot-hugging blog, or more Americans (especially educated, reasoning, modern, successful, mainstream Americans)?
The point was that the Volokh Conspiracy doesn’t want to talk about its viewpoint-driven, partisan censorship or the ostensible “rules” it claims to be enforcing with its censorship.
The Conspirators and their fans prefer to pretend that the censorship doesn’t happen and to cling to silly claims this blog is a free speech champion.
Has anyone argued that Section 230 pre-empts these laws? Its express purpose was to encourage ISPs to censor some, but not all, content. That seems to be in tension with state laws that limit censorship.
Bored Lawyer probably means ICS (Interactive Computer Service). I disagree with the characterization of Section 230.
Yes, but you’re a fucking idiot.
The plaintiffs did in fact make those arguments in these cases, but that wasn’t the issue that SCOTUS took up the cases to address.
Odd. Under the doctrine of Constitutional avoidance, I would think that should be addressed first.
There was a pretty good explanation at one point of why this 230 pre-emption argument didn’t hold up. It essentially said… if I can remember… something like:
We can assume these laws aren’t totally preempted by 230 because the states wouldn’t have felt the need to pass any laws if 230 were already doing it all. So the states intended them to go further than 230. How much further doesn’t really matter.
Bored Lawyer, yet another fragment of correct insight. If yet more commenters keep coming up with these little gems, maybe this discussion can unstick itself and become constructive.
Does anyone really buy the “curated forum” line or the “it’s just like not getting my piece picked up by Fox News” arguments? Or that hosting certain speech is anything but a negligible burden (how many bytes is 147 characters?)?
The best arguments against the laws are based in law and principle. These policy considerations are utterly unpersuasive to anyone who’s actually used social media sites.
Yes, I absolutely believe it.
Easy example. You’re facebook. You make money by selling ads to advertisers. At a minimum, you want to be able to tell advertisers that their ads will not appear on a platform with pornography.
You curate your platform so that your brand (facebook) is not associated with that type of public-facing content, because that’s how you monetize it.
You can keep going, with different types of content.
If you want to post your porn (or other types of content), then you just go to … well, whatever the new 4chan is, which doesn’t have the same concern about curating their content.
Your argument is nothing more than, “All places on the internet have to accept all speech, no matter what.” Which is, well, it’s certainly something! But there are a lot of different places that have heavy moderation, and that succeed because of that. GIGO, you know?
A common carrier of messages could define multiple tiers of service like the movie code.
A common carrier of messages could define the message type that is fit for the common carrier to transport, e.g., a message that is appropriate for an elementary schooler.
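As a rough illustration of how such tiers might work, here is a minimal sketch in Python, assuming an invented set of tier names and an invented keyword classifier; the names (TIERS, classify, carry) and the placeholder terms are hypothetical, not drawn from any actual rating scheme or statute.

TIERS = ("G", "PG", "R")  # stand-ins for "fit for an elementary schooler" through adult-only

def classify(message: str) -> str:
    # Hypothetical classifier; a real carrier would publish its own rating rules.
    text = message.lower()
    if "explicit-example" in text:
        return "R"
    if "mild-profanity-example" in text:
        return "PG"
    return "G"

def carry(message: str, subscriber_tiers: set) -> bool:
    # Transport the message only if its tier is one the subscriber opted into.
    return classify(message) in subscriber_tiers

On this model the carrier still transports everything to somebody; the tier choice sits with the subscriber rather than with the carrier deciding unilaterally what no one may see.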
Do you think advertisers on Twitter are okay with people hawking used underwear? How about foot fetish pics? Foreign terrorist groups posting propaganda? Because all that stuff was easily accessible long before Musk took over and still is. It is astounding how much depravity you can find there that technically isn’t illegal, racist, or obscene. Advertisers didn’t care then and still don’t.
Second, the “brand association” concept doesn’t make sense if you think about it for more than a few moments. How exactly do consumers notice ads next to “objectionable” content? First, they have to SEEK OUT the objectionable content. And if the only way people are going to see the ads is by voluntarily seeking out that content, why are they going to be mad at an advertiser for being peripherally associated with something that they themselves are interested in?
“Do you think advertisers on Twitter”
As has already been pointed out, Twitter/X is not your best example. We have already seen that less (still more than ABSOLUTELY NONE) content moderation is absolutely anathema to mainstream advertisers.
“Second, the “brand association” concept doesn’t make sense if you think about it for more than a few moments.”
Yeah, I have thought about it for more than a few seconds- clearly, you haven’t.
The internet provide a free market. If you don’t like a particular app, website, or platform, use a different one. There are a lot of places to speak. Find one that you like, instead of demanding that all private parties carry your message. That’s not how it works.
"As has already been pointed out, Twitter/X is not your best example. We have already seen that less (still more than ABSOLUTELY NONE) content moderation is absolutely anathema to mainstream advertisers."
Ok, you say that, but like I said, plenty of horrible stuff advertisers don’t like was already on Twitter before Musk took over and those advertisers were mainstream (Apple, Disney, etc).
“Yeah, I have thought about it for more than a few seconds- clearly, you haven’t.”
Compelling stuff, here.
That’s because you don’t actually understand what you’re talking about.
Facebook is a brand. It provides a certain curated experience. That experience is different than TikTok. It is different than Twitter. It is different than Snapchat. It is different than Instagram (yes, I know IG is owned by FB). It is different than Reddit. It is different than Tumblr. It is different than LinkedIn. It is different than the comments section on a financial blog. It is different than a hobbyist forum. And so on, and so forth.
These places make money from advertising. There is a free market- advertisers choose where to spend their ad dollars. Maybe they want to spend it on FB. Maybe on twitter. Maybe on the comments section of a financial blog. And those different places curate the content that they publish to ensure that they are attractive to advertisers.
You also have choices- you can publish your speech on one of those curated platforms, and you can choose which one! I mean, you have to abide by the private party's rules and regulations if you want to leverage their publishing ability for your speech.
Or, you can make your own website, and publish there. It’s almost like this is how things are supposed to work.
On the other hand, you can just whine and demand that private entities carry your speech, with the threat of government compulsion. Weird, I feel like there is some …. thing … about the government compelling private entities to publish your speech as part of their curated content.
But hey- clearly you’ve thought about this a lot, and I haven’t. Good for you!
You’re just saying they have a specific brand without describing what it actually is (because it doesn’t exist in any meaningful sense due to the huge number of users and types of content available), insisting that they curate without addressing obvious counterexamples of them not curating, and then writing a bunch of paragraphs about the free market.
That seems exactly what someone who has no clue what they are talking about would write.
Wow. You really are a fan of beclowning yourself, aren't you?
Social-media platforms aren’t “dumb pipes”: They’re not just servers and hard drives storing information or hosting blogs that anyone can access, and they’re not internet service providers reflexively transmitting data from point A to point B. Rather, when a user visits Facebook or Twitter, for instance, she sees a curated and edited compilation of content from the people and organizations that she follows. If she follows 1,000 people and 100 organizations on a particular platform, for instance, her “feed” —for better or worse—won’t just consist of every single post created by every single one of those people and organizations arranged in reverse-chronological order. Rather, the platform will have exercised editorial judgment in two key ways: First, the platform will have removed posts that violate its terms of service or community standards—for instance, those containing hate speech, pornography, or violent content. Second, it will have arranged available content by choosing how to prioritize and display posts—effectively selecting which users’ speech the viewer will see, and in what order, during any given visit to the site.
Accordingly, a social-media platform serves as an intermediary between users who have chosen to partake of the service the platform provides and thereby participate in the community it has created. In that way, the platform creates a virtual space in which every user—private individuals, politicians, news organizations, corporations, and advocacy groups—can be both speaker and listener. In playing this role, the platforms invest significant time and resources into editing and organizing—the best word, we think, is curating —users’ posts into collections of content that they then disseminate to others. By engaging in this content moderation, the platforms develop particular market niches, foster different sorts of online communities, and promote various values and viewpoints.
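For readers who want the two-step editorial judgment described above made concrete, here is a minimal sketch, assuming hypothetical field names, a placeholder banned-term list, and an invented scoring rule; none of it reflects any platform's actual code.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    likes: int
    followed_by_viewer: bool

BANNED_TERMS = {"spam-link", "slur-example"}  # stand-ins for terms-of-service rules

def violates_terms(post: Post) -> bool:
    # Step one: removal. Drop posts that break the (hypothetical) community standards.
    return any(term in post.text.lower() for term in BANNED_TERMS)

def score(post: Post) -> float:
    # Step two: arrangement. Prioritize posts the viewer is likelier to engage with.
    return post.likes * (2.0 if post.followed_by_viewer else 1.0)

def build_feed(posts: list) -> list:
    kept = [p for p in posts if not violates_terms(p)]
    return sorted(kept, key=score, reverse=True)

Both steps are choices about whose speech a given viewer sees, which is the "curation" the opinion is describing.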
‘Ok, you say that,’
The advertisers were saying that.
‘How exactly do consumers notice ads next to “objectionable” content? First, they have to SEEK OUT the objectionable content’
It's the internet. If it's there someone will find it and point to it. The more of it there is, the more there is to point to. This isn't a small thing, or an irritation, it's how social media works.
You’re not making a legal argument; you’re just saying you don’t think it’s a good law. But that is a political argument to be decided by the people’s elected representatives, not the courts.
A state, for example, might pass a law prohibiting employers from barring their employees from wearing clothing bearing political messages. Companies might protest that this might scare away some customers or clients. That may very well be, but that wouldn’t make the law unconstitutional. These laws are no different from any anti-discrimination laws that a state might (or might not) choose to pass.
Well, I was asked a specific question- as to whether or not anyone believe it. I answered that.
You’re asking a legal question. And to me (putting aside Section 230), the legal question is pretty simple as well. I think that this falls squarely within Miami Herald Publ’g Co. v. Tornillo.
"Appellee's argument that the Florida statute does not amount to a restriction of appellant's right to speak, because 'the statute in question here has not prevented the Miami Herald from saying anything it wished,' [Footnote 21] begs the core question. Compelling editors or publishers to publish that which 'reason' tells them should not be published is what is at issue in this case. The Florida statute operates as a command in the same sense as a statute or regulation forbidding appellant to publish specified matter. Governmental restraint on publishing need not fall into familiar or traditional patterns to be subject to constitutional limitations on governmental powers. The Florida statute exacts a penalty on the basis of the content of a newspaper. … Even if a newspaper would face no additional costs to comply with a compulsory access law and would not be forced to forgo publication of news or opinion by the inclusion of a reply, the Florida statute fails to clear the barriers of the First Amendment because of its intrusion into the function of editors."
Pruneyard is indeed the strongest case for these laws, but (a) Pruneyard was wrong; (b) Pruneyard is distinguishable; and (c) there’s no way that this court would come down the same way if Pruneyard came before it today.
Tornillo does do basically all the work needed to reject these laws. It rejects every one of the arguments people use in favor of them. Including — as I noted the other day — the specious “monopoly” argument.
I like the opening of Justice White’s concurrence in Tornillo:
(emphasis added)
And, by the way, as soon as you start making analogies to things that have absolutely nothing to do with the issue at hand, you show that you have a lack of understanding as to what the actual issues are.
Analogies can simplify things, but they should at least be … I don’t know … helpful?
So how is this different than a hypothetical state law prohibiting telephone companies from cutting off access to service due to the content of their user’s speech on the telephone line?
The telephone companies aren’t producing a content-based product.
My turn. Why isn’t this the same as a real state law prohibiting a wedding website designer from censoring gay couples from her platform?
It would, if the designer was hosting a publish-your-own-wedding-web-page service.
303 Creative was decided the way it was because of the state of Colorado’s factual concessions.
Firstly, anti-discrimination laws facially regulate conduct, not speech. They might impermissibly regulate speech (e.g., in 303 Creative) as applied to some cases. In contrast, the laws at issue in this case regulate speech.
The analogy to employee speech has some merit. As noted in the OP, the Florida law may be permissible as applied to Uber because the speech the law regulates is not Uber's speech (Uber is not an expressive business). Perhaps it is the case that a law regulating employee speech is not a regulation of the employer's speech.
However for your analogy to work for a social media company, you have to assume the Florida and Texas laws don’t regulate the company’s speech. But, that’s the basic question debated at orals and your analogy’s assumption doesn’t help answer it.
However for your analogy to work for a social media company, you have to assume the Florida and Texas laws don’t regulate the company’s speech. But, that’s the basic question debated at orals and your analogy’s assumption doesn’t help answer it.
We already have the precedent of common carriage laws as applied to telephone companies, which forbid them from refusing to transmit speech merely because they do not like what is being spoken.
Let’s set aside the very different function and purpose of telephone companies vs. social media companies, and answer me this: what makes you think the phone companies oppose being common carriers? Maybe it does violate the phone companies’ 1A rights to “forbid” them from refusing to transmit objectionable speech. But if they don’t have any interest in doing that anyway, then the legal issue won’t arise.
That too gets us back to the basic question: are social media companies more like telephone companies, newspapers or bookstores (or name your analogy).
JoshR, here is some advice on how to resolve your basic question. Look to what happened as social media companies grew, thrived, and began to outcompete other businesses for revenue to pay their bills. Which class of competitors suffered most. It was newspapers, hands down.
The market you are in, and the kind of business you practice, will be revealed by which competitors you affect.
How do you distinguish Facebook from verizon?
After all, by NetChoice’s argument, verizon has a First Amendment right to block people who use their communications infrastructure to spread Communist ideas.
and what about telephone companies at the height of the Civil Rights movement? Again, common carriage laws were already in effect, so I doubt any of them denied service to civil rights activists because of their message. But the argument to the Supreme Court implies that these telephone companies had the First Amendment right to cut off telephone services to those who would use telephones to spread ideas about racial equality.
As I just mentioned to you above: maybe they did have such a right. (I’m not asserting that they did. I’m just saying maybe.)
You can’t say, “Why isn’t it okay to do this to B if it’s okay to do this to A?” without actually showing that it is okay to do this to A.
“At a minimum, you want to be able to tell advertisers that their ads will not appear on a platform with pornography.”
Sheesh. Do you actually USE Facebook? The idea that they’re curating to exclude porn is ludicrous.
I use Facebook, and while I stress I’m not looking for or trying to post porn, I know folks who have had pretty innocent things taken down because they show a bit too much skin. And I don’t see anything pornographic on FB.
Again, Brett makes up facts.
Or, Brett is not aware of what pornography is.
Hmmmm…. tough call.
Note that Brett admittedly does not use Facebook.
Where did I admit that? I tried dropping it, but it’s really the only practical way to keep up to date with what my relatives are doing back in Michigan. It’s just that it’s basically useless for anything political, that I have to do on MeWe.
Though it’s actually getting less useful even for non-political stuff as time goes by, they’re pushing so many scam ads my way.
See, that’s what happens when you don’t moderate, or moderate badly, or have decided, fuck the users, take the money.
Facebook and X are both examples of fairly atrocious moderation, but Facebook is notorious for erring on the side of blocking innocuous stuff as salacious. EXCEPT, as recently emerged, for this appalling practice of 'child-influencers,' young kids whose parents post pictures of them in skimpy clothes and then collect revenue off the hits, while the comment sections are flooded with the ugliest, nastiest of comments, many from pedophiles. Mostly on Instagram, iirc, part of the Zuckerberg stable.
Prof. Volokh and Artie Ray observed that from varying perspectives.
That could explain Prof. Volokh’s lack of commentary on this one.
Artie Ray’s lack of commentary is, of course, explained by Prof. Volokh’s viewpoint-driven censorship.
So what is the difference between a state court enjoining Facebook from removing content from its platform, and enjoining a telephone company from blocking speech on its platform, in the context of freedom of speech?
I think you should keep posting this same question over and over again.
In your opinion, did telephone companies have a First Amendment right to cut off telephone calls if they did not like what was being said?
Probably, yes — with the caveat that if the government grants a telephone company a legal monopoly on telephone service, then you’re running into a much more state action-y situation, and the answer might be different. But it’s hard to believe that any telephone company would want to have that as its business model, so I doubt the issue will ever be tested.
When the telephone company transports a customer’s speech from one handset to another handset, the customer’s speech is bailment of the telephone company while the customer’s speech traverses the network. This bailment is not speech of the phone company. The phone company must transport such bailment without discrimination both by state common carriage law and also by federal telecommunications law.
The legal logic was worked out over 100 years ago even though the statute below first became part of the US federal code in 1934.
47 U.S. Code § 202 – Discriminations and preferences
(a) Charges, services, etc.
It shall be unlawful for any common carrier to make any unjust or unreasonable discrimination in charges, practices, classifications, regulations, facilities, or services for or in connection with like communication service, directly or indirectly, by any means or device, or to make or give any undue or unreasonable preference or advantage to any particular person, class of persons, or locality, or to subject any particular person, class of persons, or locality to any undue or unreasonable prejudice or disadvantage.
(b) Charges or services included
Charges or services, whenever referred to in this chapter, include charges for, or services in connection with, the use of common carrier lines of communication, whether derived from wire or radio facilities, in chain broadcasting or incidental to radio communication of any kind.
(c) Penalty
Any carrier who knowingly violates the provisions of this section shall forfeit to the United States the sum of $6,000 for each such offense and $300 for each and every day of the continuance of such offense.
I know a lot about the law and various other lawyerings. Now let’s say you and I go toe-to-toe on bird law and see who comes out the victor?
Title 47 is much more part of the life of an ordinary American than Bird Law is.
We are all aware of such laws. The Fifth Circuit cited this tradition of common carriage laws in their ruling.
The question here is if common carriage laws, to the extent they would require private entities to host, display, or transmit speech with which they disagree, violate the First Amendment.
There’s no difference. Exactly the same way there’s no difference between Person A talking to Person B on a telephone line, and Person A publishing a newspaper that Person B reads.
The newspaper publisher is not a common carrier in its newspaper publishing business.
The telephone company is according to current Title 47 a telecommunications common carrier.
Ilya's analogizing of Facebook to the NY Times is totally wrong. No Facebook user logs in thinking he is reading the letters to the editor or OpEd pages of Mark Zuckerberg's private newspaper. The peer to peer interaction is what drives Facebook users. An adaptation of Common Carrier is an appropriate response to the problem.
A technical solution to the problem would be to open the moderation algorithm APIs to 3rd parties. The user could choose his own level and type of moderation.
Bill B
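A minimal sketch of what that proposal might look like, assuming a purely hypothetical hook the platform could expose; the filter names, keyword lists, and the render_feed function are invented for illustration and do not describe any real platform API.

from typing import Callable, List

ModerationFilter = Callable[[str], bool]  # returns True if a post should be shown

def strict_filter(text: str) -> bool:
    # Hypothetical "family friendly" setting, perhaps supplied by a third party.
    blocked = ("slur-example", "scam-link", "gore-example")
    return not any(word in text.lower() for word in blocked)

def lenient_filter(text: str) -> bool:
    # Hypothetical "only drop obvious scams" setting.
    return "scam-link" not in text.lower()

def show_everything(text: str) -> bool:
    return True

def render_feed(posts: List[str], chosen_filter: ModerationFilter) -> List[str]:
    # The platform applies whatever filter the user (or a third party they trust) registered.
    return [p for p in posts if chosen_filter(p)]

Two users could then see very different views of the same underlying posts, e.g. render_feed(posts, strict_filter) versus render_feed(posts, show_everything), which is the choice this proposal would hand to the user rather than to the platform.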
The user could choose his own level and type of moderation.
This doesn’t really work. Most users would keep the default. So now you’re just arguing over the default.
“A technical solution to the problem would be to open the moderation algorithm APIs to 3rd parties. ”
Beyond a minimum level of consensus moderation, this is actually the regime Section 230 anticipated:
“(b)Policy
It is the policy of the United States—
…
(3)to encourage the development of technologies which maximize user control over what information is received by individuals, families, and schools who use the Internet and other interactive computer services;
(4)to remove disincentives for the development and utilization of blocking and filtering technologies that empower parents to restrict their children’s access to objectionable or inappropriate online material;
…”
“(2)Civil liability
No provider or user of an interactive computer service shall be held liable on account of—
…
(B)any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1)”
“(d)Obligations of interactive computer service
A provider of interactive computer service shall, at the time of entering an agreement with a customer for the provision of interactive computer service and in a manner deemed appropriate by the provider, notify such customer that parental control protections (such as computer hardware, software, or filtering services) are commercially available that may assist the customer in limiting access to material that is harmful to minors. Such notice shall identify, or provide the customer with access to information identifying, current providers of such protections.”
I used to use such a third party application with Facebook. Rather than pointing out where to find it, and facilitating its use, they churned their code to regularly break the application.
Why? When your business model is putting unrequested stuff in front of users' eyes, the last thing you want is for the users to have any way to control what they see…
If allowed, ideally, users curate their own feed. They follow and friend who they like, block and report anyone who harasses them or posts unwelcome content on their feed. It’s the basic social media model. Around that there’s a lot of stuff the company has to watch out for, though, if they’re being responsible and prioritising their users.
This is not my area of expertise, so apologies in advance if this is not a cogent question.
One of the features of one of these laws was a requirement that the social media companies post their moderation criteria so that people can know the rules of the road before they post. How is that distinguishable from requirements for food companies to list ingredients, for public companies to file 10-Ks, or things like that? Why isn’t that just a consumer protection measure?
These would fall under the less-exacting Zauderer standard.
So the short answer is that … it depends.
Arguably, requiring platforms to publish information regarding their moderation standards (even if it includes some discretionary standards) would be fine.
Making platforms provide individualized and detailed explanations for every moderation decision, on the other hand, would likely be impermissible.
In short- you can require disclosure of standards, but not really require a specific process or explanation for specific decisions.
Probably.
There was an interesting conversation in the arguments yesterday about how the states were using the disclosure requirements as another way to influence the editorial policies. Texas admitted as much in its brief.
Right. But IIRC, they were specifically discussing the disclosure requirement regarding individualized and detailed explanations for every decision. Which would likely run afoul of the first amendment (and the burdensome nature of it was a feature, not a bug, of the legislation).
But requiring disclosure of just the standards would be fine, even if it was just the general things that would get you moderated.
The problem is that the social media companies do include a list of some stuff that’s forbidden in their Terms of Service, but then they expressly say, “Or anything else we decide in our sole discretion we don’t like” or something along those lines. Either that’s acceptable under these ‘disclosure’ laws — in which case they do nothing — or it’s not, in which case, what?
Does any creative-enough sociopath get a free week’s or month’s worth of posts of something sociopathic because Facebook or Twitter never thought of that thing and so didn’t specify it? They can change their rules after it’s brought to their attention, but presumably not retroactively or the disclosure is again not doing any work.
Isn't this the issue Prof. Volokh was dealing with (or still is, I think the state appealed) in New York?
https://www.thefire.org/news/broad-coalition-supports-fires-challenge-new-yorks-online-hate-speech-law-second-circuit
Tangentially related, a federal judge in LA dismissed a federal anti-riot prosecution against right-wing protestors because left-wing protestors at the same protest did the same or worse, but were not prosecuted. Which in his view is viewpoint discrimination.
Here is a summary from the opinion itself:
The full opinion is here:
https://storage.courtlistener.com/recap/gov.uscourts.cacd.728039/gov.uscourts.cacd.728039.334.0.pdf?utm_source=substack&utm_medium=email
Apparently the Ninth Circuit has ordered the Defendants arrested pending the appeal.
This one will certainly be interesting.
Has anyone not named Yick Wo ever successfully asserted a selective prosecution claim? It does not appear from the opinion that this Bush II appointee conducted an evidentiary hearing on the defendants’ claim.
United States v. Armstrong, 517 U.S. 456, 116 S. Ct. 1480 (1996), explains how to make a selective prosecution defense effective. As far as I know, no defense has followed the recipe.
FWIW, Armstrong states that a selective prosecution claim is not a defense on the merits to the criminal charge itself. 517 U.S. 456, 463 (1996).
SCOTUS tells us exactly what evidence is to be used for a defense of selective prosecution.
That’s with regards to discovery. Selective prosecution is still a basis to dismiss a case, at least in theory (although in practice it has a chance of a snowball in you-know-where.)
What makes this case more interesting is that the non-prosecuted parties were at the very same riot. That’s different from the typical selective prosecution case.
As for an evidentiary hearing, that is generally only required where there are disputed issues of fact. I would have to study the record to see if that is the case.
Now apart from the legal end, as citizens, we are entitled to question our government. As presented in the opinion, two ends of the political spectrum showed up somewhere and rioted. One side was prosecuted, one was not, even though both sides engaged in criminal behavior. Perhaps there is an explanation, but so far I have not heard one. To quote Ricky Ricardo, “you got some splaining to do.”
The federal First Amendment was never intended to apply to the States.
To have a small unelected group known as the federal judiciary in control, to a large degree, of the majority of government policies, including state and local governments, is really not much like any sort of “self-government.”
It would have been unthinkable to every American founder. It is largely alien and contrary to American legal and political thought prior to around the middle of the last century.
1) I wonder if you apply that logic to the Second Amendment. Do you think states should be free to ban whatever arms they want?
2) Your claim that it was “never” intended to apply to the states is wrong. It’s possible that the 1A was not intended to apply to the states in 1791. But it was intended to apply to the states in 1868, when the 14A was adopted to do just that.
1) Yes. 2) Disagree.
“It’s possible that the 1A was not intended to apply to the states in 1791.”
Possible, eh? Subtle tells of the most odious propagandist on this website.
How much of your worldview is based on wishing it was 1850?
The per capita federal spending you want, the nullity of a Natural Born Citizen clause, the lack of incorporation.
Possible, yeah. After all, it wasn’t until 1833 that SCOTUS ruled that the Bill of Rights didn’t apply to the states.
State regulation at the end-user point of presence by a communications common carrier was always explicit and implicit in the FCC framework that Title 47 created. The Telecommunications Act of 1996 did not change this two-tier framework but only made explicit communications common carriage that was not regulated by the FCC. Thus the Telecommunications Act of 1996 introduced the concepts of telecommunications service [47 U.S. Code § 153 – Definitions (53) Telecommunications service] and of information service [47 U.S. Code § 153 – Definitions (24) Information service]. Nothing in the legislative record indicates that Congress intended to exempt a social medium platform, which is an obvious communications common carrier, from 47 U.S. Code § 202 – Discriminations and preferences.
As far as I can tell, state regulators continue to use the traditional definition of a telegraph, and no federal statute has overridden this definition: “A telegraph service is a service that transmits a message electrically by wire or by wireless means.” See Easylink Servs. Int’l, Inc. v. State Tax Appeals Tribunal, 101 A.D.3d 1180, 955 N.Y.S.2d 271, 2012 N.Y. Slip Op. 8366 (N.Y. App. Div. 2012). The operative rule or regulation is N.Y. Comp. Codes R. & Regs. tit. 20 § 527.2 (d). A social medium platform meets the definition of a telegraph service in NY and probably in every other state.
Since the start of US telegraph service in 1845, states have been regulating what a communications common carrier must and must not transport. See attached article. If someone believes there is much difference between the original telegraph service and the service that a social medium platform provides, he must believe a social medium platform operates by magic.
The Texas and Florida laws, which regulate a social medium platform, are completely in line with the legal history of communications common carriage.
A lot of law must be clarified by full litigation. I hope the Supreme Court will remand both cases to the trial court and order the Texas and Florida laws to be put into effect until NetChoice can show why they violate Title 47 or the US Constitution.
The Telecommunications Act of 1996 was intended (a) to clarify that the FCC did not regulate a service like (USPS) E-Com and (b) to get the FCC out of regulation of the telegraph system, which was almost obsolete by 1996. The actual telegraph system came under the traditional generic definition of a telegraph system, but telegraph messages were usually stored at intermediate switching centers until a communications path became free. In contrast, Telex service was like telegraph service but used a demand-dialed switched network. Both services are communications common carriage services, but Telex is a telecommunications common carriage service that the FCC continued to regulate even after it stopped regulating telegraph service.
I am curious why you continue to post this. It’s wrong, and nobody reads it or cares, especially the courts. Remember how your lawsuit based on this theory was so bad it was dismissed without the defendants even needing to answer it?
I used to work with attorneys who worked in the AT&T legal department and who knew Title 47 backwards and forwards. Why should I trust David Nieporent over those attorneys? David Nieporent used to deny that "common carrier of messages" was even legal terminology.
I also trust Blackstone more than I trust David Nieporent.
Blackstone tells us that common carriage law is founded on assumpsit.
Did you even read that Blackstone thing you quoted? It never talks about “non-discriminatory.” But even more importantly, by its definitions, social media is definitely not a common carrier because its business isn’t carriage at all, it’s advertising.
Can you read and comprehend a simple paragraph?
Blackstone addresses all the common professions and was not addressing a common carrier specifically.
A reasonable person would have realized that an obvious transformation must be made to Blackstone's last sentence so that it applies to a common carrier of messages.
“Also, if a telegraph company, or other electrical carrier of messages by wire or by radio, hangs out a sign and offers a public interface for message transport, it is an implied engagement to transport a message for all persons who need such transport; and upon this universal assumpsit an action on the case will lie against the carrier for damages, if the carrier without good reason refuses to transport a message.” Duh!
The business model is irrelevant. If the business offers a service of carriage to the public under standard terms for a fee, the service is common carriage. Duh!
In the case of social media, there is no fee.
The point of Blackstone is, the "common" in terms like "common carrier" doesn't refer to public accommodation; it refers to the guild-like quality of certain professions back in the day that had "common" skills and associated warranties. If you hired a "common carpenter" you'd have a claim against him in case of malpractice. If you hired a random unskilled laborer, you wouldn't, barring a "special agreement," i.e., a custom contract.
A “common carrier” therefore is a business of carriage that operates in the common fashion with typical terms. Nothing at all like social media.
Blackstone lists professions that are probably as far from guild professions as a profession can be.
Smith, taylor, carpenter, ferrier… what are you talking about?
A fee can be any one of the set that consists of money, barter, and work.
A social medium platform hardly provides communications common carriage for free, and often all three types of fees are levied.
I can tell you know that makes no sense. Your heart’s not in it.
That is not, and has never been, the law in the United States, unlike in England.
US telecommunications regulatory authority over communications common carriage or over telecommunications common carriage comes from the Commerce Clause and not from common law, but state regulatory authority over common carriage comes from the state law that existed at the time of ratification of the Constitution (9th Amendment). This state law is often the British colonial law that the states enforced. The right to sue for libel also seems to contradict the 1st Amendment protection from abridgment of speech but is guaranteed by the 9th Amendment.
The Ninth Amendment does not say whatever you bizarrely think it says.
What does David Nieporent believe the Ninth Amendment means?
David Nieporent believes it means exactly what it says: that one should not assume that, because something isn't expressly listed in the Bill of Rights, it does not exist. It does not itself protect any rights at all. It does not say, to an actual native speaker of English, which I realize you are not, "Anything that was not banned at the time the Ninth Amendment was adopted constitutes a legally enforceable right that must continue to exist in perpetuity."
Common carriage law has a longstanding history of continuous application in the states. This common carriage law was extended to many new technologies, including digital electrical transport of a message. SCOTUS extended state communications common carriage law to interstate commerce by means of the ruling in Western Union Tel. Co. v. Call Pub. Co., 181 U.S. 92, 21 S. Ct. 561 (1901). The Mann-Elkins Act and the Communications Act of 1934 followed.
Most of the state laws from the middle 19th century remain in force and apply to common carriage of a message electrically by wire or by radio. These laws supplement 47 U.S. Code § 202 – Discriminations and preferences. The right to non-discriminatory common carriage of messages existed at the time of the ratification of the Constitution, and this right continues to be recognized to this day.
A social medium platform fits the traditional definition of a telegraph service. There is no legal basis for failing to enforce against a social medium platform either a federal law or a state law that forbids discrimination by a communications common carrier of a message.
The Complaint was dismissed for other reasons, and I have the right to refile. I intend to refile within a month or so. I did not point out in my original Complaint how most, maybe all, states define a telegraph and how this common definition applies to a social medium platform. A statute without a sunset provision continues in force until repealed (or declared unconstitutional).
You have the right to refile in the same way you have the right to drive over to the White House, push your way past the Secret Service, march into the Oval Office, and demand to speak to Joe Biden about your antisemitism, which is to say you do not have any such right at all. Your case was dismissed with prejudice. That means no refiling.
My case was not dismissed with prejudice, as counsel for A Medium Corp agreed. The statutory ability to dismiss specifically provided only for dismissal without prejudice.
It was. Read the opinion, and then read Fed. R. Civ. P. 41.
The complaint was dismissed as an exercise of judicial discretion and on the judge's erroneous determination that there was no valid monetary claim. Fed. R. Civ. P. 41 does not apply.
Is it a case filed in federal court? Is it a civil case? Was it dismissed? Then Fed. R. Civ. P. 41 applies.
The complaint was dismissed for failure to state a claim. The remedy if one thinks that the judge’s decision was “erroneous” is to move for reconsideration, and if that fails, to appeal. Which you know, because you did, and lost. The remedy is not to re-file the same claim and call for a do-over.
I live next to Harvard. Jewish Harvard students agree with me on the genocide that the baby killer nation and Joe Biden are committing in stolen Palestine.
I am a Jew. A Zionist is not a Jew. A Zionist is post Judaism because Zionism murdered Judaism by transforming Judaism into a program of genocide.
Among students, Jews and anti-Zionists vastly outnumber post-Judaism Zionists. The administration is aware of the situation but wants to keep Zionist contributors.
The situation is fundamentally chaotic
1. because there is no middle ground between genocide and anti-genocide and
2. because the acceptance of money from a Zionist like Ackman may expose the university to criminal liability pursuant to 18 U.S. Code § 2339A – Providing material support to terrorists.
§ 2339A directly references 18 U.S. Code § 1091 – Genocide.
You aren’t a Jew.
I wasted years in yeshiva, mesivta, and kollel because Zionism murdered Judaism by transforming Judaism into a program of genocide. I am a Jew, but a Zionist is not.
Jewish Law: A Zionist Cannot Be a Genuine Member of the Community of Israel — A Zionist is Post-Judaism!
The Sages of the Talmud tell us that no Zionist can be a genuine member of the community of Israel. A Rabbinic scholar like RAMBAM affirms this position. A Zionist is at most technically Jewish but cannot ever be a genuine member of the community of Israel because he is proud of genocide and theft. In the Zionist case, genocide includes theft of identity because Palestinians are descendants of Greco-Roman Judeans and of other peoples of Greco-Roman Palestine while no modern Jew is a descendant of a Greco-Roman Judean. Here is Babylonian Talmud Nedarim 20a (12).
§ It is taught in a baraita: “That His fear may be upon your faces” (Exodus 20:17); this is referring to shame, as shame causes one to blush. “That you not sin” (Exodus 20:17) teaches that shame leads to fear of sin. From here the Sages said: It is a good sign in a person that he is one who experiences shame. Others say: Any person who experiences shame will not quickly sin, and conversely, one who does not have the capacity to be shamefaced, it is known that his forefathers did not stand at Mount Sinai.
The closest you’ve ever been to a yeshiva is throwing rocks through the windows.
There is a precisely 0% chance of that happening.
I listened to the oral argument and really cannot guess how SCOTUS will rule.
The Justices were taken aback when NetChoice asserted that Gmail had a First Amendment right to discriminate against a user.
Before Kagan was appointed to SCOTUS, I discussed with her how Section 230 caselaw could be used to negate Title II of the 1964 CRA. She seems to have an ongoing concern about such a possibility.
Thomas, Gorsuch, Alito, Kagan, Barrett, and Sotomayor might support remand. The AT&T legal team, of which I was a member, could have persuaded Roberts, Kavanaugh, and Jackson that Manhattan Community Access Corp. v. Halleck, 139 S. Ct. 1921 (2019), is irrelevant, but I heard no coherent argument on this point.
The Justices would probably like to review a complete record from a trial court and a court of appeals. SCOTUS could remand but keep the injunction in place with respect to penalties.
I am not a fan of either the Florida statute or the Texas statute, but neither statute is completely unprecedented in the history of state regulation of a common carrier of messages.
The previous message was garbled. The comment system failed to accept my edits.
Sotomayor seemed to accept the Netchoice arguments even though the 1st Amendment is mostly irrelevant for a common carrier of messages.
Perhaps Kagan might change Sotomayor’s mind, but Sotomayor seemed adamant.
I don’t think I buy the distinction between email, which the justices nearly all seemed to think could be made non-discriminatory, and a Facebook news feed, which the Justices seemed to think couldn’t be.
The argument is that Facebook “curates” its newsfeed, thereby acquiring editorial rights.
Don't most email clients do the same? Microsoft Outlook has separate "focused" and "other" tabs and a "junk" folder, and it largely gets to decide which emails go in which. Isn't that exactly curating? It's selecting from the universe of possible incoming emails to determine which ones you should be focused on. Isn't that just what Facebook does with posts?
It seems to me that Facebook could meet a non-discrimination requirement fairly straightforwardly, simply by doing what Microsoft does with emails. It could have a "focused" newsfeed, an "other" newsfeed, and a "junk" newsfeed, and provide search tools. That way, no incoming post directed at you is completely intercepted; you can look at one of the other feeds and search through thousands of posts if you want. The "focused" newsfeed could simply be presented more prominently. Most people, most of the time, probably wouldn't bother looking at the other feeds, just as most people, most of the time, don't look at their "other" and "junk" email.
It seems to me that if Facebook simply does what email does, it could comply with a requirement not to outright censor or delete anybody without losing its ability to prioritize and “curate.”
More fundamentally, because email also curates, it strikes me that this purported distinction from email doesn’t really hold.
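To make the three-bucket idea concrete, here is a minimal sketch in Python of the kind of routing described above. The Post fields, the spam threshold, and the category names are hypothetical placeholders, not anything Facebook or Microsoft actually uses; the only point is that prioritization can be separated from outright removal.

    from dataclasses import dataclass, field

    @dataclass
    class Post:
        author: str
        text: str
        followed_by_user: bool   # does the recipient follow this author?
        spam_score: float        # 0.0 (clean) .. 1.0 (almost certainly junk)

    @dataclass
    class Feeds:
        focused: list = field(default_factory=list)
        other: list = field(default_factory=list)
        junk: list = field(default_factory=list)

    def route(post: Post, feeds: Feeds) -> None:
        """Place every incoming post into exactly one feed; nothing is dropped."""
        if post.spam_score > 0.9:
            feeds.junk.append(post)        # demoted, but still retrievable
        elif post.followed_by_user:
            feeds.focused.append(post)     # shown prominently
        else:
            feeds.other.append(post)       # available on demand

    def search(feeds: Feeds, term: str) -> list:
        """Search across all three feeds, so no post is ever fully intercepted."""
        everything = feeds.focused + feeds.other + feeds.junk
        return [p for p in everything if term.lower() in p.text.lower()]

Under a scheme like this, every post lands somewhere searchable; the only editorial act is deciding which feed gets shown first.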
Everybody curates their own feed, that’s the whole point of social media. The company has general oversight of what the users do. It’s more complicated than that, obviously, but that’s the basic set-up.
Here’s the thing: FB doesn’t curate your feed for you, they curate your feed for THEM. Most of the people I know who use facebook would rather that they cut it out and let you just see the people you chose to follow, and that’s it.
That's largely how it worked while it was growing: they didn't aggressively curate people's feeds against their will until they had enough market power that they didn't have to worry about driving off customers.
Why can't Microsoft Outlook just curate your email inbox for their own benefit too? Would that be all they'd need to do to get First Amendment rights to decide what emails to delete, what to move to the top, and what users to kick out as they see fit? It doesn't sound so hard.
If doing the users favors and giving them some control over their own experience is what causes software giants to lose all their rights to control things themselves, why would anybody want to do their users any favors or let them have any control? The Constitution would seem to highly disincentivize such an approach to one's users.
‘why would anybody want to do their users any favors or let them have any control?’
Because that's what users want, and if you don't understand that you're not going to get very far. It's long after a site has established itself by giving users what they want that it starts to degrade that service in favour of more ways to drive revenue, chasing bigger profits while the site stagnates and users drift away. Of course Facebook also does the Fox News thing of keeping a constant stream of right-wing crap directed at its users, meaning there's always that Boomer base, while everyone else stays there because it's the only way of maintaining connections with some people.
FB, and social media in general, does curate its feed when it adds content to a particular message providing additional context. In this way it is a publisher.
If their policies have changed from when they were growing, it is an indication that they think they have power over the marketplace, which is a completely different issue.
For the most part,* email services keep all your email until you delete it. Social media curation is a little bit different. It represents a constantly shifting "window" into the universe of content.
You could build an email system that looked more like social media, in which gmail just chose the top 10 emails it thought you might want to see at any given time, or whatever, with no delete. If you did that, Steve Bannon would start screaming about gmail suppressing conservative emails, Texas would suddenly discover that its law covered gmail, and Google would say that its email curation algorithm was speech.
* Except junk mail. It's sort of surprising the above hasn't already happened with junk mail. All those Trump fundraising grifts going straight into the trash must be pissing someone off in a way that could be blamed on Silicon Valley.
I don't see a difference. Facebook does the same thing. It stores everything in a database too. You can search both your posts and your feed and see posts from years ago. It sometimes suggests reposting old posts as "Facebook memories."
It’s really no different. Just a different interface.
Facebook spent its first years simply listing all the posts in order, just like email. For some years after that, it gave you a choice of its curation or just seeing everything in order. Curation isn’t actually a fundamental property of Facebook. It’s a relatively recent feature. It’s not part of the fundamental system. The underlying system works much like email.
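For what it's worth, the claim that curation sits on top of an email-like store can be illustrated with a toy sketch. The stored posts, the engagement scores, and the scoring rule below are invented for the example and are not Facebook's actual algorithm; the point is only that the same underlying records can be served either in reverse-chronological order, as described above, or through a ranking function that picks a shifting window.

    from datetime import datetime

    # Hypothetical stored posts: (timestamp, author, text, engagement_score)
    store = [
        (datetime(2024, 2, 25, 9, 0),  "alice", "Morning everyone",   3),
        (datetime(2024, 2, 25, 12, 0), "bob",   "Big news today",     42),
        (datetime(2024, 2, 24, 18, 0), "carol", "Photo from my trip", 17),
    ]

    def chronological_feed(posts):
        """The older behavior: every stored post, newest first, nothing hidden."""
        return sorted(posts, key=lambda p: p[0], reverse=True)

    def curated_feed(posts, limit=2):
        """The later behavior: a scoring rule selects a shifting 'window'."""
        return sorted(posts, key=lambda p: p[3], reverse=True)[:limit]

Same store, two presentation layers; on this view the curation is a feature bolted on top, not a property of the underlying system.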
ReaderY writes something obvious to anyone who has ever looked under the hood of an Internet email system or of an Internet social medium platform.
“Facebook spent its first years simply listing all the posts in order, just like email. For some years after that, it gave you a choice of its curation or just seeing everything in order. Curation isn’t actually a fundamental property of Facebook. It’s a relatively recent feature. It’s not part of the fundamental system. The underlying system works much like email.”
I honestly can’t tell if you are this … technologically illiterate.
Simplified: you understand, I hope, that "email" means messages sent using one specific protocol (SMTP) and accessed using another (IMAP or POP), and this is before we even get into underlying architectures. Facebook, by contrast, was (originally) a MySQL database that dynamically created individual web pages based upon the person accessing them, and (while it is much more advanced than this now) it is conceptually similar whether you're accessing it via the web or an app. That dynamic creation has always entailed curation; heck, there was curation (editing, etc.) both in the input and in the display.
Oh, never mind. Whatevs.
From a legal standpoint of analyzing communications common carriage, the underlying protocols are almost entirely irrelevant.
A judge, legislator, president, or lawyer is rarely an engineer who is versed in the technological details.
Gmail and Facebook both provide communications common carriage of messages, except perhaps for packetized voice or packetized video, whose legal status has never been clarified.
Frontend software, which Gmail or Facebook provides to run on a computing device of a user, is a specialized interface to a backend database (except perhaps in the case of the audio and visual services, which I have never analyzed).
“Frontend software, which Gmail or Facebook provides to run on a computing device of a user, is a specialized interface to a backend database (except perhaps in the case of the audio and visual services, which I have never analyzed).”
So, just out of curiosity …. do you have a smartphone? You know, like an iphone?
Do you have it set up to get your gmail there?
Cool. How do you think that works?
Now, can you do the same with facebook? Why or why not?
This has been another in a series of stupid rhetorical questions. I don’t even know why I bother.
In the case both of Gmail and also of Facebook, I use frontend software that either Gmail or Facebook provides. What is loki13 trying to express?
I am trying to express that you aren’t particularly paying attention to actual technology.
Email uses specific protocols (both for the transfer and for the access of messages). This is why you can use a multitude of different clients to access your email. For example, you can use any number of email clients to "display" your email. You can access your Gmail through … whatever. And it will display your email … however.
Messages are messages.
On the other hand, you can’t do that with facebook. The display (whether it’s through the dynamically-created webpage, or through the app) is generated at the time. You cannot just use a third-party client to “get” the data, because it’s not in a stable form like an email message. Instead, it’s always created dynamically (“curated”) from the content at facebook.
Now, this makes a notable difference w/r/t the First Amendment issue at play here. Not so much for the Section 230 issues that other people are discussing. But these fine distinctions actually matter.
I can’t keep re-stating these obvious things; either you understand these basic principles (in which case I shouldn’t have to explain this) or you don’t.
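To make the contrast concrete: Gmail exposes standard IMAP access at imap.gmail.com, so a few lines of stock Python can list recent messages with any generic client, assuming IMAP is enabled on the account and an app password is used (the address and password below are placeholders). Facebook offers no comparable open protocol for the news feed; you get its own apps and pages, or a restricted API, which is the architectural difference being described here.

    import imaplib

    ADDRESS = "someone@gmail.com"           # placeholder account
    APP_PASSWORD = "xxxx xxxx xxxx xxxx"    # placeholder app password

    conn = imaplib.IMAP4_SSL("imap.gmail.com")   # standard IMAP endpoint
    conn.login(ADDRESS, APP_PASSWORD)
    conn.select("INBOX", readonly=True)

    status, data = conn.search(None, "ALL")      # message sequence numbers
    for num in data[0].split()[-5:]:             # the five most recent messages
        status, msg_data = conn.fetch(num, "(BODY[HEADER.FIELDS (SUBJECT FROM)])")
        print(msg_data[0][1].decode(errors="replace"))

    conn.logout()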
While Gmail’s interface is more open than Facebook’s interface, the openness is irrelevant to determining whether Gmail or Facebook provides communications common carriage of a message.
In all cases, the message is transported either via HTTP POST or via HTTP PUT to backend software that puts the message in a backend database, and the message, which is property of the originating user, becomes bailment either of Gmail or of Facebook.
At some point in time, an indication of a new message in a backend database is provided to a user. This user's frontend software makes an HTTP GET request to fetch the message, and the message is transported to this user via an HTTP response.
In either case, the Gmail service or the Facebook service performs the communications common carriage of the message.
The described scenario is a standard scenario of common carriage of a message either in the case of Gmail or in the case of Facebook.
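As a rough sketch of the request/response pattern being described, here is what "post a message, then another user fetches it" looks like at the HTTP layer, using the Python requests library against a purely hypothetical backend (neither Gmail nor Facebook actually exposes endpoints like these; the URLs and JSON shapes are invented for illustration):

    import requests

    BASE = "https://example-service.invalid/api"   # hypothetical backend

    # The originating user submits a message; the backend stores it.
    resp = requests.post(f"{BASE}/messages",
                         json={"to": "user-b", "body": "Hello"},
                         timeout=10)
    message_id = resp.json().get("id")             # hypothetical response shape

    # The recipient's frontend later fetches new messages.
    inbox = requests.get(f"{BASE}/users/user-b/messages",
                         params={"since": "2024-02-26T00:00:00Z"},
                         timeout=10).json()
    for msg in inbox:
        print(msg.get("body"))

Whether that structural similarity at the HTTP layer has any legal significance is, of course, exactly what is being argued over in this thread.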
“In all cases, the message is transported either via HTTP POST or via HTTP PUT”
No, it isn’t.
Again, we can discuss the finer points of SMTP, IMAP, POP, etc.
But you literally do not understand what you’re talking about.
Yes, Gmail has a web interface (as do other mail services) but you can get your email messages through any number of different clients using default protocols.
Seriously, you don’t have a smartphone with a mail app? Or a computer? Is this some kind of gibberish to you?
Weirdly, that’s the exact opposite of the position you have taken during your litigation ‘career,’ where you’ve argued that everyone on the planet except you misunderstands § 230 because only you have the technical background to understand how the Internet works.
I explained Internet technology with low-tech examples. That was probably the correct approach.
I also provided careful grammatical and logical analysis of the relevant statutes. That approach was probably incorrect.
I have developed a gentler approach to informing a judge of the correct grammatical and logical analysis. Maybe this approach will be more successful.
Um, it has.
Frontend software can access the backend Gmail database through imap.gmail.com, pop.gmail.com, or mail.google.com. From the standpoint of communications common carriage law, there is no difference.
Okay, but is there a difference otherwise?
WHY are they accessing?
All during this argument the left shrieks that these laws violate the platforms' right to editorialize:
When a private individual or private entity makes decisions about what to include and what to exclude, that’s protected generally editorial discretion, even though you could view the private entity’s decision to exclude something as “private censorship.”
But that’s the point.
To get the protections, they’re not supposed to editorialize.
To be relieved of liability concerns about what's posted on your site, you have to not curate that site beyond dealing with actual illegality.
And they’re not abiding by that.
That is, of course, 100% wrong. You utterly misunderstand Section 230. It was not a tradeoff, and there was nothing in the law that said or even hinted that “they’re not supposed to editorialize.” That’s just some dumbass idea Trumpkins came up with in recent years because they’re all semiliterate at best.
Sorry, Az is right and you are wrong.
[And when you say "100%" you lose all "cred."]
Did you consider that your own phrasing is an argument against you? Are the semiliterate (and any mentally deficient) types to be free of all semantic deception?
Bet you lost bowel control when Biden asked for a Disinformation Governance Board, doing exactly what you supposedly hate.
Sorry, but I am not a Biden or Trump groupie, as you obviously are.
Just intelligent and schooled in argumentation.