The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Is silencing a few million Americans a form of protected speech?
Episode 474 of the Cyberlaw Podcast
The Supreme Court has granted certiorari to review two big state laws trying to impose limits on social media censorship (or "curation," if you prefer) of platform content. Paul Stephan and I spar over the right outcome, and the likely vote count, in the two cases. One surprise: we both think that the platforms' claim of a first amendment "right to curate" is in tension with their claim that they, uniquely among speakers, should have an immunity for that form of speech.
Maury weighs in to note that the EU is now gearing up to bring social media to heel on the "disinformation" front. That fight will be ugly for Big Tech, he points out, because Europe doesn't care if it puts social media out of business, since it's an American industry. I point out that elites all across the globe have rallied to meet and defeat social media's challenge to their agenda-setting and reality-defining authority. India is aggressively doing the same.
Paul covers another big story in law and technology: The FTC has sued Amazon for antitrust violations – essentially price gouging and tying. Whether the conduct alleged in the complaint is even a bad thing will depend on the facts found by the court, so the case will be hard fought. And, given the FTC's track record, no one should be betting against Amazon.
Nick Weaver explains the dynamic behind the massive MGM and Caesars hacks. As with so many globalized industries, the ransomware supply chain now has Americans in marketing (or social engineering, if you prefer) and foreign technology suppliers. Nick thinks it's time to OFAC 'em all.
Maury explains the latest bulk intercept decision from the European Court of Human Rights. The UK has lost again, but it's not clear how much difference that will make. The ruling says that non-Brits can sue the UK over bulk interception, but the court has already made clear that, with a few legislative tweaks, bulk interception is legal under the European human rights convention.
More bad news for 230 maximalists: it turns out that Facebook can be sued for allowing advertisers to target ads based on age and gender. The platform lost its immunity because it facilitated advertisers' allegedly discriminatory targeting.
The UK competition authorities are seeking greater access to AI's inner workings to assess risks, but Maury Shenk is sure this is part of a light touch on AI regulation that is meant to make the UK a safe European harbor for AI companies.
In a few quick hits and updates:
- I explain the splintered PCLOB report, which endorses 702 renewal, with widely diverging proposals for reform.
- Paul tells us that the Biden Administration plans to bring back "net neutrality" rules. Hey, if we're going to start reviving hits from 2010, can we bring back Ke$ha's "Tik Tok" instead?
- I flag an issue likely to spark a surprisingly bitter clash between the administration and cloud providers – Know Your Customer rules. The government thinks it's irresponsible from a cybersecurity point of view to let randos spin up virtual machines. The industry doesn't think the market will tolerate any other way of doing business.
- Speaking of government-industry clashes, it looks like Apple is caught between (a) Chinese demands that it impose tough new controls on the apps it makes available in its app store and (b) human decency. Maury has the story. And I've got a solution. Apple should just rebrand the totalitarian new controls it must adopt as "app curation." Seems to be working for everyone else.
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
How do Mr. Baker, Prof. Stephan, and the other Federalist Societeers evaluate the Volokh Conspiracy's partisan, viewpoint-driven censorship in this context?
There is a big difference between social media platforms that provide technical facilities for groups and forums, and user-operator group facilitators who use those facilities to post and communicate.
A telephone company is different from folks who set up a conference call. The law gives the latter but not the former choice in who they want on the call and what the contents of the call should be. The Volokh Conspiracy is more like a conference call than a telephone company.
Facebook is also more like a conference call than a telephone company.
"One surprise: we both think that the platforms' claim of a first amendment "right to curate" is in tension with their claim that they, uniquely among speakers, should have an immunity for that form of speech."
I would say direct conflict rather than tension.
There's no conflict. Congress can say that, e.g., SUVs are categorized as cars for safety regulation but as trucks for gas mileage regulation. (This is just a hypothetical.) One might say, "Why shouldn't they be treated as cars for both, or trucks for both?", but the answer is, "Because that's what Congress decided." It's not "trying to have it both ways" or any cliché like that, and there's no conflict. Different legal regimes may apply to different situations.
One may object that the above described hypothetical regulatory scheme shouldn't be that way, but that's just a policy question. It's not an issue of logic or anything like that.
Especially since, in Section 230, it was both Congress's intent and the plain language of the law that the platforms be immunized from liability specifically for making curation decisions.
There are legitimate disagreements over the extent of that immunization and/or the types of takedowns Congress intended, but the idea that because Congress granted immunity therefore the platforms must carry all speech demonstrates a complete failure to understand what's going on with Section 230.
Making certain sorts of curation decisions. Never forget that Section 230 specifies what sort of basis for moderation is permissible.
"(2) Civil liability
No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).[1]"
Such listings with catchall terms generally require that the stuff swept up under the catchall be similar in kind to the enumerated terms. And when you say "in good faith", you're acknowledging that it's possible to show bad faith.
Again, that's not what the word "otherwise" means. By definition, it means in ways different than the things already enumerated. Your argument might be plausible if the word "otherwise" had been omitted.
I’m not pulling this out of my ass, and I expect you know that. Here's a discussion of the principle of ejusdem generis in the context of Section 230. You may recognize one of the authors.
Section 230(c)(2) immunizes platforms’ decisions to block material that they “consider[] to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” The ejusdem generis interpretive canon suggests that “otherwise objectionable” should be read “to embrace only objects similar in nature to those objects enumerated by the preceding specific words.” In this instance, the similarity is that all those words refer to material that was traditionally viewed as regulable in electronic communications media—and was indeed regulated by the Communications Decency Act of 1996, as part of which § 230 was enacted. And restrictions on speech on “the basis of its political or religious content” were not viewed as generally permissible, even in electronic communications.
Yet Truth Social lives on.
No bait and switch in Truth Social's case, because they never started out uncensored. I started signing up, got halfway through the TOS, and canceled.
But, yeah, I don't think Section 230 protects Truth Social's moderation policies. "You knew it when you signed on" might, though.
Section 230 absolutely protects Truth Social or they'd be out of business.
All the social media companies have a TOS that describes their moderation policies.
I don't think it protects them as written. As interpreted? Sure.
I think the interpretation is wrong. So does Volokh.
I'm not well versed in the Unruh Civil Rights Act, nor anti-discrimination law in general, so I acknowledge that I may well be off by a country mile here. But it strikes me as a pretty weak basis for defining targeted advertising as a form of discrimination that is prohibited by law, which seems to be predicated on preventing denial of service on the basis of any of the protected statuses. Targeted advertising isn't about denying anyone service, and I'm sure most/all of the companies in question would be more than happy to take anyone's money in exchange for whatever products/services they offer. TA is about making the most effective use of advertising resources (or at least what the companies think is their most effective use), not denying sales/service to anyone.
My reading of the case is that a middle aged woman went looking on Facebook for a certain type of insurance and couldn't find it because Facebook's algorithm only made those ads available to demographic groups she wasn't in. So it's not that the insurance companies wouldn't take her money; it's that Facebook made it more difficult for her to find those companies even when she made a point of looking for it.
So, assume that almost no members of a certain demographic group are going to be interested in X product. Are advertisers (and Facebook) allowed to make an eminently reasonable business decision not to waste advertising money on people who probably aren't going to buy their product (to the detriment of other advertisers whose product that demographic group will buy). Or, must they ignore economic realities in the interest of equal targeting of the handful of people from that group who are interested in the product?
There is a business necessity exception to discrimination laws, but the courts have construed it extremely narrowly. Unless it's the only way you can stay open, courts probably aren't going to say that it's a business "necessity."
The opinion briefly states on page 12 that Facebook "admits" that its ads are a "service" and so are covered under the state antidiscrimination law. I can't imagine why Facebook would stipulate to this because it seems nonsensical and a cornerstone of the woman's case.
The opinion briefly states on page 12 that Facebook “admits” that its ads are a “service” and so are covered under the state antidiscrimination law. I can’t imagine why Facebook would stipulate to this because it seems nonsensical and a cornerstone of the woman’s case.
I assume you’re referring to this:
“And it does not dispute women and older people were categorically excluded from receiving various insurance ads – an admitted service of Facebook – on its platform.”
Unfortunately I can’t find what was actually “admitted” by FB with regard to ads being a “service”. Paid advertisements are certainly a service provided to those doing the advertising, but characterizing the ads as a “service” to those targeted by the ads seems to me a real stretch. I suppose they may well have referred to ads as such in some context as an attempt to put a positive spin on what to most people are annoyances that are endured as a cost of being able to use the platform’s actual “services”. Similarly, search engines like Google serve up ad content as a by-product of using their actual end-user services, but most people don’t regard those ads as a “service”, and quite commonly employ mechanisms like ad blocker plug-ins (and other means) to suppress them.
Hell, even though Super Bowl halftime commercials have actually become an entertainment tradition I wouldn’t regard them as a “service” to the viewer either.
I was surprised myself that Facebook did not dispute this issue more vigorously. Targeted advertising is different from the Roommates context the court accepted as an on-point comparator. People who go onto a roommates-focused web site seeking roommates are clearly users of the site whether they post advertisements or read them. But it is much less clear that recipients of targeted ads are users of an ad targeting service in anything like the same way.
Yeah, the idea that anyone would "go looking for" specific ads on a social media platform made me blink once or twice. The whole thing seems mildly contrived.
Yeah, but people actually DO use FB for shopping. I know my wife has, and has gotten burned by it on some occasions. (Lots of scammers selling on FB.)
Actively "shopping" by searching FB Marketplace or groups that cater to specific interests for items/services is very different from looking for specific advertisements that are served up dynamically.
I'm not so sure it's that categorically different.
I’m not so sure it’s that categorically different.
It most definitely is. Dynamic ad content is not intended or designed to be searched for or otherwise acquired on-demand. Posts on FB Marketplace, special interest groups, et al offering specific products/services for sale ARE intended to be found and read on-demand. In this context that's an extremely significant difference.
"Dynamic ad content is not intended or designed to be searched for or otherwise acquired on-demand."
You think you could search FB for life insurance, and not get served up ads for life insurance?
There is a difference between only advertising in Ebony magazine (does that still exist?) which means that most white people won't see your ads, and saying, "Oh, you're white? I won't let you see my ad, period."
Agreed, but the nature of Internet advertising is significantly different from the nature of print media advertising.
Krychek_2, I disagree. If they were different in nature then internet advertising would presumably not be cleaning the competitive clocks of print media publishers and broadcast publishers. Each kind would have its own niche according to its own nature.
Experience has been to the contrary. That is pretty good market-based proof that in the ways which count most, internet advertising and traditional media advertising work alike. There are differences in the processes used, but not in the critical role advertising plays to enable publishing business models to succeed.
An important nexus which connects them is that both kinds of publishing activity require audience curation to succeed. It is on that basis especially that press freedom protection for content choices by publishers can best be justified. For most publishers, content choice is the only tool they have to curate audience, and to curate audience is indispensable to most publishing business models.
Of course, the internet introduced a new kind of curation, and an adjusted business model, based on individualized surveillance of audience members, with content delivery choices based on surveillance results for each audience member. That power is unavailable to traditional media, which must curate audiences en masse, via more generalized adjustment to either a mix of content, or more narrowly-tailored content, according to the business model of the publisher.
Individualized content targeting, combined with Section 230, laid the foundation for the internet publishing giantism which has vexed so many, in so many different ways. I insist that it has become an urgent question whether either or both of those factors should be targeted for legal review.
As I have repeated here numerous times, the only safe harbor ever found for press freedom has been public policy to encourage diversity and profusion among a myriad of private publishers. Whatever it was that Congress intended, combination of Section 230 with surveillance-targeted audience curation delivered the opposite of that policy—with a widely recognized result that broader expressive freedom has been jeopardized. Many have recognized that. Many are unhappy about it.
Unfortunately, for political reasons, it appears that reconsideration of either policy question—surveillance-based audience curation, or Section 230—must await a sea change in presently-unrealistic public ambitions for what internet publishing can deliver. People I refer to as internet utopians continue to insist that the internet deliver to each person with access to a keyboard a publishing power greater than any which any publisher on earth has ever enjoyed, or ever could enjoy.
Internet utopians want to be able to publish anonymously, world-wide, without cost, anything they choose to say, without regard for liability, truth, or even fraudulent intent, and with neither prior editing by private publishers nor post-publication take-downs on the internet. If realized, that ambition would preclude every kind of audience curation, and thus rule out most presently-successful publishing business models.
To do that would limit practical means to accomplish publishing to only those which could be supported by subscription payments. Very few publications have ever succeeded on that limited basis.
Press freedom depends critically on publishers' ability to use free-market practices to mobilize resources sufficient to enable publishing activities to break even, or make a profit. What makes the internet utopian ambition self-defeating is that if utopians got policies tailored to maximize their limitless expressive ambitions, those policies would paradoxically dismantle the practical means to accomplish publishing.
A publisher operating a sluice gate to release a flood tide of swill can expect to sell less advertising. A publisher in command of a near-monopolistic expressive behemoth has power to exclude the swill, but only at the cost of public discontent with the extent of its outsized control over the diversity of opinion which can get published.
Thus, material resources to support smaller-scale publishing activities shrink in inverse proportion to utopian expansion of giantistic publishing enterprises. The bulk of those resources are instead ever-more narrowly hoarded by a few publishing giants.
That is a formula for irresistible political demands for content management by government. It is a result which will foreseeably curtail expressive freedom for everyone. Look around. That is a process already evidently at work.
I am not optimistic that internet utopians will soon recognize their errors, or back off from political demands to make publishers do the impossible. Most such utopians have yet to notice that they depend on private publishers at all. Most suppose inaccurately that they are publishers. Of course none of them cares in the slightest about the plight of smaller publishers they neither understand nor notice.
It will probably take a lot of adverse public experience, and a long time, to get utopian misconceptions out of the way of internet publishing policy reform. I expect a lot more damage to the public life of the nation along the way.
If they were different in nature then internet advertising would presumably not be cleaning the competitive clocks of print media publishers and broadcast publishers.
Your conclusion does not even remotely follow from your premise. The two media are absolutely very different in nature, in multiple ways.
I don't know why you insist on continuing your long-winded bloviating about these sorts of technology issues when you're so fundamentally ignorant about them.
SL just likes to wax poetic (in the loosest possible meaning of poetic) about the good old days when he was "entrusted" with gatekeeping as an editor. SL knows what's worthy for you to read.
That's an irony I hadn't thought of before.
It's a notion of irony undermined by my advocacy for editorial choices which vary among a numerous, diverse, mutually competitive cohort of private publishers. All of whom remain protected by press freedom guarantees to choose according to their own lights whatever content they prefer. A cohort, by the way, which everyone would remain free to join without hindrance.
Whatever controlling impulse you think you see in that, it is a less content-controlling policy than to leave all the choices to a few social media giants. Alternatively, it is also less controlling than to advocate government-enforced content rules served up by political happenstance.
Internet utopians are the ones who demand content-specific enforcement rules. They are willing to put an end to press freedom to get them.
In fairness, it must be added that internet utopians have so little practical insight into the implications of their own advocacy that there is little malevolence in most of them. They do, however, create opportunities for genuinely evil manipulations by corrupt power seekers. Well-organized but spurious campaigns of electoral fraud, anti-vax lies, or Qanon craziness are conspicuous among the field marks of internet utopianism.
That's not it. It's your self-indulgent editorial instincts.
Be concise.
There is a difference between only advertising in Ebony magazine (does that still exist?) which means that most white people won’t see your ads, and saying, “Oh, you’re white? I won’t let you see my ad, period.”
That’s not an accurate description either. Targeted advertising, even of the sort in question, isn’t about preventing anyone from seeing anything. It’s about deciding what content to serve up to whom in order to make efficient use of resources (time, bandwidth, screen real estate, users’ attention, etc). It’s not as though there were some mechanism for someone to request a specific ad, with that request being denied. If the resources were unlimited then there would be no reason for anyone to engage in any form of targeted advertising, and they would simply blast everyone with everything.
Most companies advertise their exact same products/services using other media as well (magazines, newspapers, billboards, etc) where they make no effort to prevent anyone from seeing anything…because there’s no reason to, and every reason NOT to.
From the advertiser's side it's the difference between buying X ad spaces in Ebony magazine and buying X ad spots in 2 dozen magazines including Ebony and WASP monthly and whatever else that may or may not have a high proportion of your targeted demographic.
It's not that you won't be able to see it but that they don't shove it at you if you're not likely to be interested. I'm sure Rachel Dolezal got plenty of ads for products targeting the AA community.
From the advertiser’s side it’s the difference between buying X ad spaces in Ebony magazine and buying X ad spots in 2 dozen magazines including Ebony and WASP monthly and whatever else that may or may not have a high proportion of your targeted demographic.
The analogy fails, as I would assume that if you purchase ad space on Facebook, that it can show up anywhere on Facebook. So, you don't have to pay for "2 dozen magazines", even if your target audience is people who would buy Ebony magazine. Apples and bananas.
Sure it is. The motive may be based on economic interests rather than animus towards the excluded group, but the actual action involves the advertiser literally telling Facebook, "Don't show this ad to people who aren't in category X."
I can't wrap my head around the concept that someone would not only go on Facebook to look for an insurance provider, but to look for it on ads.
If a certain type of ads show up on my Facebook feed, it's because I was just searching for that type of thing.
This sounds like the same sort of "fake" plaintiff that got people mad in 303 Creative.
So they must advertise Depends and Pampers equally based on age?
It may be bad policy, but I think California is entitled to do this. Commercial speech is subject to significantly less first amendment protection than political speech. And advertising is quintessential commercial speech. Prohibitions on overtly discriminatory advertising have been upheld in a number of contexts including some federal civil rights laws, such as the fair housing laws.
Even if the advertising involved here is not covered as a public accommodation for federal civil rights purposes, states are entitled to have broader statutes that cover more. So they can cover this if they want to.
I also agree that Facebook’s active participation in the targeting process takes it outside Section 230. I think the complaint’s allegations, e.g. that Facebook requires advertisers to select an age and a gender group to target, make its position outside Section 230 unequivocal. This is an easy case on that score.
Determining exactly where the boundary lies, that is, what activities constitute active participation, will require a harder case with more equivocal facts. The opinion hints that merely accepting user targeting orders, without providing tools that explicitly require, and training that explicitly encourages, targeting based on prohibited characteristics, might stay within Section 230’s safe harbor.
Commercial speech is subject to significantly less first amendment protection than political speech.
Speech protection has nothing to do with anything I said, nor am I sure what it has to do with the case in general as ad content is not at issue.
I also agree that Facebook’s active participation in the targeting process takes it outside section 230.
I don't disagree with that. But again, it's not relevant to the point I was making...which was regarding whether or not targeted advertising constitutes a denial of "service".
"One surprise: we both think that the platforms' claim of a first amendment "right to curate" is in tension with their claim that they, uniquely among speakers, should have an immunity for that form of speech."
Exactly what I've been saying for some time . . . I'm sure people smarter and more knowledgeable than me have thought about it, but I think this is the first time I've seen it mentioned in a blog post here even with all the many posts on the subject area.
There are, it seems, three absolutist positions.
1. If they want to curate, they don't get immunity.
2. If they want immunity, they don't get to curate.
3. They get to curate and they have immunity.
IMO there is NFW that a Supreme Court majority will adopt any one position. I suspect that while the majority will decide that the laws are unconstitutional, no majority will agree on the appropriate level of trade-off between the absolutist positions. I can imagine a complete mess of a decision, e.g., (for amusement's sake) BK, NG and AB have a plurality, JR and EK go together, SS and KJ go together, and CT and SA dissent (states' rights plus FYTW).
The first 2 in the list are the same thing. Back to remedial high school logic with you.
Judge 1: they get immunity, ergo, they get no curation
Judge 2: they get curation, ergo, no immunity.
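The equivalence the remedial-logic jab relies on is just contraposition. As a sketch, writing I for "they get immunity" and C for "they get to curate" (my labels, not anyone's in the thread):

```latex
% Positions 1 and 2 state the same proposition:
% "immunity implies no curation" is the contrapositive of
% "curation implies no immunity", so the two are logically equivalent.
\[
  (I \rightarrow \neg C) \;\equiv\; (C \rightarrow \neg I)
\]
```

Either formulation says only that immunity and curation are mutually exclusive; neither says which one a platform must give up.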
That is the implication of those two absolutist positions. I was thinking in terms of legal outcomes not logic. If I were less than clear, my deepest and most sincere apologies for forcing you to make what is undoubtedly the finest and most well-analysed correction to a post I - and, I suspect, everyone else here - has ever seen.
That is the implication of those two absolutist positions. I was thinking in terms of legal outcomes not logic.
The problem being that those are NOT “two absolutist positions”. They’re simply two different ways of stating the exact same position…which is that immunity and curation are mutually exclusive. Recall that you said you were describing three “absolutist positions”, not three “possible legal outcomes”. Those are very different things.
https://en.wikipedia.org/wiki/Contraposition
🙂
Deleted -- no need to pile on at this point.
I think you guys are being too harsh. I understood SRG2’s absolutist positions to be:
1. Nobody gets curation
2. Nobody gets immunity
3. Everyone gets curation and immunity
where 1 and 2 are the two extremes of the position that “Nobody gets curation and immunity.”
The reason it’s interesting to describe the absolutist positions this way is that it tells you more about the spectrum between those extremes. There’s the anti-curation approach (#1) which attempts to get to a place where social media companies curate less. That’s the right’s current focus (wildly enough for those of us who remember the 80s). Then there’s the anti-immunity approach (#2) which attempts to get to a place where social media companies have less immunity… forcing them to curate more. This (again wildly) is the left’s current focus.
"Then there’s the anti-immunity approach (#2) which attempts to get to a place where social media companies have less immunity… forcing them to curate more."
I think you might be missing what is going on here.
Internet platforms didn't need Section 230 to be immune to liability for user content. They already had that by existing precedent, so long as they weren't moderating.
What you had at the time were the alternatives of,
1) No moderation. Here you're immune because you're just a passive conduit, the posters bear all the liability. It's not your content.
2) Comprehensive moderation. Here you're safe because you're actively removing everything that could result in liability. It's ALL your content.
Section 230 sought to create an intermediate position: Platforms would be able to engage in SOME moderation, of the sort listed, without using their "We're just a passive conduit!" immunity, and being forced to screen everything before it went visible.
Getting rid of Section 230 doesn't force platforms to curate. It forces them to choose between comprehensive curation and not curating at all.
Ideally, I think we ought, at this point, to get rid of Section 230(c)(2)(A), which allows platforms to moderate, and retain Section 230(c)(2)(B), which allows platforms to provide means for the users to moderate what they see.
Section 230 actually contemplated some mix of highly limited platform moderation and user-designated third party filters. Section 230(d) actually requires the platform to inform users of such software!
Regrettably, it didn't require them to cooperate with the function of such software, and the platforms tend to work hard to break such products. After all, they make a lot of their money off pushing unwanted content to the users!
Uh…. your #1 and #2 are exactly SRG2’s #1 and #2.
I think you might be missing what is going on here.
There’s a status quo. Some people want to push social media companies to moderate less, i.e. to be more like your extreme option #1. Some people want to push social media companies to moderate more i.e. towards option #2. Some people (presumably including the social media companies themselves) want to give social media companies more freedom to decide for themselves how much moderation to do (that’s extreme option #3), such as highly curated content while retaining immunity.
I, personally, being a traditional lefty with a libertarian sympathy, prefer somewhere around 90% #3 and 10% #1.
And as I have repeatedly explained to you, the latter is not an option. It is not a viable business model for platforms not to curate, which is why every single commercial one does so.
Parler looked like it could make not curating a viable option. They set out to destroy it.
"They" did? I like how any time you come into contact with a fact that doesn't fit your narrative, you automatically jump to a nefarious elite "they" being behind it.
No, rather, Parler proves DN's point. It predictably became an unviable cesspool to the point where it violated Apple and Google's own app store policies.
I understood SRG2’s absolutist positions to be...
...something quite different from what he actually said.
Remedial logic: there is a fourth position:
4. They don't get immunity and they don't get to curate.
FWIW
cjcoats, INWM. You don't get it, but that formula makes internet platforms economically non-viable. It might better be written:
4. They don't get immunity and they don't get to curate, and you don't get to comment on the internet.
You don’t get it
*snort*
It’s for Congress to adopt any of the 3 positions. As I see it, the constitution permits any of them, and it’s a legislative and not a judicial call which policy position to take. Congress could say if you post any content on a social media site, the social media company owns it and can do what it wants with it. It could say the social media company is merely a common carrier of user-owned content and can’t discriminate in offering carriage. You omitted a 4th position, imposing both common carrier obligations and liability for its customers’ behavior at the same time. While Due Process might prevent that, Congress could do the opposite and impose both absolute control and immunity if it wants to. Bad policy quite possibly, but not unconstitutional.
As always, you ignore the 1A. The government cannot directly tell the New York Times what letters to the editor it must publish, and it can't do it in two steps by declaring the NYT to be a common carrier and then telling it that it is subject to carriage rules as a common carrier.
The government only has limited ability to tell a common carrier of messages which letters not to carry.
The government only has limited ability to tell a phone company that it cannot provide voice common carriage to a person. In the old days, a person who was denied phone service could always use a pay phone.
This discussion isn't about the government telling companies which letters not to carry; it's about the government telling companies which letters to carry.
I think you ignore other positions, like TX/FL's:
4. They are not allowed to curate. Therefore, there's no tradeoff.
TX/FL's position is not that they are not allowed to curate. It’s that there will be limitations on said curating, and disclosure requirements for those decisions.
I agree there are other positions, too. Some that could overlap, or not. For example, one could take the position that there shouldn’t be any universal answer and there is no one size fits all solution, and each state should be able to decide for itself.
Social media companies have a dual role. They do some publishing, but they are also a communications network. I previously gave an analogy of a merger between the New York Times and a phone company to form a combined, hybrid company managed as a single entity, that connects phone calls and also offers news over the phone. I think the law can conceptually ignore this merger and regulate the 2 businesses separately, and doesn’t have to accept the position that just because it’s organized as a single legal entity the constitution requires regarding it as only one business. Nor do I think the First Amendment requires saying that because it provides news over the phone as one of the things it does, then everything it does, including the things that are just like what a traditional phone company does, has to be classified as newspaper activities for First Amendment purposes.
Let me ask this question. Suppose a user uses Facebook to post messages viewable by a group containing 2 users, that user and one other. The communications activity involved is between two people, so I’d say that that’s functionally indistinguishable from a letter, telegram, phone call, email, or pm. It’s the same activity, just being conducted on a different technology platform. Or at least legislatures are entitled to so classify it.
Would you agree that common carrier status can be imposed for this part of Facebook’s business? You have to admit this happens on Facebook. And it’s a substantial part of all the posts on it, even more so if you go beyond just one-on-one and allow for small groups. Or do you insist that the First Amendment gives Facebook a constitutional right to “curate” even the one-on-one communications activities between individual users that occur on its platform?
Facebook already offers a person-to-person communications service; it’s called Facebook Messenger. And while it would be a really stupid policy to treat it as a common carrier, I don’t think the 1A would protect Facebook’s decisions with regards to use of that service. (Section 230 currently does, of course.) But that’s because Messenger is a discrete service, separate from the social media publishing service. That’s a very different situation than your “what if someone posted something on Facebook but only one person read it?” hypothetical.
That’s a very different situation than your “what if someone posted something on Facebook but only one person read it?” hypothetical.
It would be, if that were an accurate paraphrasing of his hypothetical…but it isn’t. Here’s what he actually said:
“Suppose a user uses Facebook to post messages viewable by a group containing 2 users, that user and one other.”
He’s saying that messages to the group are only viewable by 2 users, not that only 1 other person might read messages posted to it. As he said, that makes such messaging functionally indistinguishable in that regard from the other person-to-person communication methods he listed, in that only the 2 people involved in the exchange have access to its contents.
Yeah, I was a member of a FB private group that had only, IIRC, five members. We eventually had to leave FB for MeWe because our private discussions evidently offended some busybody that FB had empowered to look into such private groups.
It wasn't entirely clear even what offended them; they were apparently under no obligation to disclose that. It was enough that they WERE offended, and FB had empowered them to lock our group on their own initiative.
There are other differences between a three-person Facebook group and a three-way text message:
1. The three-person Facebook group's roster could change in the future
2. The three-person Facebook group has advertising alongside the posts
3. The three-person Facebook group generally has a shared view of the content, whereas the three-way text message has a personal view (if you don't know what this means you haven't thought about it enough)
From a technical perspective, which these days matters more than you might imagine, the three-person Facebook group's content is stored at Facebook, whereas the three-way text message is only stored on those three people's devices.
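The shared-view vs. personal-view distinction can be sketched in a few lines. This is a purely illustrative toy (all class and method names are invented, not Facebook's or any carrier's actual design): the group keeps one transcript at the platform that every member, even one added later, reads from; the SMS thread exists only as independent copies on each device.

```python
class ServerGroup:
    """One canonical transcript stored at the platform; the roster can change."""
    def __init__(self, members):
        self.members = set(members)
        self.transcript = []              # the single shared view

    def post(self, sender, text):
        assert sender in self.members
        self.transcript.append((sender, text))

    def view(self, member):
        # Every member, including one added after the fact, sees the same history.
        assert member in self.members
        return list(self.transcript)


class SmsDevice:
    """Each phone keeps its own copy; deleting here affects nobody else."""
    def __init__(self, owner):
        self.owner = owner
        self.thread = []                  # a personal view


def sms_send(sender_device, recipients, text):
    # The carrier forwards the message; it ends up stored only on the endpoints.
    sender_device.thread.append((sender_device.owner, text))
    for r in recipients:
        r.thread.append((sender_device.owner, text))
```

Note the asymmetry: `ServerGroup.view` always reflects one authoritative copy, while clearing one `SmsDevice.thread` leaves every other device's copy untouched.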
"From a technical perspective, which these days matters more than you might imagine, the three-person Facebook group’s content is stored at Facebook, whereas the three-way text message is only stored on those three people’s devices."
This probably explains why, if I access FB from a new device, I can't look at old messages.
Oh, wait, I can. You were saying?
I think you've got FB Messenger confused with phone text messages, which indeed only pass through the phone system, without being retained. (Without having to be retained, anyway; I wouldn't be shocked if the phone companies retain them anyway.)
text messages, which indeed only pass through the phone system, without being retained
Technically (and this is a bit of a nit, to be sure) those messages are stored at one or more hops along the way, but usually very temporarily. If the target user's device is out of service for some reason the message must obviously be stored until that device reestablishes a connection to the network, but it is generally disposed of after the text has been delivered.
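For the curious, that store-and-forward behavior can be sketched in a few lines of Python. This is a hypothetical toy model (the `Carrier` class and its fields are invented for illustration), but it captures the point: the intermediate hop holds a text only while the destination is unreachable, and forgets it once delivered.

```python
from collections import deque


class Carrier:
    """Toy store-and-forward hop: queues a text while the destination
    device is offline, and drops its copy once the text is delivered."""
    def __init__(self):
        self.pending = {}        # recipient -> deque of undelivered texts
        self.online = set()      # recipients currently reachable
        self.delivered = {}      # recipient -> inbox (the endpoint's copy)

    def send(self, recipient, text):
        if recipient in self.online:
            self.delivered.setdefault(recipient, []).append(text)
        else:
            self.pending.setdefault(recipient, deque()).append(text)

    def connect(self, recipient):
        # Device reestablishes a connection: flush queued texts, then
        # the carrier retains nothing for that recipient.
        self.online.add(recipient)
        queue = self.pending.pop(recipient, deque())
        while queue:
            self.delivered.setdefault(recipient, []).append(queue.popleft())
```

The carrier's `pending` store is transient by design; whether real carriers additionally retain copies after delivery is exactly the caveat raised above.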
That's why I said text messages, not FB Messenger messages. The same principle applies to FB Messenger, but it's more complex technically. You can think of it as Facebook storing the messages on your behalf -- usually encrypted.
That’s why I said text messages
You also said "3-way" examples of each, which is not what anyone was talking about.
Wow, what an amazing reading comprehension you have.
Now try to imagine why the author may have made the choices he did.
Now try to imagine why the author may have made the choices he did.
Inability to follow the conversation he's commenting on?
From a technical perspective, which these days matters more than you might imagine, the three-person Facebook group’s content is stored at Facebook, whereas the three-way text message is only stored on those three people’s devices.
It's cute that you feel the need to explain things that you have no understanding of.
Until SCOTUS provides negativing guidance, in order to deny service to blacks, a restaurant could require a reservation by means of a discriminatory social medium platform, which excluded blacks. No one has filed a complaint against such a practice because the practice is hard to detect.
The federal judiciary has used 47 U.S. Code § 230 to strip Americans both of their constitutional rights and also of their protections against discrimination even though the US Internet is substantially government funded. SCOTUS can stop this erosion of rights and protections by holding that the Internet belongs to the class of “Establishments affecting interstate commerce or supported in their activities by State action as places of public accommodation.” The preceding clause does not require the Internet to be a place of public accommodation but only requires the Internet to be like or as a place of public accommodation. The same logic desegregated a public drinking fountain, which is an establishment supported by state action as a place of public accommodation for drinking water. A public drinking fountain is a facility or device (a valve on a water pipe) and not a place that one can enter. A 2023 social medium platform is located within the Internet because it is located by connection and by IP address within the premises of the Internet. Premises includes grounds and appurtenances, which include wiring.
My thoughts.
1. Not to beat an old dead horse, but I'm not convinced that the 14th amendment was intended to incorporate others against the States, thereby instituting government by federal judicial fiat over vast spheres of public policy.
2. Laws like the ones at issue here seem like nothing more than mild, consumer protection act-type regulations, and product regulations. And as such are broadly Constitutional. That's just my initial reaction, a stab in the dark, to what actually seems like a quite complicated subject. Perhaps the same logic could be used to mandate ostensibly "private" censorship in ways I would think are problematic.
I’m sure everybody remembers how we were all silenced before Facebook came along, which is why it is imperative to protect our constitutional right to post anything we want to on social media websites.
I recall Mr. Facebook declining to censor harassing tweets...if they were by politicians, under the argument that The People need to see what their candidates say, if democracy is to have any meaning, and being raked over the coals for it.
Is that what you meant? If facebook didn't censor, it was over the objections of people like you, to the extent of being called to stand tall before Congress and explain himself.
I assume you are full-throated in favor of it, unlike your progressive brethren going back over a century, to Teddy Roosevelt’s time, when he wrote there was a problem with corporations that got so dominant they wielded power properly held only by democracy itself, a progressive virtue lo these generations, until the Section 230 sword of Damocles successfully brought them to heel, censoring as the politicians desired.
If I am wrong, I apologize.
I just block someone that harasses me. Facebook is a common carrier of messages. Facebook is a bailee of a message that Facebook is to transport to a destination user. The message is bailment in a backend server that belongs to Facebook. I don't request Facebook to transport a message of a blocked user to my computing device.
Your comment has nothing to do with this case.
And that justifies Facebook curating content?
The France-vs.-Yahoo contretemps in 2000 made it abundantly clear that platforms should have actively avoided doing business in the EU, and then told the continent that gave the world the Inquisition, the Reign of Terror, and fascism that if it wanted to block content it didn't like, it would have to build its own Berlin Firewall.
My sympathy for the platforms now is strictly limited by the fact that they were too stupid and greedy over the last quarter-century to properly insulate themselves from the blatantly obvious inevitable.
So who determines what is and isn't "disinformation"?
You would need an independent arbiter to determine. This independent body would be responsible for administering the determinations with regard to the misses and the disses, and so forth, of the information out there. A ministry of truth, if you will.
I found this quote from Baker interesting :
“I point out that elites all across the globe have rallied to meet and defeat social media’s challenge to their agenda-setting and reality-defining authority. India is aggressively doing the same.”
He then links two stories from a WaPo series on India these past few days. Those stories show social media campaigns in India that service the ruling party’s anti-Muslim campaign with vitriolic hate and fabricated propaganda.
So who are the “elites” in Baker’s tidy little formulation? If the tech companies, there’s near zero evidence of any rally “to meet and defeat social media’s challenge to their agenda-setting and reality-defining authority” in these stories.
The hate groups ran their ugly content unchallenged. YouTube stars live-streamed as they beat Muslims and nothing was taken down. No one was banned over months of repeated complaints.
People were murdered during riots, so I guess that particular “social media challenge” to the elite’s “reality-defining authority” was a smashing success! The Modi government sure thought so, since they coddled and protected these media stars.
All of which suggests Baker should ease up on the trite formulations or find better links.
I thought it was pretty clear that the ultimate target of the articles was the Modi regime, which has largely displaced the old Indian elite built around the Congress party. The burden of the articles is that the (evil!) Modi regime is bending social media to its will -- i.e., to "meet and defeat social media's challenge to [its] ... reality defining authority." I'm quite confident that in Modi's reality, the vigilantes are defending traditional Hindu values. I think your error comes from assuming that when I say "elites" I always mean social media companies; they are part of a global knowledge economy elite, but they are effectively opposed by hostile elites in Russia, China, India. And they are sufficiently divergent from elite opinion in Europe and Canada that they are being regulated into submission. Clear enough?
It's pretty hilarious to see Baker arguing in the same post that the social media platforms shouldn't have the right to make their own content curation decisions and then also that is terrible that they might be forced by other countries to make decisions that he disagrees with.
Unless removal is legally required, a common carrier like a 2023 social medium platform should not remove messages, of which it is bailee and which are stored in its backend server.
Nothing hilarious about that. It's perfectly consistent.
The platforms shouldn't be allowed to freely decide to censor their users' content, and shouldn't be forced to censor their users' content. Where's the conflict? It's anti-censorship in both cases.
The conflict is that the first argument is that the government, not the private company, should decide what content to distribute, and the second argument is that the private company, not the government, should decide what content to distribute.
The conflict is that the first argument is that the government, not the private company, should decide what content to distribute, and the second argument is that the private company, not the government, should decide what content to distribute.
You're really batting .000 lately. "You can't censor" "government deciding what content to distribute".
You’re really batting .000 lately. “You can’t censor” “government deciding what content to distribute”.
...should be...
You’re really batting .000 lately. “You can’t censor” does not equal “government deciding what content to distribute”.
Unless you're retarded enough to think that social media could function with no moderation at all, then somebody's got to do the moderation, either the government or the company.
Yes, we're 'retarded enough' to think social media could function the way social media used to function.
The whole reason for Section 230 was that social media at the time wasn't functional. In fact, check it out: it's still around, wrapped in Google Groups for some reason...
https://groups.google.com/g/alt.politics.libertarian
Pretty low traffic, because it's terrible. These are the current top "stories" totally unedited:
Bradley K. Sherman --> Re: Hitler had his good points
Harris Slut --> Re: Joe Biden in 1987: 'We (Delawareans) Were on the South's Side in the Civil War'
Harris Slut --> Re: Joe Biden lied about getting an award from George Wallace
Obama has monkeypox --> Re: A White Man Speaks. Blacks Don't Listen And Get Shot.
transgenderism is heterophobia --> Re: 5 child rapists shot dead in CO. Why are we mourning their death?
Kamala Stupid - 400 years of slavery --> Re: Hannity Asks "Is Obama To Blame For Why Biden Is A Failure, Is The Most Hated President In History? Or Should We Blame The Clinton Pedophiles?"
Democrat Run Cities --> Re: Man killed at party on far West Side, police searching for black suspect
Democrat Run Cities --> Re: 2 dead, 5 others hospitalized in drive-by shooting at family BBQ on Southwest Side, police say
Kamala Stupid - 400 years of slavery --> Re: Bill Clinton Impregnated His Own Daughter Chelsea
Obama has monkeypox --> Re: Proof That We Don't Need Democrat Ignorance - Into The Fires With Them!
Not exactly the sort of place anyone with any sense would go.
So, your definition of "not functional" is, "somebody got to say something I didn't like"?
So, your definition of “not functional” is, “somebody got to say something I didn’t like”?
He's just making stuff up as he goes along.
No, it's too low of a signal to noise ratio to be worth consuming.
Unmoderated social media is demonstrably terrible compared to moderated social media. This isn't, like, theoretical. You can try it yourself, or you can observe how much more popular moderated social media is compared to unmoderated.
Therefore, it's retarded to think that the solution is to return to unmoderated social media. Therefore, it's a contradiction to be critical of both government moderation and private moderation of social media (unless retarded).
Have you noticed, maybe, that Reason's comment threads are essentially unmoderated, in the way we're discussing here, and you use them?
I used FB back in the days when it had no moderation beyond "We take down content if we get a court order", and it was quite functional. In fact, all its growth took place before it instituted the current system of censorship.
Part of the reason for that is that this was before they started pushing content on you unasked for. You only saw stuff from people you were following, and if you didn't want to see it, you could just unfollow them. That private group I was a member of? It was invitation only, and nobody in it ever found anything objectionable, and it wasn't moderated until FB decided to give some busybody the key to every private group, and moderation powers over them.
Push systems of social media need moderation, because the user isn't choosing what they see. Pull systems don't.
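The push/pull distinction above can be made concrete with a small sketch. This is an illustrative toy, not any platform's actual feed code (the `Post`/`User` types and the engagement-ranking rule are invented assumptions): a pull feed only ever shows authors the user chose, while a push feed ranks everything by engagement and serves the top items regardless of who the user follows.

```python
from dataclasses import dataclass


@dataclass
class Post:
    author: str
    text: str
    engagement: int        # stand-in for whatever signal a ranker uses


@dataclass
class User:
    follows: set


def pull_feed(user, posts):
    """Pull: the user sees only authors they chose to follow, in posted
    order. Nothing arrives unasked-for, so unfollowing is the filter."""
    return [p for p in posts if p.author in user.follows]


def push_feed(posts, limit=10):
    """Push: the platform ranks everything by engagement and serves the
    top items, whether or not the user asked for those authors."""
    return sorted(posts, key=lambda p: p.engagement, reverse=True)[:limit]
```

In the pull model the user's follow list is the moderation layer; in the push model high-engagement content from strangers lands in the feed unless someone else screens it, which is the asymmetry being argued over.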
I’m glad you’re starting to think about why moderation is necessary. Facebook wanted to become more viral, and the way you do that is by making it easier for content to get around. Push vs. pull is one way of thinking about it, but there are others, such as public vs. private, and unsolicited vs. invite-only, and shared vs. personal. A totally locked down communication system -- pull private invite-only and personal -- is hardly a social network. That’s what FB Messenger is for, and it doesn’t need to be moderated. What makes something a social network is the social part, like going to a bar or a party. Going to a bar or a party is only fun if the venue is doing a good job of keeping out the riff raff.
I explained elsewhere why your private Facebook group is still more like a table at a bar than it is like a telephone call.
As for Reason, it’s relatively unmoderated by today’s standards, but it’s pretty heavily moderated by the standards of when Section 230 was drafted. That is, it too would get wiped out by the repeal of Section 230. Also, yes I use it, but I’m hardly the masses. Please take note of the relative user base sizes of Reason and Facebook.
(Reason actually tried to block this comment, who knows why… I had to do the dummy-comment-and-edit trick to get it to post.)
A common carrier of messages can always refuse to carry a message that is not protected under the 1st Amendment or that is unfit in some opinion-neutral way.
This statute describes a letter that is unfit for the USPS to carry.
18 U.S. Code § 1461 – Mailing obscene or crime-inciting matter
Every obscene, lewd, lascivious, indecent, filthy or vile article, matter, thing, device, or substance; and—
Every article or thing designed, adapted, or intended for producing abortion, or for any indecent or immoral use; and
Every article, instrument, substance, drug, medicine, or thing which is advertised or described in a manner calculated to lead another to use or apply it for producing abortion, or for any indecent or immoral purpose; and
Every written or printed card, letter, circular, book, pamphlet, advertisement, or notice of any kind giving information, directly or indirectly, where, or how, or from whom, or by what means any of such mentioned matters, articles, or things may be obtained or made, or where or by whom any act or operation of any kind for the procuring or producing of abortion will be done or performed, or how or by what means abortion may be produced, whether sealed or unsealed; and
Every paper, writing, advertisement, or representation that any article, instrument, substance, drug, medicine, or thing may, or can, be used or applied for producing abortion, or for any indecent or immoral purpose; and
Every description calculated to induce or incite a person to so use or apply any such article, instrument, substance, drug, medicine, or thing—
Is declared to be nonmailable matter and shall not be conveyed in the mails or delivered from any post office or by any letter carrier.
Whoever knowingly uses the mails for the mailing, carriage in the mails, or delivery of anything declared by this section or section 3001(e) of title 39 to be nonmailable, or knowingly causes to be delivered by mail according to the direction thereon, or at the place at which it is directed to be delivered by the person to whom it is addressed, or knowingly takes any such thing from the mails for the purpose of circulating or disposing thereof, or of aiding in the circulation or disposition thereof, shall be fined under this title or imprisoned not more than five years, or both, for the first such offense, and shall be fined under this title or imprisoned not more than ten years, or both, for each such offense thereafter.
The term “indecent”, as used in this section includes matter of a character tending to incite arson, murder, or assassination.
Everybody knows that the Comstock Act has lost almost all of its potency over time.
Anyway, it doesn't matter. Actually illegal communications that don't even get 1A protection aren't the problem that social media faces.
The Comstock Act was recently discussed in the NY Times. See What to Know About the Comstock Act.
The Comstock Act, which references 39 U.S. Code § 3001, declares materials unfit to be mailed in a non-discriminatory opinion-neutral fashion just as 47 U.S. Code § 230 does: “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”
If a 2023 social medium platform, which is a common carrier of messages, actually removed messages in an opinion-neutral fashion according to § 230, there would be no legal reason to object, but a 2023 social medium platform denies common carriage of messages to large swaths of the public on the ground of politics, race, religion, ethnicity, national origin, and gender even though the Internet is massively subsidized by the US government and a 2023 social medium platform owns very little of the Internet.
The Comstock Act is probably over the top, but it is over the top in a non-discriminatory opinion-neutral fashion. If the Comstock Act specified that LSD could not be mailed, the unfitness of LSD for mail would be non-discriminatory and opinion-neutral. The Comstock Act can be repealed by legislators, who act on behalf of the public. The decisions of a 2023 social medium platform cannot be repealed even though these decisions are illegal according to the CRA, Title 47, and the laws of many states. Every 2023 social medium platform must be prosecuted for its crimes and violations to the fullest extent of the law.
Don't be a dork. They pay for access to the Internet just like everyone else.
I agree the Internet should remain common-carrier-like. That's what Net Neutrality is for. I bet you don't even like Net Neutrality. What a maroon.
You can put whatever you want on the Internet. So can Facebook. So can Truth Social. Really, if we got rid of Section 230, the #1 victims would be right-wing media. To them, "otherwise objectionable" means anything to the left of Trump's right hand.
By the principle of ejusdem generis, otherwise objectionable must be applied in an opinion-neutral fashion because all the listed items are opinion-neutral.
I don't have a problem with net neutrality, but net neutrality is beyond common carriage law. A common carrier may have a plurality of tiers of service.
A 2023 social medium platform uses far more of the Internet than the tiny piece it owns. A 2023 social medium platform also uses all end user computing devices on which a program of the 2023 social medium platform runs.
To make the issue simple — Comcast owns its network and only delivers content it owns or obtains via license from content providers.
A 2023 social medium platform pays for access just as I do. Neither of us owns the US Internet, which is substantially (maybe mostly) owned by the US government and whose operation is substantially (maybe mostly) funded by the US government.
A 2023 social medium platform is a common carrier of messages, and transports everyone’s content. In violation of common carriage laws, a 2023 social medium platform discriminates against huge swaths of the public.
Net Neutrality allows for tiers of service. It just doesn't allow those tiers to be discriminatory based on speaker or content. Very much similar to common carriers.
The fact that social media companies don't own the Internet should tell you all you need to know about how silly it is to apply common-carrier status to them. What network do they own over which they can act as a common carrier? They aren't like Internet or telephone companies at all. You’re paying for service from your ISP, not Facebook. Your ISP is the common carrier. Facebook owes you nothing.
Barter for common carriage and work for common carriage have for centuries been recognized fees for common carriage.
Trucking common carriers don’t own the roads.
An ordinary Internet email service like Gmail and a 2023 social medium platform both meet the traditional definition of a telegraph service: a service that transmits a message electrically by wire or by wireless means.
All three are common carriers of messages and should obey common carriage law.
The whole point of net neutrality is elimination of service tiers.
Net neutrality
Net neutrality is the practice of keeping Internet service providers from offering tiered service and controlling the ability to block out competition by restricting certain pipelines within the Internet. By blocking these pipelines, the provider creates an unfair transfer of packets across the Internet, diminishing the quality of service. Internet service providers seek to discriminate against peer-to-peer (P2P) communication, FTP, online games, and high bandwidth activities, such as video streaming.[10] This practice is called bandwidth throttling.
In 2017, the FCC voted to repeal "Net Neutrality" in its "Restoring Internet Freedom" Order.[11] Fully taking effect on June 11, 2018, the initiative removed barriers of the Title II regulations that had been placed on Internet Service Providers in 2015. Due to the repeal, Internet Service Providers can initiate tiered internet services and are no longer required to treat all online traffic as equal.[12] With the removed regulations, Internet Service Providers can move forward with creating tiered internet services. Proponents of the repeal argue that tiered internet service will allow for increased innovation on the internet. Detractors argue that it will create anti-consumer measures that crowd out emerging businesses and create a bundling system that is not within consumer preference.[13]
That's what I said. Net Neutrality allows for tiers of service like 500Mb vs. 1Gb, but it doesn't allow for discriminatory tiers of service like the Right Wing Media Package that blocks MSNBC and the Washington Post.
I am not sure that you understand that an ISP operates at the IP layer and does not deal with the messages that are being carried in documents within the HTTP data stream.
I do understand that. But even IP packets are able to be identified as being from a particular source like MSNBC or Truth Social. Net Neutrality is all about not treating IP packets differently depending on their source / destination / etc.
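The "tiers yes, source-based discrimination no" rule can be sketched as a tiny classifier check. This is a hypothetical illustration (the field names are invented, and real traffic shaping is far more involved): a shaping rule counts as neutral here if it never keys on who the traffic is from or to, only on the subscriber's purchased speed tier.

```python
# Fields a shaping rule might read. Names are invented for this sketch.
TIER_FIELDS = {"subscriber_tier"}                     # e.g. 500Mb vs 1Gb plans
SOURCE_FIELDS = {"src_ip", "dst_ip", "src_host", "dst_host"}


def rule_is_neutral(fields):
    """A rule is 'neutral' in this toy model iff it never inspects the
    source or destination of the packets, only the subscriber's tier.
    Throttling everyone on the cheap plan equally: neutral.
    Throttling packets because they come from MSNBC: not neutral."""
    return not (set(fields) & SOURCE_FIELDS)
```

So a 500Mb-vs-1Gb speed tier passes the check, while a "Right Wing Media Package" that reads `src_host` to block particular outlets fails it, which is the distinction being drawn above.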
Unless you’re retarded enough to think that social media could function with no moderation at all, then somebody’s got to do the moderation, either the government or the company.
Many instances of social media have functioned that way for decades. Are you too retarded to be aware of that? That said, you clearly didn't understand what you're responding to...as is so often the case.
I mean, actually, it does. Setting aside your misuse of the word ‘censor,’ “You can’t censor [sic]” = “You must distribute.”
And the government telling a company what it must distribute = the government deciding what content to distribute.
Well, I'll admit there's some overlap between "You can't censor" and "government deciding what content to distribute", but it's nowhere near 100%. Maybe 10%, at most.
For one thing, "you can't censor" excludes the "you can't distribute that" part of "government deciding what content to distribute".
It's like saying "you can't rape" is the government deciding for you who your sex partners will be. Sure, some minor overlap, but conceptually it's entirely different.
JUST AS I SAID two days ago..
Search engine optimization or political bias? Biden challengers nearly nonexistent in Google results
Biden and token Democratic candidate Marianne Williamson place higher than Republicans in searches for GOP candidates. SEO expert offers nonpartisan explanations.
So, logically, SOMEONE is interfering. Someone