The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Common Carrier Status as Quid Pro Quo for § 230(c)(1) Immunity
The final substantive excerpt from the First Amendment section of my Social Media as Common Carriers? article (see also this thread); recall that the key First Amendment arguments are in this post, which relies on the PruneYard, Turner, and Rumsfeld precedents, and in this one, which explains why Miami Herald, Hurley, and the various other "common theme" precedents don't apply.
[* * *]
For all these reasons, I think Congress could categorically treat platforms as common carriers, at least as to their hosting function. But Congress could also constitutionally give platforms two options: (1) Be common carriers like phone companies, immune from liability but also required to host all viewpoints, or (2) be distributors like bookstores, free to pick and choose what to host but subject to liability (at least on a notice-and-takedown basis).[264]
Indeed, this would just return the law to something close to the pre-§ 230 common-law rules, as modified by the First Amendment protections that have developed starting with New York Times v. Sullivan. Historically, American law has divided operators of communications systems into three categories—publishers, distributors, and conduits—and has set up different standards of liability for each.
| Class | Examples | Rights | Liability |
| --- | --- | --- | --- |
| Publishers | Newspapers, magazines, and broadcasters, which themselves print or broadcast material submitted by others (or by their own employees).[265] | Free to choose what to include. | Fully liable for material they include, as for their own speech.[266] |
| Distributors | Bookstores, newsstands, and libraries, which distribute copies printed by others;[267] likely also property owners on whose property people might post or write things.[268] | Free to choose what to distribute. | Liable on what we might today call a notice-and-takedown basis.[269] |
| Conduits | Telephone companies, cities on whose sidewalks people might demonstrate, or broadcasters running candidate ads that they are required to carry. | Generally legally forbidden from choosing what goes on their property.[270] | Not liable at all.[271] |
The two pre-§ 230 Internet libel decisions, Cubby v. Compuserve, Inc. and Stratton Oakmont, Inc. v. Prodigy Services Co., seemed to roughly distinguish services that edited, which were treated as publishers and were thus potentially legally liable for others' material that they didn't edit out, from services that didn't edit, which were treated as distributors and were thus at least largely immune.[272] The Cubby / Stratton Oakmont results encouraged providers not to restrict speech in their chat rooms and other public-facing portions of their service: If they were to try to block or remove vulgarity, pornography, or even material that they were persuaded was libelous or threatening, they would lose their protection as distributors, and would become potentially strictly liable for material their users posted. At the time, that looked like it would be ruinous for many service providers (perhaps for all but the unimaginably wealthy, will-surely-dominate-forever America Online).
Congress, then, chose to reject the Cubby / Stratton Oakmont approach, and instead deliberately provided conduit immunity to all entities—including those that, unlike traditional conduits, could and did select what user content to keep up. It did so precisely to encourage (though without requiring) conduits to block or remove certain speech, by removing a disincentive (loss of immunity) that would have otherwise come with such selectivity. It gave them this flexibility regardless of how the entities exercised this function. And Congress chose conduit liability (categorical immunity) rather than distributor liability (notice-and-takedown immunity). Online sites thus had the best of both worlds: the selection power of distributors, but the liability of conduits.[273]
But that was Congress's decision in 1996. It's not set in stone, and not constitutionally mandated. Publisher and distributor liability is consistent with the First Amendment, despite the chilling effect it might sometimes create, so long as it complies with the New York Times v. Sullivan / Gertz v. Robert Welch rules immunizing honest mistakes (or sometimes just reasonable mistakes).
If Congress wants to return to a world where social media immunity for libel (and other torts) turns on whether social media platforms act as common carriers, it can do so.[274] I'm not at all sure that's the right approach as a policy matter, especially since immunity from tort liability has helped many small and midsized online platforms thrive, and those platforms' editorial power has often been valuable. But it does reflect an important practical reality: Immunity from tort liability is what also helped the major platforms become so big, powerful, and capable of influencing public debate—thus helping create the problems to which common carrier status might be a solution.[275]
This sort of conditional immunity might also apply to platforms' recommendation function and conversation functions. (Again, I'm not certain that such conditional immunity is a good idea, but here I'm speaking only of the constitutional question.) As I mentioned, platforms have a First Amendment right to choose what to recommend, just as newspapers have such a right. But it doesn't follow that they have complete First Amendment immunity from (say) defamation liability when they recommend something that proves to be libelous.[276] And Congress could offer this extra immunity, but only for recommendations or conversation management actions that operate in viewpoint-neutral ways.
Some such conditional benefits may violate the unconstitutional conditions doctrine. But I tentatively think that a narrowly crafted and viewpoint-neutral condition such as this one would be constitutional.
When the government offers speakers a benefit not mandated by the First Amendment, it can generally attach conditions to that benefit, so long as the speakers remain free to say what they want when they aren't using the benefit. Thus, for instance, the government may provide that charitable contributions to advocacy groups are tax-deductible, but only if those groups don't use such contributions for electioneering, so long as the groups remain free to engage in electioneering using non-tax-deductible funds.[277] The government may set forth "conditions that define the limits of the government spending program—those that specify the activities [the government] wants to subsidize." [278] The conditions become unconstitutional only if they "seek to leverage funding to regulate speech outside the contours of the program itself."[279]
I think the same would apply to subsidies that come in the form of financially valuable immunity from tort law claims, and not just to subsidies in the form of financial grants or tax deductions. Congress can't say to a platform, "so long as any of your actions are immune under § 230(c)(1), you must accept restrictions on all your First-Amendment-protected activities, even if those don't take advantage of the immunity": That would be unconstitutionally "leverag[ing] funding to regulate speech outside the contours of the program." But it can say, for instance, "if you want § 230(c)(1) immunity for your decisions about which material to recommend—so that if you recommend something defamatory, for which you might be held liable under standard defamation principles, you would be immunized from such liability—that will only be available for recommendations that you make in a viewpoint-neutral way."
[264] Cf. Tushnet, supra note 36.
[265] Restatement (Second) of Torts § 578 (1976).
[266] A newspaper, for instance, can be sued for libel in a letter to the editor. See id. In practice, there is some difference between liability for third parties' speech and for the company's own—a newspaper would be more likely to have the culpable mental state for the words of its own employees. But, still, publishers are pretty broadly liable, and have to be careful in choosing what to publish.
[267] Id. § 581.
[268] Hellar v. Bianco, 111 Cal. App. 2d 424, 425 (1952) (dealing with "libelous matter [written on a tavern restroom wall] indicating that appellant was an unchaste woman who indulged in illicit amatory ventures"; "[t]he writer recommended that anyone interested should call a stated telephone number, which was the number of the telephone in appellant's home, and 'ask for Isabelle,' that being appellant's given name").
[269] A bookstore, for instance, isn't expected to have vetted every book on its shelves, the way that a newspaper is expected to vet the letters it publishes. But once it learns that a specific book includes some specific likely libelous material, it can be liable if it keeps selling the book. See Restatement (Second) of Torts § 581; Janklow v. Viking Press, 378 N.W.2d 875 (S.D. 1985).
[270] Phone companies are common carriers. Cities are generally barred by the First Amendment from controlling what demonstrators say. Federal law requires broadcasters to carry candidate ads unedited. 47 U.S.C. § 315(a). New York's high court adopted conduit immunity in 1999 for e-mail systems, even apart from § 230; though e-mail services are not legally forbidden from excluding certain messages based on viewpoint, the court stressed that their "role in transmitting e-mail is akin to that of a telephone company, which one neither wants nor expects to superintend the content of its subscribers' conversations." Lunney v. Prodigy Servs., 94 N.Y.2d 242, 249 (1999).
[271] For instance, even if a phone company learns that an answering machine contains a libelous outgoing message, and does nothing to cancel the owner's phone service, it can't be sued for libel. See Anderson v. N.Y. Telephone Co., 35 N.Y.2d 746 (1974); Restatement (Second) of Torts § 612. Likewise, a city can't be liable for defamatory material on signs that someone carried on city sidewalks (even though a bar could be liable once it learned of libelous material on its walls), and a broadcaster can't be liable for defamatory material in a candidate ad. Farmers Educ. & Coop. Union v. WDAY, Inc., 360 U.S. 525, 528–29, 531 (1958).
[272] Cubby held that ISPs (such as Compuserve) were entitled to be treated as distributors, not publishers. Stratton Oakmont held that only ISPs that exercised no editorial control over publicly posted materials (such as Compuserve) would get distributor treatment, and service providers that exercised some editorial control (such as Prodigy)—for instance, by removing vulgarities—would be treated as publishers. Cubby v. Compuserve, 776 F. Supp. 135, 140–41 (S.D.N.Y. 1991); Stratton Oakmont, Inc. v. Prodigy Servs. Co., No. 31063/94, 1995 WL 323710 (N.Y. Sup. Ct. May 24, 1995).
Neither considered the possibility that an ISP could actually be neither a publisher nor a distributor but a categorically immune conduit, perhaps because at the time only entities that had a legal obligation not to edit were treated as conduits. And Stratton Oakmont's conclusion that Prodigy was a publisher because it "actively utiliz[ed] technology and manpower to delete notes from its computer bulletin boards on the basis of offensiveness and 'bad taste,'" is inconsistent with the fact that distributors (such as bookstores and libraries) have always had the power to select what to distribute (and what to stop distributing), without losing the limited protection that distributor liability offered. Id. at *4.
[273] See Tushnet, supra note 45, at 1010 n.100 (distinguishing "equal" "treatment of privilege and liability" for traditional conduits like telephone companies from "disjoined" treatment for Internet intermediaries).
[274] Nor is there an unconstitutional conditions problem here, just as there wasn't with the Solomon Amendment in Rumsfeld v. FAIR: As I've argued above, Congress has the power to make social media platforms common carriers generally, so it has the power to make them common carriers if they want the benefits of immunity.
[275] Cf. Balkin, supra note 91, at __ (suggesting that intermediary immunity for social media platforms could be conditioned "on accepting obligations of due process and transparency," though concluding that it would be a bad idea to require viewpoint-neutral moderation). But see Eric Goldman, Want to Kill Facebook and Google? Preserving Section 230 Is Your Best Hope, Balkinization, June 3, 2019 (arguing that § 230(c)(1) immunity, by "reduc[ing] barriers to enter the online republishing marketplace," "keeps the marketplace open for the next generation of startups that hope to usurp the current Internet giants").
[276] Of course, they would have the immunity secured by New York Times Co. v. Sullivan, 376 U.S. 254, 282–83 (1964), Curtis Publishing Co. v. Butts, 388 U.S. 130 (1967), and Gertz v. Robert Welch, Inc., 418 U.S. 323 (1974), for speech about public officials, public figures, and private figures (on matters of public concern), respectively. But even the strongest such immunity allows liability for speech conveyed with knowledge that it's false or likely false; it doesn't amount to categorical immunity regardless of mental state.
[277] Regan v. Taxation with Representation of Wash., 461 U.S. 540 (1983). This is generally done by allowing groups to set up two separate affiliates, a 501(c)(3) that takes tax-deductible contributions but doesn't electioneer, and a 501(c)(4) that electioneers (so long as supporting or opposing candidates isn't its primary activity) but doesn't take tax-deductible contributions. See IRS, Operational Requirements: Endorsing Candidates for Public Office (updated Dec. 8, 2020), https://perma.cc/ZWX4-X8L6.
[278] Agency for Int'l Dev. v. All. for Open Soc'y Int'l, 570 U.S. 205, 214 (2013).
[279] Id. at 214–15.
Editor's Note: We invite comments and request that they be civil and on-topic. We do not moderate or assume any responsibility for comments, which are owned by the readers who post them. Comments do not represent the views of Reason.com or Reason Foundation. We reserve the right to delete any comment for any reason at any time. Comments may only be edited within 5 minutes of posting. Report abuses.
The tech billionaires kowtow to get richer in the Chinese Commie market. They are servants of the Chinese Commie Party. They should be seized in civil forfeiture for the billions of federal crimes committed on their platforms, and for the millions of federal crimes they commit themselves. Any support for these platforms is support for their puppet master, the Chinese Commie Party.
You're an actual crazy person and an awful authoritarian.
Hi, Queenie. Not a lawyer. Have a blessed day, Hon.
What about a sort of "quasi-common-carrier" status for social media (slightly *short* of full 1st-amendment protection...), whereby there's a strict ban on BIAS while still allowing moderation where there's a good faith concern about libel/threats/etc.? The key to stopping the slippery slope there, I think, is to have any such exceptions apply only to things that essentially EVERYONE* is against (to guard against ideologically-driven redefinitions of things like "unsafe," "bullying," etc.).
*The burden here would be on the moderator to show that "no significant portion of the population" currently disagrees with the standard being applied. (Again, this is not the same as 1A protection, which would apply elsewhere...)
I still think there's room to interpret the existing 230 this way, but if not, it seems like an option for Congress to amend it...
"any such exceptions apply only to things that essentially EVERYONE* is against "
This is kind of the most modern conservative thing EVER (because they assume they would never fall into the category, FYIGM).
No, I fully recognize that the culture could indeed shift so much that some conservative positions/ideas become not only the minority - but rather truly fringe.
But that's what it should *require*, at a minimum, to even *consider* moderating content. If a view is held by 40% of the population today, for example, it simply cannot be treated as "beyond the pale" or the like. But if you can get it down to 1%, then maybe you can moderate it in certain contexts. This forces people to *persuade* (rather than coerce) their way to broad consensus.
I'm an absolutist where the 1st amendment applies, but in places where it doesn't, there should at least be a broad consensus against something before it's even on the table to consider restricting it. (This doesn't mean everything that goes against societal consensus should be censored, necessarily - just that it should be a *prerequisite* to even *considering* it. I actually think in most cases even truly fringe views should be tolerated.)
If I have to be an outcast because you convince almost everyone I'm wrong/bad/etc., that's one thing. But I shouldn't have to be an outcast because you convinced half the population to silence/scare the other half...
That's classic conservative privilege thinking: stick it to those minorities, but a lot of people agree with me so how dare you mess with me!
Classic left wing mind reading, provides no proof.
"That’s classic conservative privilege thinking: stick it to those minorities, but a lot of people agree with me so how dare you mess with me!"
Please see the first sentence in the comment to which you were responding (explicitly acknowledging that conservatives could be on the losing end of a consensus at times).
Overall, that comment fully recognized that anyone on any side of any issue could find themselves being on the wrong side of a societal consensus. I was simply saying that only a consensus should have the power to impose anything (in ANY direction).
"silence/scare the other half…"
How in the world can one half 'silence/scare the other half?'
Minorities have been 'scared/silenced' for the bulk of US history but *now* that *one half* of the nation thinks the other half doesn't want anything to do with them it's a *crisis?*
Could you do anything more to make CRT seem reasonable?
Your "shoe-is-on-the-other-foot" argument is simply tiresome. How hard is it to acknowledge that two wrongs don't make a right? Nobody should be / should ever have been scared or silenced for expressing views that dissent from the left OR right.
Whether it's CRT or any other issue...make your case, and then ALLOW DISSENT - especially if you don't even have a consensus.
I'm happy to learn about CRT and hear anyone out on that or any other progressive concept. But in the end, if I don't agree, we should simply go our separate ways and you don't get to ruin my life over it... Simple as that.
I think what Cheerio is arguing for is to limit liability protection to things which are not protected by the First Amendment (or perhaps the moderator has a good-faith belief isn’t protected.)
So Nazi paraphernalia, ordinary pornography, and depictions of violence couldn't be moderated out. But obscenity, libel, threats, crime solicitation, and other recognized First Amendment exceptions could be.
It might have difficulties as in fact there isn’t true consensus in society on many First Amendment issues. (What’s obscene? Is everything that makes someone feel threatened a threat?) And I think Chimera may have mischaracterized it for that reason. The scope of First Amendment protection comes from court cases, not public consensus. The addition of reasonable belief or a similar safe harbor would add a penumbra and create additional issues perhaps analogous to qualified immunity.
But I think it’s a coherent position.
I see your point about how messy it could get...I'd say the burden would always be on the moderator to argue it *wasn't* protected, but they'd get a generous safe harbor for good-faith belief (yes, perhaps analogous to qualified immunity).
Simply using established 1st-amendment exceptions would certainly be a way to go, which would be a bit "cleaner" than adding other standards beyond that.
But I could also envision potentially some additional level of moderation - it gets trickier to define it in a way that avoids "bias," so it might not be workable, but if it is possible, I think it would have to be based on whether there is any "significant portion" of the population in opposition.
Admittedly, even a consensus-based standard for moderation wouldn't allow moderating of things like ordinary pornography...since of course, a significant portion of the population supports that...
Go on and try to make a legal definition for "ideology". I dare you.
Much of the whole problem started with one word: objectionable.
Well, for a definition of "non-political" or some such in this context, how about something like "shared by the entire population spanning the full spectrum of political viewpoints, such that no significant portion disagrees..."
Your standard is based on how popular the opinion expressed is?
Is this not the standard now? Enforcement of rules based on what one group believes is correct based on popularity within their in-group? I am uncertain about regulation, but the conversation is interesting.
No; the standard now is that each ICS decides for itself what content it wants to host.
have any such exceptions apply only to things that essentially EVERYONE* is against
First Amendment doctrine has taught us there ain't no such thing. We protect speech that doesn't deserve protection because there is no consensus on where to draw the line. See for example, the Nazis marching in Skokie.
That is I think the original intention of 230. It contains a list of things that can be moderated, opinion is not on that list. As someone else said 230 could be interpreted in that way similar to the recent decisions on the CDC Eviction Ban where a list of actions provided the courts a guide on the limits of the power.
I'm also amazed that people are not aware of the wacko left's conspiracies or the anti-black conspiracies that populate places like twitter.
You could interpret (c)(2) to be limited things which are obscene, lewd, etc., but nonetheless 230 as enacted gives broad liability protection in (c)(1) irrespective of (c)(2).
Does President George W. Bush's 92% approval rating in late 2001 count as close enough to unanimous to ban criticism?
Then I thought they were coming for people like me, so I wrote an article.
Perhaps the Left should consider saying something like, "hey, glad more folks on the Right finally see a role for government in protecting the vulnerable; now let's agree to make/keep it mutual for all..."
Ditto for nondiscrimination of conservatives in employment and public accommodation; tolerance for those with traditional/religious views who don't impose them on anyone (e.g., Chik-fil-A); etc., etc.
Nobody should want to replace one sort of censorship with another, or one sort of bigotry with another, etc.
The "shoe is now on the other foot, ha-ha" lines of argument from the Left may be sort of fair, as an initial reaction, but then after enjoying your "I told you so" moment...you need to actually consider the merits and be part of a stable long-term solution...
Lol, it only took the awful persecution of a billionaire President for the Right to suddenly recognize the plight of the 'vulnerable!'
It's totally non-serious. Trump campaigned on getting people fired and excluded for their speech (Kap), and conservatives *loved it* when the guy who criticized Chik-fil-a got canned. There's ZERO principles here. That's why for years and years Volokh just 'didn't see' any merit to the critics of corporate power, but once it involved people he was sympathetic to he ran and wrote a motivated reasoning article. It's not about any principles with the Right, it's about protecting people like them, period.
I don't know how sympathetic Volokh was to Trump, or whether he agreed with Trump's various "cancellation" efforts...
But I wasn't claiming Trump or his supporters are principled. I was simply suggesting what a productive response from others might be...
I doubt anything productive can come from such tribalism.
I do appreciate your thoughtful responses though. kudos.
BTW-I'm not 'left.' I live in Maryland and vote and volunteer for Hogan. I want lower taxes, less welfare state, am opposed to affirmative action (it just sets people against each other), think most environmental measures are self-kneecapping, etc. I just find most of the post-Trump thinking on the Right to be appalling.
"I may not be a liberal but I'm more liberal than you"
Heard it many times, that's how we got Romney and then Trump.
Congress created the setting in which media companies could become monstrously large, and they can unset that if they choose. Perhaps there's a self-interested reason why these media companies have chosen a particular political side.
"Social media immunity for libel (and other torts)" doesn't exist.
If Twitter or Facebook libel you or commit some other tort against you, they are and always have been liable.
All Section 230 does is notice that Twitter and Facebook aren't the publishers or authors of, and therefore aren't liable for any torts associated with, user-created content.
Congress could equally easily "notice" that those companies choose to provide the megaphones that those speakers use.
Prof. Volokh:
Let's say your basic point is valid: that they can force companies to choose between common carrier/immunity or moderation/liability. (I think it probably is correct as a legal matter, regardless of the policy wisdom.)
But would it be permissible for there to be an intermediate choice, which is the one most current GOP critics of § 230 seem to think is or should be the law: to moderate the icky stuff w/o liability but to be liable if they moderate "too much"?
(n.b. "Icky stuff" is a technical term for non-obscene porn, advocacy of illegal activity that fits within the Brandenburg safe harbor, spam, non-actionable harassment, etc.)
IOW, can the government say, "You can be liable if you delete some protected speech, but not other protected speech"? Would that run into an R.A.V. problem? Or a Reed v. Town of Gilbert one?
"But would it be permissible for there to be an intermediate choice, which is the one most current GOP critics of § 230 seem to think is or should be the law: to moderate the icky stuff w/o liability but to be liable if they moderate “too much”? "
Seems to me that if EV's analysis is correct that an intermediate choice would be constitutional (or even no choice - all social media sites are common carriers or they all are distributors).
An important distinction is that under the distributor model the liability would be for things that they allow to be posted, and not for what they disallow. i.e. they could be sued for what they post, but not for what they ban. So, this does not seem to address the main complaint, and I'm skeptical that it's possible to craft viewpoint-neutral legislation that does.
I think Eugene argued the regulations on moderation need only be viewpoint neutral, and not content neutral. I'm not sure if that standard permits regulations which permit censorship of only icky stuff.
I read him as saying something different: that if the government wanted to expressly regulate moderation ("Websites are forbidden from deleting content,"), the regulations themselves must be viewpoint neutral, not content neutral. I'm asking whether the government can condition liability on the website being content neutral.
He only proposed that government condition liability on the website being viewpoint neutral.
This sort of conditional immunity might also apply to platforms' recommendation function and conversation functions.
Is that the case even for sites that have coherent speech products such as the Volokh Conspiracy? Does the First Amendment allow Congress to condition the Conspiracy's liability protection for what is said in comments on the Conspiracy managing comments in a viewpoint neutral fashion?
A social media site where anyone and everyone is able to post anything they want with the host having no ability to "curate" will devolve into trolling, flame wars, spam, shitposts, and generally be useless. Anyone remember usenet? Or did you read Tim Miller's recent piece about gettr?
Now, if the social media sites choose to be a "distributor" they are free to pick and choose what to distribute, how does that address the main complaint I'm hearing from the right? i.e. that they are CENSORING conservative viewpoints. They are still free to refuse to carry anything they don't want to; is the motivation here that they can be sued for what they do carry as some kind of revenge?
Furthermore, sites that use predictive algorithms to create a custom experience for each user can just limit the reach of disfavored material. "We didn't censor anything. We didn't delete your post. We just decided that none of our users wanted to see your crap, so we didn't show it to anyone."
This post shows that all policies have problems.
The current policy (broad liability protection and the ability to freely censor) has resulted in mega-providers that don't censor enough crap (the complaint on the left) and censor viewpoints they don't like (the complaint on the right).
Common carrier status (broad liability protection with very limited censorship) could result in the death of social media in a sea of trolls.
Publisher/Distributor status (liability exposure and the ability to freely censor) could result in the death of social media because of the prohibitive cost of policing all the crap you could be sued for.
"could result in the death of social media in a sea of trolls."
And the world would be better off.
Not a lawyer, but here's my $0.02:
Apply category based on monthly active users or market cap.
The bigger you are, the more you are obligated to behave as a common carrier.
Let the small niche sites operate as they do today. If I can't sing the praises of because it's against the rules, it's not a big deal.
However, the Facebooks and Googles of the world would become common carriers.
The key to their continued success would be changing their platforms to build tools that empower individual users to set their own keyword/user/etc blacklists and whitelists. The biggest platforms couldn't silence people directly, but they could allow their users to mute/block things they don't want to see so the users' feeds aren't overrun with junk.
I would appreciate any legal critiques of if/if not such an approach would be legal.
Common carriers tended to have more physical interaction with customers than social media companies. If you made harassing calls in the landline days the phone company could tell the police where to find the phone.
I wonder how Prof. Volokh's ideological colleagues at the Chamber of Commerce (for starters) are viewing a proposal that liability limitations be granted by gradations determined by compliance with certain legislative preferences.
The party that seems destined to be more popular among the American electorate for the next 20 or 30 years could probably have some fun with that one.
Would someone be kind enough to suggest a libertarian blog that might address these issues?
Well you finally acknowledged the GOP will be more popular, especially after the Biden disaster.
Our electorate improves each day:
Less rural
Less religious
More diverse (less White)
Less backward
Less bigoted
Each day, older conservatives take their stale, ugly thinking to the grave and are replaced in our electorate by younger, better, more diverse Americans. This is the natural course and the American way.
Maintaining a viable national electoral coalition for Republican candidates -- and a platform of insularity, intolerance, ignorance, and superstition -- is becoming increasingly difficult. How often has a Republican presidential candidate won the popular vote in recent decades?
The liberal-libertarian mainstream has been shaping American progress for at least 50 years. This seems destined to continue throughout the foreseeable future. There just aren't enough clingers left to enable the current Republican Party to avoid electoral ruin, particularly in the advanced, educated, skilled, successful states and communities. The American culture war is not over but it has been settled.
Carry on, clingers. As best you can manage under these circumstances.
Has anyone done a study of voting regulations in all 50 states that is available online somewhere?
I've been watching the overheated rhetoric on voter suppression for the last few years, and most of it seems overblown. The specific provisions that are cited often seem either innocuous or of little import. Calling these changes Jim Crow 2.0 seems a nearly Trumpian level of exaggeration.
When I first started voting, you generally had to go to the registrar's office to register and prove your residence; on election day you had to go to the poll in your precinct to cast a vote unless you had an acceptable reason to vote absentee.
Over the years, early voting has been implemented (also generally requiring you to appear at a specific place to vote), absentee voting has been expanded, and in my state, at least, anyone over 65 and some others can sign up for voting by mail and be sent a mail-in ballot before every election.
I watched the debate over the Georgia law, and a few provisions seemed overly restrictive. One, closing early voting on Sundays, was particularly criticized because black churches had begun organizing "Souls to the Polls" drives, transporting people to vote after church. That provision was dropped. Another provision that got a lot of publicity prohibited providing food or drink to people waiting in line to vote. It was often portrayed as banning giving water to people, but it was more than that, and it easily could have been used to circumvent the anti-electioneering provisions of the law designed to prevent, among other things, intimidation of voters. Imagine a bunch of white guys in tactical gear handing out water in a black precinct.
This morning, as I was getting ready, I saw Mitch McConnell speaking on this very topic and citing some examples of what the Texas law was going to do. I imagine his points were accurate, but certainly there could have been objectionable provisions in the bill he didn't mention.
Sorry folks, much as you want to, you cannot turn publishing into not-publishing with fanciful legal labels. If the activity the company practices is publishing activity, no matter the label, what results will be publications, not common carriage, or any other nonsense you can dream up. Make Facebook a common carrier, and all you do is free it to publish with impunity libels, vaccine hoaxes, and 47 other kinds of socially destructive pure crap. What will you get in exchange for that? Public pressure on government to step in and censor the press.
Can anyone explain forthrightly why that seems wise?
Sorry SL, much as you want to, you cannot turn hosting into publishing with wishful thinking.
Shit, did Reason just publish that? Nope, it was me. Sorry to confuse you SL.
Wow. How wrong can you be? It means the social media companies will be liable for what THEY post, not what anyone on the platform posts. As it is, they're deleting true information, saying it's been fact-checked false, which is de facto libel against the person who posted the info. But those of us who have had it done to us cannot afford lawyers.
They also have deleted obvious humor, stating it's false and dangerous information that others might follow. And Facebook has put people in Facebook jail over it. For example, my post (I paraphrased it from memory before, but just looked it up): "I heard alcohol and sunlight kill corona so if you see me naked and drunk in the yard I am doing medical research." Right, dangerous misinformation. Hundreds of thousands of people are going to see that and get drunk and dance naked in their yard. Uh-huh. By the way, there's no appeal from such idiotic tomfoolery; they have spoken, and that's that: it's DANGEROUS MISINFORMATION! And don't you forget it, peon!
The Biden White House outright admits they are telling Facebook/Twitter et al. what to remove. What does it take to get this First Amendment violation recognized?
https://twitter.com/W7VOA/status/1415723828785917956
I sense you have scant future in journalism (editing) or law.
Although you could be destined for stardom at OAN, Instapundit, Gateway Pundit, Stormfront, Newsmax, Fox, RedState, FreeRepublic . . .
Ah, just what I'd expect from a narcissistic peacock: attacking the messenger instead of the message. It will be interesting to see your cognitive dissonance once the ACLU decries the Biden Administration for its censorship.
https://twitter.com/ggreenwald/status/1415789191074652164
So now that the Biden administration has enlisted the social media platforms as agents of the state (https://legalinsurrection.com/2021/07/psaki-admits-biden-admin-colluding-with-big-tech-to-flag-disinformation/), how would that affect this discussion?
The idea that Section 230 is what allowed those platforms to become big doesn't have any factual support. Take copyright liability, which is not covered by Section 230 and is subject to a notice-and-takedown procedure. Did it prevent YouTube from becoming dominant? On the contrary, YouTube colluded with the record companies to create a stricter mechanism than the DMCA procedures, and this helped them cement their position, in part because smaller sites became exposed to comparatively more liability, DMCA notwithstanding.
In the last few days there have been several reports that state governments have been directing social media companies to eliminate or censor content. And now there are reports that the CDC is telling social media companies to take down any "misinformation" about covid, meaning anything that disagrees with the CDC.
And Twitter and Facebook are doing so.
They're common carriers, doing the government's bidding. That's dangerous. Throughout history, people calling for censorship have never been the good guys. And the government, in cahoots with the social media companies, is doing exactly that.
They're not common carriers, and they're not doing the government's bidding. There have been no reports that state governments have been "directing" social media to do anything or that the CDC is "telling" social media companies to do anything.
Yes, they do: they let intelligence agencies and governments call out things for censorship and write the approved messaging for them to post on their platforms.
Intelligence agencies and governments and random commenters on the Volokh Conspiracy can all "call out things for censorship." I don't think it's a good look for the government to be doing that, but unless there's compulsion involved it's just speech.