Rand Paul Turns Against Section 230, Citing YouTube Video Accusing Him of Taking Money From Maduro
After Google refused to take down a video of him, the Kentucky senator suggested upending the legal framework undergirding the internet for three decades.
Sen. Rand Paul (R–Ky.) has long been one of the few refreshing voices out of Washington, D.C., when it comes to free speech, including free speech on social media and elsewhere in the digital realm. He was one of just two senators to vote against FOSTA, the law that started the trend of trying to carve out Section 230 exceptions for every bad thing.
As readers of this newsletter know, Section 230 has been fundamental to the development and flourishing of free speech online.
Now, Paul has changed his mind about it. "I will pursue legislation toward" ending Section 230's protections for tech companies, the Kentucky Republican wrote in the New York Post this week.
A Section 230 Refresher
For those who need a refresher (if not, skip to the next section): Section 230 of the Communications Act protects tech companies and their users from frivolous lawsuits and spurious charges. It says: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." If someone else is speaking (or posting), they—not you or Instagram or Reddit or YouTube or any other entity—are legally liable for that speech.
Politicians, state attorneys general, and people looking to make money off tech companies that they blame for their troubles hate Section 230. It stops the latter—including all sorts of ambulance-chasing lawyers—from getting big payouts from tech companies over speech for which these companies merely served as an unwitting conduit. It stops attorneys general from making good on big, splashy lawsuits framed around fighting the latest moral panic. And it prevents politicians from being more in control of what we all can say online.
If a politician doesn't like something that someone has posted about them on the internet, doesn't like their Google search results, or resents the fact that people can speak freely—and sometimes falsely—about political issues, it would be a lot easier to censor whatever it is that's irking them in a world without Section 230. They could simply go to a tech platform hosting that speech and threaten a lawsuit if it was not removed.
Tech platforms might very well win many such lawsuits on First Amendment grounds, if they had the resources to fight them and chose that route. But it would be a lot easier, in many cases, for them to simply give in and do politicians' bidding, rather than fight a protracted lawsuit. Section 230 gives them the impetus to resist and ensures that any suits that go forward will likely be over quickly, in their favor.
But here's the key: Section 230 does not stop authorities from punishing companies for violations of federal law, and it does not stop anyone from going after the speakers of any illegal content. If someone posts a true threat on Facebook, they can still be hauled in for questioning about it. If someone uses Google ads to commit fraud, they're not magically exempted from punishment for that fraud. And if someone posts a defamatory rant about you on X, you can still sue them for that rant.
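For the technically inclined, that division of liability can be reduced to a toy sketch. This is an illustration of the rule just described, not legal advice, and every name in it is invented:

    # Toy model of Section 230's liability split, as described above.
    # Illustration only, not legal advice; all names are hypothetical.

    def liable_parties(post_author, platform, content_is_platforms_own):
        """Return who can be pursued over a piece of online content."""
        parties = [post_author]  # the speaker always answers for their own speech
        if content_is_platforms_own:
            # Section 230 covers only "information provided by another
            # information content provider." A platform's own speech, and
            # its own violations of federal law, are never shielded.
            parties.append(platform)
        return parties

    # A defamatory rant posted by a user on X: sue the user, not X.
    print(liable_parties("ranting_user", "X", content_is_platforms_own=False))
    # -> ['ranting_user']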
Enter Rand Paul
Paul is understandably upset about a video about him that has been posted to YouTube.
The video "is a calculated lie, falsely accusing me of taking money from Venezuela's Nicolás Maduro," wrote Paul in the Post. "It is, of course, a ludicrous accusation, but paid trolls are daily spreading this lie across the internet. This untruth is essentially an accusation of treason, which then leads the internet mob to call for my death."
In short, it is "a provably false defamatory video," according to Paul.
Defamation is a crime. And Paul is not without options for addressing it.
For one, he can use his own speech—as he is doing—to counter the false information. Paul has his own channels of communication, huge audiences on social media, and relatively easy access to mainstream media outlets, like the Post. He is not without options for correcting the record here.
He could also threaten to sue the creator(s) of the video. Sometimes, the threat of legal action is enough to get results—and in fact, that's what happened here.
"The individual who posted the video finally took down the video under threat of legal penalty," per Paul's Post op-ed.
If the mere threat hadn't worked, Paul could have actually sued the creator(s) of the video. If he successfully proved the video was defamatory, a court would order the creator to remove it.
Imagine the Alternatives
If assistance from Google—YouTube's parent company—is needed to comply with a court order of removal, the company is "prepared to comply," per its webpage on defamation and YouTube policies.
But Google does not simply take down content because someone tells them it's defamatory. Can you imagine the abuse of process that would encourage? No one could say anything negative, partisan, or controversial at all without someone alleging it was defamatory and company policy requiring the content to come down.
Nor is Google "in a position to adjudicate the truthfulness of postings." It is not a court of law, and it should not be asked to act as one. That would be unfair to content creators and distributors, and to anyone alleging defamation, for one thing. It would also be unfair to Google, or to any other tech company in the content facilitation business.
If tech companies were expected to adjudicate defamation claims themselves, they would have to employ a massive army of staffers for said purposes—which would, at the very least, be a big drain on their resources. Smaller companies, unable to afford it, would likely just take down most of what was flagged to them. Meanwhile, at bigger places, corporate culture would surely necessitate an overly cautious approach, leading to all sorts of content being taken down despite being true and/or legal.
Companies are burdened. Creators are burned (and it seems fair to assume repercussions from the platforms might extend beyond simply removing one video). Way more legal content winds up suppressed than defamatory content gets caught.
Some might grant that this process wouldn't work for take-down requests generally, but insist that a politician's or public figure's claims of defamation should carry extra weight.
But politicians and public figures have the most to lose from content that is not defamatory but does portray them or their associates in a negative light.
I'm not suggesting Paul is lying here, but I am certain that some politicians and public figures would lie. And giving them carte blanche to demand take-downs of "defamatory" content would create a de facto censorship process.
This would especially harm people with unpopular views or those trying to expose abuses of power.
Libel Isn't An Easy Call
Some people imagine that it's easy to determine whether speech about a public figure is libelous or slanderous. It goes like this: Is it true or not? The end.
But being false doesn't automatically make a statement defamatory.
The false information must also harm the reputation of the subject. The speaker must have at least acted "negligently" in spreading it. And when it comes to information about a public official or other public figure, there must also be "actual malice" involved.
Someone who repeats false information about a public figure without doing so maliciously is not liable for defamation.
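Reduced to a rough checklist, the elements look something like this. (A toy sketch only; real cases turn on facts and jurisdiction, and the function name and parameters here are invented for illustration.)

    # Toy checklist of the defamation elements described above.

    def could_be_defamation(statement_is_false, harms_reputation,
                            speaker_was_negligent, subject_is_public_figure,
                            actual_malice):
        if not (statement_is_false and harms_reputation):
            return False              # true or harmless statements are out
        if subject_is_public_figure:
            # "Actual malice": knowing falsity or reckless disregard
            # for the truth (New York Times v. Sullivan).
            return actual_malice
        return speaker_was_negligent  # private figures: negligence can suffice

    # Repeating a false claim about a senator in good faith:
    print(could_be_defamation(True, True, speaker_was_negligent=True,
                              subject_is_public_figure=True,
                              actual_malice=False))
    # -> False: no actual malice, no defamation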
That's one of the reasons we can't expect tech companies to adjudicate defamation allegations fairly. It's hard enough for courts to determine whether someone spread false information in good faith, thinking it was true, or spread it with the intent to cause damage. How are tech companies—with no power to compel testimony or anything like that—supposed to make that call?
There are good reasons for this malice plank. Without it, it would be very hard for journalists or anyone else to report on the news, discuss political developments, and so on. Because, inevitably, doing so means getting the story wrong some of the time. Responsible journalists and content creators remove false content when they learn the story has changed and issue corrections. This allows both free speech and the truth to flourish. Punishing people for accidentally saying or publishing anything incorrect about public figures would not.
Besides, not all statements are so easy to categorize. Many are matters of opinion more than fact. Some rely on hyperbole. Some are parody. Again, it doesn't seem quite within a tech company's wheelhouse to parse the finer parts of speech in this way. Again, an expectation that digital platforms simply remove defamatory content when asked would wind up with a lot of First Amendment–protected speech being squelched.
What Changed?
I can understand why Paul is frustrated about what happened with this video and the fact that he couldn't get it taken down quickly. He writes of it resulting in "threats on [his] life" and condemns "the arrogance of Google to continue hosting this defamatory video."
Arrogance? Paul usually seems to realize the enormous amount of discretion that goes into determining whether speech is legal or not, the difficulties that this presents for tech companies, and the reasons it's better to err on the side of allowing more speech, not less. His past statements suggest he understands the purpose of Section 230 and the reasons it is important.
"I always believed this protection is necessary for the functioning of the internet," he wrote in the Post. "The courts have largely ruled that Section 230 shields social-media companies from being sued for content created by third parties. If someone calls you a creep on the internet, you can call them a creep right back, but you can't sue the social-media site for hosting that insult."
This suggests Paul knew how this played out for defamation claims. If someone defames you on the internet, you can push back on the claim or you can sue the speaker, but you can't sue the social media site for hosting that speaker.
What changed? Someone posted something particularly inflammatory about Paul, something Paul calls defamation against him. Suddenly, the normal rules governing such interactions shouldn't apply?
Someone posted false information about Paul and then removed that information without Google doing anything. To me, that does not suggest we need to upend the legal framework that's been undergirding the internet for three decades. In fact, some might even say that it's evidence the current system works—or that it is, in any event, better than alternatives.
Right Diagnosis, Wrong Cure
It's easy, in discussions like these, to lose sight of the larger picture. But let's keep in mind that what Paul is suggesting—revising or removing Section 230 protections when it comes to defamation—isn't just going to affect the Rand Pauls and Googles of the world.
It also isn't likely to stop people from spreading false information about public figures.
After all, the internet is expansive and global. Even if Paul could successfully sue Google over YouTube's hosting of this video, who's to say the creator wouldn't simply post it elsewhere? Perhaps many places. Perhaps on a platform hosted overseas.
Even if the creator stopped spreading it, or Google stopped hosting it, would its removal from YouTube really stop any false information from spreading? It's already out there. People would still be talking about it on X, on Facebook, etc. Would Paul sue all of those people and those platforms, too? And would that even work? Presumably, many of the people re-sharing this information believe it is true and thus are not spreading it maliciously.
So, we don't actually stop the information from spreading. Meanwhile, however, tech companies—especially smaller ones—start freaking out. They start pulling down more and more content. It becomes difficult for all but the most mainstream media outlets and voices to report on politicians or share news about public figures, even when their information is truthful. It becomes difficult for all but the biggest tech companies to risk allowing this sort of speech from independent content creators. And it becomes easier for powerful people to control and shape the public discourse.
Paul spends a lot of his Post op-ed critiquing the way Google handled content during the COVID-19 pandemic. And he's right: All too often, tech companies—including but not limited to Google—suppressed statements that countered conventional wisdom about the pandemic, even if these were merely matters of opinion or unsettled scientific claims that would later turn out to be right. They did this while simultaneously receiving pressure from the Biden administration to clamp down on COVID misinformation. And that's not the only time we've seen tech platforms act under the influence of politicians or powerful activists, or apply their rules in what seem like selective ways.
But Paul is wrong that removing Section 230 would somehow remedy this situation. Without Section 230, heavy-handed content moderation policies and political "jawboning" would get so much worse.
More Sex & Tech News
OpenAI launches age prediction for ChatGPT:
We're rolling out age prediction on ChatGPT to help determine when an account likely belongs to someone under 18, so we can apply the right experience and safeguards for teens.
Adults who are incorrectly placed in the teen experience can confirm their age in Settings > Account.…
— OpenAI (@OpenAI) January 20, 2026
FTC appeals Meta monopoly ruling: The Federal Trade Commission has decided to further drag out its doomed case against Facebook and Instagram parent company Meta. The agency announced on Tuesday that it is appealing a November 2025 ruling in Meta's favor in an antitrust case that a federal court had already dismissed once.
Brothel doctors meet historical fiction: The Double Standard Sporting House "centers on a pioneering female doctor treating women, primarily sex workers, in the maelstrom of post-Civil War New York City," per a review in The Arts Fuse. It sounds interesting, if you can forgive a sex-trafficking subplot that seems a bit like "white slavery" panic.
AI for good: Doctors are using AI to develop treatments for rare diseases like the one—DeSanto-Shinawi syndrome—that plagued baby Jorie, with some focusing on babies in intensive care and pediatric patients.
British dad "horrified" by politicians capitalizing on daughter's death to push social media regulation: "The father of a teenager who took her own life after viewing suicide and self-harm content online has said banning under-16s from social media would be wrong," reports the BBC. "Ian Russell, the father of Molly Russell, told BBC's Newscast that the government should enforce existing laws rather than 'implementing sledgehammer techniques like bans.'"
Some evidence suggests that "Gen-Z college interns and recent graduates are the first workers being affected by AI," and this "is surprising," writes Jeffrey Selingo at New York magazine. "Historically, major technological shifts favored junior employees because they tend to make less money and be more skilled and enthusiastic in embracing new tools." This time, however:
… a study from Stanford's Digital Economy Lab in August showed something quite different. Employment for Gen-Z college graduates in AI-affected jobs, such as software development and customer support, has fallen by 16 percent since late 2022. Meanwhile, more experienced workers in the same occupations aren't feeling the same impact (at least not yet), said Erik Brynjolfsson, an economist who led the study. Why the difference? Senior workers, he told me, "learn tricks of the trade that maybe never get written down," which allow them to better compete with AI than those new to a field who lack such "tacit knowledge." For instance, that practical know-how might allow senior workers to better understand when AI is hallucinating, wrong, or simply not useful.
Selingo goes on to look at how colleges and universities are adapting, presenting a not-totally-pessimistic picture.
"Are You Dead?" A popular app in China will let your friends know that you're not dead.
Willie Nelson "I woke up still not dead again today"
https://www.youtube.com/watch?v=J34esa_aJxc
phish "Waking Up Dead"
https://www.youtube.com/watch?v=03yho-gRKvE&list=RD03yho-gRKvE&start_radio=1
Bring out your dead. Monty Python
https://youtu.be/zEmfsmasjVA?si=AdgRbCKioTaTBrEi
§230 is only "necessary" because the government makes it impossible for customers to sue social media companies for violating their own terms of service.
There's a concept called something like "fit for purpose" or "implied merchantability" or ... something. IANAL. But if a company advertises a product as doing something it doesn't do, like milk fortified with vitamins, that's fraud.
So it is with social media companies who advertise themselves as the way to stay connected with family, friends, and customers. Their terms of service are so broad and nebulous that no one really knows what they mean, other than "we can fuck you over and you can't touch us." And they are right, because the government makes it hideously slow and expensive to sue the bastards when they close an account without warning. Businesses can't afford to spend millions of dollars and years of legal quibbling to contest the closing.
So the government graciously allows §230 to exist so the plebs think they have a way to force companies to behave. Except it doesn't work any better, and it doesn't let people force social media companies to obey their own terms of service.
Fuck §230.
impossible for customers to sue social media companies for violating their own terms of service.
Hard, maybe, but definitely not impossible. Alex Berenson did just that. Problem is, a lot of high-profile lawsuits—even Alex's, in one element—like PragerU's, relied on the 1st A.
the legal framework undergirding the internet for three decades.
el oh el.
Edit: If you'd have told me when I read this article that the same publication would call this law the legal framework "undergirding the internet" I'd have thought you were taking crazy pills.
upending the legal framework undergirding the internet for three decades
Upending three decades of undergirding legal framework? Why, that's almost twice as long as we've known that female rapists with congenital penises were, by law, real women!
> "Are You Dead?" A popular app in China will let your friends know that you're not dead.
How many accounts is the CCP operating?
Let me check with the DNC.
Friends =/= people you know.
If you need an app to tell people you know that you aren't dead...
Well of course YouTube had no problem censoring content for the Biden administration and the Covidians. As Reason explained it's all good because they're a private company and fuck you that's why. That censorship arguably led to old people being summarily executed with tubes jammed down their throats among other crimes against humanity. But that was the old YouTube. Now they are champions of free speech and are too principled to censor obviously false libelous bullshit. I don't know the libertarian answer to this conundrum. What I do know is that Reason has zero credibility on the subject.
Arrogance? Paul usually seems to realize the enormous amount of discretion that goes into determining whether speech is legal or not, the difficulties that this presents for tech companies, and the reasons it's better to err on the side of allowing more speech, not less.
"Err on the side of allowing more speech" you say?
*thinks*
*ctrl-f COVID 1/2*
If I stretch my mind back to ancient times and "the twitter files" the 'erring' was always 'less speech', government-demanded censorship-- which you tards always screeched 'SEXSHUN TOOO THIRTEE' in nearly every instance when it was pointed out.
It's funny how the 'erring' always aligns with one ideology and one ideology only: "erring" for more censorship when it's citizen journalists/conservative youtubers, and "erring" for less censorship when it's not.

Ambulance chasers are lawyers who follow ambulances carrying the injured in hopes of a payout after an injury that may or may not have been someone's fault. Implicitly, they're ambulance chasers rather than respectable defenders of the law because they aren't very good at determining fault.
Borrowing a turn of phrase, Section 230 defenders are good at one thing: watching the government break your legs, offer you a crutch, and say, "See? Aren't you glad they defended you from the ambulance chasers?"
>But Google does not simply take down content because someone tells them it's defamatory. Can you imagine the abuse of process that would encourage? No one could say anything negative, partisan, or controversial at all without someone alleging it was defamatory and company policy requiring the content to come down.
Except they extend extra 'courtesies' to those on the Progressive Left.
This is a main reason why I don't trust politicians. I only trust the Constitution because once the ink was dry it was set in stone and not easily "disappeared." If we could magically "disappear" all of the unconstitutional acts of Congress; and all of the unconstitutional executive orders; and all of the unconstitutional legislation from the bench; then ninety percent of the Code of Federal Regulations and Supreme Court rulings would disappear and liberty would prevail again, at least for a while. Rand was ineffective at promoting liberty before his defection and will remain ineffective after.
You seem to only believe in a constitution as interpreted by you, based on your posting history, often in defiance of precedent and common understanding. Basically forcing everyone to abide by your and only your interpretation. Seems pretty authoritarian, bypassing that whole republic-and-courts part of it.
So we can list you in the "Living Document" column along with the progressivist socialists now? Jeesh ...
What defection?
For those who need a refresher (if not, skip to the next section):
Ctrl+f 'cubb': 0 results
Ctrl+f 'prodi': 0 results
For those who need a refresher on what S230 is really about, read up on Cubby v. CompuServe and Stratton Oakmont v. Prodigy. Then read about how Reps. Ron Wyden and Chris Cox passed the CDA specifically to reverse both decisions. Then read how SCOTUS, in 1997, the year after the CDA passed, struck down all of the CDA except S230, in Reno v. ACLU.
Then, decide for yourself whether free speech on the internet existed or look around and see if S230 thwarted the trolls and ambulance chasers.
TL;DR: FOSTA carving out parts of S230 for every bad thing is just the layer below S230, which was carved out of the CDA, a law against every bad thing, which itself was a carve-out of the 1A. Which, the 1A says Congress shouldn't be making any of these laws one way or the other to begin with.
Section 230 is a 'loophole' that allows these companies to have their cake and eat it too, and it's not the first amendment of the internet.
The first amendment is.
Why is it different when Google lets someone say statement A on their platform that's incendiary, and then shuts down someone else saying statement B that's incendiary? Why, it's because they structured their TOS to allow subset A and disallow subset B!
Is that not editorializing? Do they not ban users or block content that they consider against their TOS? Last I checked, they certainly do. Yet they are still 'protected' by section 230 despite that viewpoint consideration.
Paul is wrong here, I think, since he got the video taken down by 'normal' means but it doesn't change the fact that section 230 is absurd either.
I don't get how Wikipedia is not specifically responsible for EVERYTHING on their website, given that every article is written by "Wikipedia" officially. They should be liable for every single word.
As you say though, the first amendment. It's supposed to prevent the govt from interfering with speech AND the press. Google is the press, youtube is their paper, videos are people's speech. Whether they want to editorialize or not, what people want to post or not, is supposed to be free from govt.
And any interference is ultimately impossible. The best they can do is disincentivize through punishment. To be effective at that would end up with tyranny. So, the cure here is stay out of it. If people want to shitpost, and people want to believe shitposts, and Paul loses an election because of it, too bad.
230 is quite simple, actually, despite your lame efforts to spin complexity into it. It says that the bulletin board owner cannot be held liable for the content of the pieces of paper community members tack up on that bulletin board. If someone posts lies about you on that bulletin board, you have to find out who they are, prove in court that they knew or should have known that they were lies, and that they intentionally lied about you to damage you. Rand Paul should have to do that if he doesn't like being lied about. The owner of the bulletin board should not be dragged through a knothole just because they are an easy target with deep pockets. Shame on you!
"Ian Russell, the father of Molly Russell, told BBC's Newscast that the government should enforce existing laws rather than 'implementing sledgehammer techniques like bans.'"
Like this one?
Defamation is a crime. And Paul is not without options for addressing it.
ENB, did an intern steal your byline? You should know better than that. No one goes to jail/prison (yet) for saying bad things about a senator, "provably false" or not. Possibly get sued, but it's never a crime.
"Crime, civil tort, what is the difference, really?"
This is the part where you find the poster (i.e., the actual villain) to go after instead of trying to use a 3rd-party medium as your "easy" button. Which is precisely what Section 230 is about.
Sorry. I'm going to have to go with Reason on this one. Not even Rand Paul, as much as I like him, gets the "easy" button of controlled media.
(i.e., the actual villain)
TL;DR: They don't have membership cards or elect a board, but they do have public web handles and UUIDs conveniently provided to them by an organization that may or may not be aware of their activities.
Which is precisely what Section 230 is about.
False. Not even the notorious "McDonald's coffee case" abides this as a precept. Specifically because said personal McDonald's employee rightly uses "I was abiding by the terms of the contract" as a defense.
Suit and discovery should *start* with the personal creator of the statement, but nowhere else in tort or criminal law do we explicitly say, "Only one person can be responsible and if other parties don't want to name that person or otherwise comply, too bad." up front.
FFS, Rupert Stadler was charged for fraud, false certification, and criminal advertising associated with the defeat devices dodging the Clean Air Act, the idea that private citizens or even AGs suing tech companies up to and including CEOs would be catastrophically disruptive to anything is both uniquely and patently retarded. Even at that, rein in the EPA and hem in the AGs, not immunize preferred/compliant manufacturers (who, in the case of S230, happen to be infringing on peoples' free speech on behalf of the government).
Do I think that Google is complicit in the defamation of Rand Paul? Absolutely not. Do I think that means that no computer service provider is in any way complicit in similar crimes or civil infractions to the point that they need up front immunity? The same absolutely not.
Remington settled with the Sandy Hook families for $73M. When Alex Jones said they were crisis actors, playing on their social capital for political and financial gain, they sued him for $1.5B.
The idea that litigious trolls will topple Meta or Alphabet is fear-mongering Marxist exploitation of the American Rule run amok. Stealth Arms doesn't have $73M to rely on settling if one of their guns gets used in a school shooting. They rely on the fact that, compared to the Bushmaster, their market penetration is relatively small and that, ideally, a judge can see that a $5M arms manufacturing company who (presumably) wasn't directly involved in the shooting couldn't have done $73M worth of damage.
Indeed. Gun manufacturers also deserve a Section 230.
The blame-shifting going on these days is absurd.
PITY about Paul. He took a BIG hit here with me (a former fan).
Robby Starbuck has been fighting Google for 3 years trying to get them to remove the false information that Gemini provided when someone searched his name. The search engine provided fake info on Starbuck accusing him of rape, shoplifting, assault, and other crimes while linking to news reports that do not exist.
Gemini told him it was trying to delete the fake info but its creators wouldn't let it. It also said it was deliberately creating fake news about Starbuck and other conservatives because of their political stance. It was apparently programmed to create those stories by someone at the company.
After numerous requests from Starbuck and his attorney to remove the fake info, and a number of promises that they would do so, he finally sued. I think Google finally took action after the lawsuit was filed.
I suppose it's harder to stand on moral ground when it is about you, specifically. Paul could have done us all a favor, but alas ...
See mad.casual's comment above, I think he summed it up best. The royal "we" (in this case, Reason and Mike Masnick) keeps suggesting Section 230 is the moral framework of the internet, when it isn't even close. It's a carve-out of a carve-out of a carve-out of... a carve-out of the 1st amendment. The 1st Amendment is the "moral master-key" in all of this, not the 12th order carve-out of that 1st amendment.
Can you imagine the abuse of process that would encourage? No one could say anything negative, partisan, or controversial at all without someone alleging it was defamatory and company policy requiring the content to come down.
Holy shit! Uh, wow you realize that, in the olden days, journalists could research and independently verify facts and print them without fear of defamation, right?
I'd suggest, as a person who, ostensibly, journals or reports things, you might want to look into this. Even if your editors and peers don't think it's an important part of your job or that it's OK as long as it's not illegal, customers everywhere don't like being lied to.
FFS, web technology even facilitates it. You could even still hedge on clickbaiting and outrage farming by setting up a hybrid front-to-back moderation system so that any/all videos that get a takedown request are automatically blocked from being shared and come down automatically within 72 hours unless actively reversed by someone within the company (a rough sketch of that flow follows this comment).
It's like you're specifically *trying* as hard as you can to lie to people.
Postscript: Wait, you mean to tell me that without backend moderation on content platforms, Google would have to go back to being *just* the world's leading internet search engine and advertising platform? And Amazon would have to go back to... oh, wait, Amazon (save an errant phone call from Joe Lieberman) doesn't in any way moderate content on their platforms a la Compuserve and is actually just a computer service provider.
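A rough sketch of the hybrid flow described above: a takedown request blocks sharing immediately, a 72-hour clock starts, and the video comes down automatically unless someone inside the company actively reverses the flag. All names and timings here are the commenter's hypothetical, not any platform's actual system:

    import time

    TAKEDOWN_WINDOW_SECONDS = 72 * 3600  # the commenter's 72-hour clock

    class FlaggedVideo:
        def __init__(self):
            self.shareable = True
            self.visible = True
            self.flagged_at = None
            self.reinstated = False

        def receive_takedown_request(self):
            self.shareable = False         # blocked from sharing at once
            self.flagged_at = time.time()  # start the clock

        def reviewer_reinstates(self):
            self.reinstated = True         # actively reversed within the company
            self.shareable = True

        def sweep(self, now):
            # Run periodically: hide flagged videos whose clock ran out.
            if (self.flagged_at is not None and not self.reinstated
                    and now - self.flagged_at >= TAKEDOWN_WINDOW_SECONDS):
                self.visible = False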
Won't somebody think of the poor social media companies?!?
I don’t think she, or Reason writ large, do realize that.
They also seem to have trouble with “good faith” and 1st Amendment.
The problem with Section 230 has never been Section 230 itself.
It's that subsection (c) got wildly misinterpreted by the courts.
"(c) Protection for "Good Samaritan" Blocking And Screening Of Offensive Material.—
(1) Treatment of Publisher or Speaker.—No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
(2) Civil Liability.—No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in [subparagraph (A)]."
The courts interpreted "or otherwise objectionable", not to mean things of the same nature as the rest of that list, but instead as just anything the platform objects to for any reason whatsoever, rendering the rest of the list totally redundant.
Prior to Section 230, precedent was that, so long as a platform acted as a passive conduit, content providers were responsible for their own speech, but that as soon as the platform started picking and choosing what to carry, IT became responsible for those choices.
Section 230 aimed to modify that precedent by providing a LIMITED safe harbor for moderation that removed the sort of content that there was a consensus was objectionable. Any moderation beyond that was expected to happen through third party filters selected by the users, which the platform was supposed to provide access to.
Instead, what happened is that the courts handed the platforms immunity for content together with full editorial control, and then the platforms typically went out of their way to DEFEAT third party filters. (Because they enabled people to filter out content the platform was pushing.)
Really, the only change needed here is to clarify that "or otherwise objectionable" is subject to the legal doctrine of "ejusdem generis" and can only cover the same sort of thing as the rest of the list. And then enforce the requirement that platforms enable, rather than block, use of third party filters. (A toy contrast of the two readings follows below.)
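A toy contrast of the two readings discussed above, with the enumerated categories taken from (c)(2)(A); the function names are invented for illustration:

    ENUMERATED = {"obscene", "lewd", "lascivious", "filthy",
                  "excessively violent", "harassing"}

    def immune_under_courts_reading(removal_reason):
        # Broad reading: any reason the platform finds objectionable
        # qualifies, rendering the enumerated list redundant.
        return True

    def immune_under_ejusdem_generis(removal_reason, same_kind_as_list):
        # Narrow reading: "otherwise objectionable" reaches only material
        # of the same kind as the enumerated categories.
        return removal_reason in ENUMERATED or same_kind_as_list

    # Removing a post purely over its political viewpoint:
    print(immune_under_courts_reading("political viewpoint"))        # True
    print(immune_under_ejusdem_generis("political viewpoint",
                                       same_kind_as_list=False))     # False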