Rand Paul Turns Against Section 230, Citing YouTube Video Accusing Him of Taking Money From Maduro
After Google refused to take down a video of him, the Kentucky senator suggested upending the legal framework undergirding the internet for three decades.
Sen. Rand Paul (R–Ky.) has long been one of the few refreshing voices out of Washington, D.C., when it comes to free speech, including free speech on social media and elsewhere in the digital realm. He was one of just two senators to vote against FOSTA, the law that started the trend of trying to carve out Section 230 exceptions for every bad thing.
As readers of this newsletter know, Section 230 has been fundamental to the development and flourishing of free speech online.
Now, Paul has changed his mind about it. "I will pursue legislation toward" ending Section 230's protections for tech companies, the Kentucky Republican wrote in the New York Post this week.
You are reading Sex & Tech, from Elizabeth Nolan Brown.
A Section 230 Refresher
For those who need a refresher (if not, skip to the next section): Section 230 of the Communications Act protects tech companies and their users from frivolous lawsuits and spurious charges. It says: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." If someone else is speaking (or posting), they—not you or Instagram or Reddit or YouTube or any other entity—are legally liable for that speech.
Politicians, state attorneys general, and people looking to make money off tech companies that they blame for their troubles hate Section 230. It stops that last group—including all sorts of ambulance-chasing lawyers—from getting big payouts from tech companies over speech for which those companies merely served as an unwitting conduit. It stops attorneys general from making good on big, splashy lawsuits framed around fighting the latest moral panic. And it prevents politicians from gaining more control over what we all can say online.
If a politician doesn't like something that someone has posted about them on the internet, doesn't like their Google search results, or resents the fact that people can speak freely—and sometimes falsely—about political issues, it would be a lot easier to censor whatever it is that's irking them in a world without Section 230. They could simply go to a tech platform hosting that speech and threaten a lawsuit if it was not removed.
Tech platforms might very well win many such lawsuits on First Amendment grounds, if they had the resources to fight them and chose that route. But in many cases it would be a lot easier for them to simply give in and do politicians' bidding rather than fight a protracted lawsuit. Section 230 gives them the legal footing to resist and ensures that any suits that do go forward are likely to be resolved quickly, in their favor.
But here's the key: Section 230 does not stop authorities from punishing companies for violations of federal law, and it does not stop anyone from going after the speakers of any illegal content. If someone posts a true threat on Facebook, they can still be hauled in for questioning about it. If someone uses Google ads to commit fraud, they're not magically exempted from punishment for that fraud. And if someone posts a defamatory rant about you on X, you can still sue them for that rant.
Enter Rand Paul
Paul is understandably upset about a video about him that has been posted to YouTube.
The video "is a calculated lie, falsely accusing me of taking money from Venezuela's Nicolás Maduro," wrote Paul in the Post. "It is, of course, a ludicrous accusation, but paid trolls are daily spreading this lie across the internet. This untruth is essentially an accusation of treason, which then leads the internet mob to call for my death."
In short, it is "a provably false defamatory video," according to Paul.
Defamation is a crime. And Paul is not without options for addressing it.
For one, he can use his own speech—as he is doing—to counter the false information. Paul has his own channels of communication, huge audiences on social media, and relatively easy access to mainstream media outlets, like the Post. He has plenty of avenues for correcting the record here.
He could also threaten to sue the creator(s) of the video. Sometimes, the threat of legal action is enough to get results—and in fact, that's what happened here.
"The individual who posted the video finally took down the video under threat of legal penalty," per Paul's Post op-ed.
If the mere threat hadn't worked, Paul could have actually sued the creator(s) of the video. If he successfully proved the video was defamatory, a court would order the creator to remove it.
Imagine the Alternatives
If assistance from Google—YouTube's parent company—is needed to comply with a court order of removal, the company is "prepared to comply," per its webpage on defamation and YouTube policies.
But Google does not simply take down content because someone tells them it's defamatory. Can you imagine the abuse of process that would encourage? No one could say anything negative, partisan, or controversial at all without someone alleging it was defamatory and company policy requiring the content to come down.
Nor is Google "in a position to adjudicate the truthfulness of postings." It is not a court of law, and it should not be asked to act as one. Asking it to would be unfair to content creators and distributors, and to anyone alleging defamation, for one thing. It would also be unfair to Google, or to any other tech company in the content facilitation business.
If tech companies were expected to adjudicate defamation claims themselves, they would have to employ a massive army of staffers for said purposes—which would, at the very least, be a big drain on their resources. Smaller companies, unable to afford it, would likely just take down most of what was flagged to them. Meanwhile, at bigger places, corporate culture would surely necessitate an overly cautious approach, leading to all sorts of content being taken down despite being true and/or legal.
Companies are burdened. Creators are burned (and it seems fair to assume repercussions from the platforms might extend beyond simply removing one video). Way more legal content winds up suppressed than defamatory content gets caught.
Some might concede that such a process wouldn't work for take-down requests generally but suggest that a politician's or public figure's claim of defamation should carry extra weight.
But politicians and public figures have the most to lose from content that is not defamatory but does portray them or their associates in a negative light.
I'm not suggesting Paul is lying here, but I am certain that some politicians and public figures would lie. And giving them carte blanche to demand take-downs of "defamatory" content would create a de facto censorship process.
This would especially harm people with unpopular views or those trying to expose abuses of power.
Libel Isn't An Easy Call
Some people imagine that it's easy to determine whether speech about a public figure is libelous or slanderous. It goes like this: Is it true or not? The end.
But being false doesn't automatically make a statement defamatory.
The false information must also harm the reputation of the subject. The speaker must have at least acted "negligently" in spreading it. And when it comes to information about a public official or other public figure, there must also be "actual malice" involved—that is, the speaker must have known the statement was false or acted with reckless disregard for whether it was true.
Someone who repeats false information about a public figure, sincerely believing it to be true, is not guilty of defamation.
That's one of the reasons we can't expect tech companies to adjudicate defamation allegations fairly. It's hard enough for courts to determine whether someone spread false information in good faith, thinking it was true, or spread it knowing—or not caring—that it was false. How are tech companies—with no power to compel testimony or anything like that—supposed to make that call?
There are good reasons for this actual malice requirement. Without it, it would be very hard for journalists or anyone else to report on the news, discuss political developments, and so on, because doing so inevitably means getting the story wrong some of the time. Responsible journalists and content creators remove false content when they learn the story has changed and issue corrections. This allows both free speech and the truth to flourish. Punishing people for accidentally saying or publishing anything incorrect about public figures would not.
Besides, not all statements are so easy to categorize. Many are matters of opinion more than fact. Some rely on hyperbole. Some are parody. It doesn't seem quite within a tech company's wheelhouse to parse the finer points of speech in this way. Again, an expectation that digital platforms simply remove defamatory content when asked would wind up with a lot of First Amendment–protected speech being squelched.
What Changed?
I can understand why Paul is frustrated about what happened with this video and the fact that he couldn't get it taken down quickly. He writes of it resulting in "threats on [his] life" and condemns "the arrogance of Google to continue hosting this defamatory video."
Arrogance? Paul usually seems to realize the enormous amount of discretion that goes into determining whether speech is legal or not, the difficulties that this presents for tech companies, and the reasons it's better to err on the side of allowing more speech, not less. His past statements suggest he understands the purpose of Section 230 and the reasons it is important.
"I always believed this protection is necessary for the functioning of the internet," he wrote in the Post. "The courts have largely ruled that Section 230 shields social-media companies from being sued for content created by third parties. If someone calls you a creep on the internet, you can call them a creep right back, but you can't sue the social-media site for hosting that insult."
This suggests Paul knew how this plays out for defamation claims. If someone defames you on the internet, you can push back with your own speech or you can sue the speaker, but you can't sue the social media site for hosting that speaker.
What changed? Someone posted something particularly inflammatory about Paul, something Paul calls defamation against him. Suddenly, the normal rules governing such interactions shouldn't apply?
Someone posted false information about Paul and then removed that information without Google doing anything. To me, that does not suggest we need to upend the legal framework that's been undergirding the internet for three decades. In fact, some might even say that it's evidence the current system works—or that it is, in any event, better than alternatives.
Right Diagnosis, Wrong Cure
It's easy, in discussions like these, to lose sight of the larger picture. But let's keep in mind that what Paul is suggesting—revising or removing Section 230 protections when it comes to defamation—isn't just going to affect the Rand Pauls and Googles of the world.
It also isn't likely to stop people from spreading false information about public figures.
After all, the internet is expansive and global. Even if Paul could successfully sue Google over YouTube's hosting of this video, who's to say the creator wouldn't simply post it elsewhere? Perhaps many places. Perhaps on a platform hosted overseas.
Even if the creator stopped spreading it, or Google stopped hosting it, would its removal from YouTube really stop any false information from spreading? It's already out there. People would still be talking about it on X, on Facebook, etc. Would Paul sue all of those people and those platforms, too? And would that even work? Presumably, many of the people re-sharing this information believe it is true and aren't spreading it maliciously.
So, we don't actually stop the information from spreading. Meanwhile, however, tech companies—especially smaller ones—start freaking out. They start pulling down more and more content. It becomes difficult for all but the most mainstream media outlets and voices to report on politicians or share news about public figures, even when their information is truthful. It becomes difficult for all but the biggest tech companies to risk allowing this sort of speech from independent content creators. And it becomes easier for powerful people to control and shape the public discourse.
Paul spends a lot of his Post op-ed critiquing the way Google handled content during the COVID-19 pandemic. And he's right: All too often, tech companies—including but not limited to Google—suppressed statements that countered conventional wisdom about the pandemic, even if these were merely matters of opinion or unsettled scientific claims that would later turn out to be right. They did this while simultaneously receiving pressure from the Biden administration to clamp down on COVID misinformation. And that's not the only time we've seen tech platforms act under the influence of politicians or powerful activists, or apply their rules in what seem like selective ways.
But Paul is wrong that removing Section 230 would somehow remedy this situation. Without Section 230, heavy-handed content moderation policies and political "jawboning" would get so much worse.
More Sex & Tech News
OpenAI launches age prediction for ChatGPT:
We're rolling out age prediction on ChatGPT to help determine when an account likely belongs to someone under 18, so we can apply the right experience and safeguards for teens.
Adults who are incorrectly placed in the teen experience can confirm their age in Settings > Account.…
— OpenAI (@OpenAI) January 20, 2026
FTC appeals Meta monopoly ruling: The Federal Trade Commission has decided to further drag out its doomed case against Facebook and Instagram parent company Meta. The agency announced on Tuesday that it is appealing a November 2025 ruling in Meta's favor in an antitrust case that a federal court had already dismissed once.
Brothel doctors meet historical fiction: The Double Standard Sporting House "centers on a pioneering female doctor treating women, primarily sex workers, in the maelstrom of post-Civil War New York City," per a review in The Arts Fuse. It sounds interesting, if you can forgive a sex-trafficking subplot that seems a bit like "white slavery" panic.
AI for good: Doctors are using AI to develop treatments for rare diseases like the one—DeSanto-Shinawi syndrome—that plagued baby Jorie, with some focusing on babies in intensive care and pediatric patients.
British dad "horrified" by politicians capitalizing on daughter's death to push social media regulation: "The father of a teenager who took her own life after viewing suicide and self-harm content online has said banning under-16s from social media would be wrong," reports the BBC. "Ian Russell, the father of Molly Russell, told BBC's Newscast that the government should enforce existing laws rather than 'implementing sledgehammer techniques like bans.'"
Some evidence suggests that "Gen-Z college interns and recent graduates are the first workers being affected by AI," and this "is surprising," writes Jeffrey Selingo at New York magazine. "Historically, major technological shifts favored junior employees because they tend to make less money and be more skilled and enthusiastic in embracing new tools." This time, however:
… a study from Stanford's Digital Economy Lab in August showed something quite different. Employment for Gen-Z college graduates in AI-affected jobs, such as software development and customer support, has fallen by 16 percent since late 2022. Meanwhile, more experienced workers in the same occupations aren't feeling the same impact (at least not yet), said Erik Brynjolfsson, an economist who led the study. Why the difference? Senior workers, he told me, "learn tricks of the trade that maybe never get written down," which allow them to better compete with AI than those new to a field who lack such "tacit knowledge." For instance, that practical know-how might allow senior workers to better understand when AI is hallucinating, wrong, or simply not useful.
Selingo goes on to look at how colleges and universities are adapting, presenting a not-totally-pessimistic picture.
"Are You Dead?" A popular app in China will let your friends know that you're not dead.
Willie Nelson "I woke up still not dead again today"
https://www.youtube.com/watch?v=J34esa_aJxc
phish "Waking Up Dead"
https://www.youtube.com/watch?v=03yho-gRKvE&list=RD03yho-gRKvE&start_radio=1
§230 is only "necessary" because the government makes it impossible for customers to sue social media companies for violating their own terms of service.
There's a concept called something like "fit for purpose" or "implied merchantability" or ... something. IANAL. But if a company advertises a product as doing something it doesn't do—like milk advertised as fortified with vitamins when it isn't—that's fraud.
So it is with social media companies who advertise themselves as the way to stay connected with family, friends, and customers. Their terms of service are so broad and nebulous that no one really knows what they mean, other than "we can fuck you over and you can't touch us." And they are right, because the government makes it hideously slow and expensive to sue the bastards when they close an account without warning. Businesses can't afford to spend millions of dollars and years of legal quibbling to contest the closing.
So the government graciously allows §230 to exist so the plebs think they have a way to force companies to behave. Except it doesn't work any better, and it doesn't let people force social media companies to obey their own terms of service.
Fuck §230.
the legal framework undergirding the internet for three decades.
el oh el.
Edit: If you'd have told me when I read this article that the same publication would call this law the legal framework "undergirding the internet" I'd have thought you were taking crazy pills.
> "Are You Dead?" A popular app in China will let your friends know that you're not dead.
How many accounts is the CCP operating?
Let me check with the DNC.
Friends =/= people you know.
If you need an app to tell people you know that you aren't dead...
Well of course YouTube had no problem censoring content for the Biden administration and the Covidians. As Reason explained it's all good because they're a private company and fuck you that's why. That censorship arguably led to old people being summarily executed with tubes jammed down their throats among other crimes against humanity. But that was the old YouTube. Now they are champions of free speech and are too principled to censor obviously false libelous bullshit. I don't know the libertarian answer to this conundrum. What I do know is that Reason has zero credibility on the subject.
Arrogance? Paul usually seems to realize the enormous amount of discretion that goes into determining whether speech is legal or not, the difficulties that this presents for tech companies, and the reasons it's better to err on the side of allowing more speech, not less.
"Err on the side of allowing more speech" you say?
*thinks*
*ctrl-f COVID 1/2*
If I stretch my mind back to ancient times and "the twitter files" the 'erring' was always 'less speech', government-demanded censorship-- which you tards always screeched 'SEXSHUN TOOO THIRTEE' in nearly every instance when it was pointed out.
It's funny how the 'erring' always aligns with one ideology and one ideology only. "erring" for more censorship when it's
~~citizen journalists~~ conservative youtubers, and "erring" for less censorship when it's not.

Ambulance chasers are lawyers who chase after ambulances carrying injured people in hopes of a payout for an injury that may or may not have been someone's fault. Implicitly, they're ambulance chasers rather than respectable defenders of the law because they aren't very good at determining fault.
Borrowing a turn of phrase, Section 230 defenders are good at one thing: watching the government break your legs, offer you a crutch, and say, "See? Aren't you glad they defended you from the ambulance chasers?"
>But Google does not simply take down content because someone tells them it's defamatory. Can you imagine the abuse of process that would encourage? No one could say anything negative, partisan, or controversial at all without someone alleging it was defamatory and company policy requiring the content to come down.
Except they extend extra 'courtesies' to those on the Progressive Left.
This is a main reason why I don't trust politicians. I only trust the Constitution because once the ink was dry it was set in stone and not easily "disappeared." If we could magically "disappear" all of the unconstitutional acts of Congress; and all of the unconstitutional executive orders; and all of the unconstitutional legislation from the bench; then ninety percent of the Code of Federal Regulations and Supreme Court rulings would disappear and liberty would prevail again, at least for a while. Rand was ineffective at promoting liberty before his defection and will remain ineffective after.
For those who need a refresher (if not, skip to the next section):
Ctrl+f 'cubb': 0 results
Ctrl+f 'prodi': 0 results
For those who need a refresher on what S230 is really about, read up on Cubby v. CompuServe and Stratton Oakmont v. Prodigy. Then read about how Reps. Ron Wyden and Chris Cox passed the CDA specifically to reverse both decisions. Then read how SCOTUS, in 1997, the year after the CDA passed, struck down all of the CDA except S230, in ACLU v. Reno.
Then, decide for yourself whether free speech on the internet existed or look around and see if S230 thwarted the trolls and ambulance chasers.
Section 230 is a 'loophole' that allows these companies to have their cake and eat it too, and it's not the first amendment of the internet.
The first amendment is.
Why is it different when Google lets someone say statement A on their platform that's incendiary, and then shuts down someone else saying statement B that's incendiary? Why, it's because they structured their TOS to allow subset A and disallow subset B!
Is that not editorializing? Do they not ban users or block content that they consider against their TOS? Last I checked, they certainly do. Yet they are still 'protected' by section 230 despite that viewpoint consideration.
Paul is wrong here, I think, since he got the video taken down by 'normal' means, but that doesn't change the fact that Section 230 is absurd.
"Ian Russell, the father of Molly Russell, told BBC's Newscast that the government should enforce existing laws rather than 'implementing sledgehammer techniques like bans.'"
Like this one?
Defamation is a crime. And Paul is not without options for addressing it.
ENB, did an intern steal your byline? You should know better than that. No one goes to jail/prison (yet) for saying bad things about a senator, "provably false" or not. Possibly get sued, but it's never a crime.
This is the part where you find the poster (i.e., the actual villain) to go after instead of trying to use a third-party medium as your "easy" button. Which is precisely what Section 230 is about.
Sorry. I'm going to have to go with Reason on this one. Not even Rand Paul, as much as I like him, gets the "easy" button of controlled media.