Rand Paul Turns Against Section 230, Citing YouTube Video Accusing Him of Taking Money From Maduro
After Google refused to take down a video of him, the Kentucky senator suggested upending the legal framework undergirding the internet for three decades.
Sen. Rand Paul (R–Ky.) has long been one of the few refreshing voices out of Washington, D.C., when it comes to free speech, including free speech on social media and elsewhere in the digital realm. He was one of just two senators to vote against FOSTA, the law that started the trend of trying to carve out Section 230 exceptions for every bad thing.
As readers of this newsletter know, Section 230 has been fundamental to the development and flourishing of free speech online.
Now, Paul has changed his mind about it. "I will pursue legislation toward" ending Section 230's protections for tech companies, the Kentucky Republican wrote in the New York Post this week.
You are reading Sex & Tech, from Elizabeth Nolan Brown.
A Section 230 Refresher
For those who need a refresher (if not, skip to the next section): Section 230 of the Communications Act protects tech companies and their users from frivolous lawsuits and spurious charges. It says: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." If someone else is speaking (or posting), they—not you or Instagram or Reddit or YouTube or any other entity—are legally liable for that speech.
Politicians, state attorneys general, and people looking to make money off tech companies that they blame for their troubles hate Section 230. It stops that last group—including all sorts of ambulance-chasing lawyers—from getting big payouts from tech companies over speech for which these companies merely served as an unwitting conduit. It stops attorneys general from making good on big, splashy lawsuits framed around fighting the latest moral panic. And it prevents politicians from exerting more control over what we all can say online.
If a politician doesn't like something that someone has posted about them on the internet, doesn't like their Google search results, or resents the fact that people can speak freely—and sometimes falsely—about political issues, it would be a lot easier to censor whatever it is that's irking them in a world without Section 230. They could simply go to a tech platform hosting that speech and threaten a lawsuit if it was not removed.
Tech platforms might very well win many such lawsuits on First Amendment grounds, if they had the resources to fight them and chose that route. But it would be a lot easier, in many cases, for them to simply give in and do politicians' bidding, rather than fight a protracted lawsuit. Section 230 gives them the impetus to resist and ensures that any suits that go forward will likely be over quickly, in their favor.
But here's the key: Section 230 does not stop authorities from punishing companies for violations of federal law, and it does not stop anyone from going after the speakers of any illegal content. If someone posts a true threat on Facebook, they can still be hauled in for questioning about it. If someone uses Google ads to commit fraud, they're not magically exempted from punishment for that fraud. And if someone posts a defamatory rant about you on X, you can still sue them for that rant.
Enter Rand Paul
Paul is understandably upset about a video about him that has been posted to YouTube.
The video "is a calculated lie, falsely accusing me of taking money from Venezuela's Nicolás Maduro," wrote Paul in the Post. "It is, of course, a ludicrous accusation, but paid trolls are daily spreading this lie across the internet. This untruth is essentially an accusation of treason, which then leads the internet mob to call for my death."
In short, it is "a provably false defamatory video," according to Paul.
Defamation is against the law (as a civil matter, not a crime). And Paul is not without options for addressing it.
For one, he can use his own speech—as he is doing—to counter the false information. Paul has his own channels of communication, huge audiences on social media, and relatively easy access to mainstream media outlets, like the Post. That gives him plenty of ways to correct the record here.
He could also threaten to sue the creator(s) of the video. Sometimes, the threat of legal action is enough to get results—and in fact, that's what happened here.
"The individual who posted the video finally took down the video under threat of legal penalty," per Paul's Post op-ed.
If the mere threat hadn't worked, Paul could have actually sued the creator(s) of the video. If he successfully proved the video was defamatory, a court would order the creator to remove it.
Imagine the Alternatives
If assistance from Google—YouTube's parent company—is needed to comply with a court order of removal, the company is "prepared to comply," per its webpage on defamation and YouTube policies.
But Google does not simply take down content because someone claims it's defamatory. Can you imagine the abuse of process that would encourage? No one could say anything negative, partisan, or controversial at all without someone alleging it was defamatory and company policy requiring the content to come down.
Nor is Google "in a position to adjudicate the truthfulness of postings." It is not a court of law, and it should not be asked to act as one. That would be unfair to content creators and distributors, and to anyone alleging defamation, for one thing. It would also be unfair to Google, or any other tech company in the content facilitation business.
If tech companies were expected to adjudicate defamation claims themselves, they would have to employ a massive army of staffers for said purposes—which would, at the very least, be a big drain on their resources. Smaller companies, unable to afford it, would likely just take down most of what was flagged to them. Meanwhile, at bigger places, corporate culture would surely necessitate an overly cautious approach, leading to all sorts of content being taken down despite being true and/or legal.
Companies are burdened. Creators are burned (and it seems fair to assume repercussions from the platforms might extend beyond simply removing one video). Way more legal content winds up suppressed than defamatory content gets caught.
Some might concede that such a process wouldn't work for take-down requests generally, yet insist that a politician's or other public figure's claim of defamation should carry extra weight.
But politicians and public figures have the most to lose from content that is not defamatory but does portray them or their associates in a negative light.
I'm not suggesting Paul is lying here, but I am certain that some politicians and public figures would lie. And giving them carte blanche to demand take-downs of "defamatory" content would create a de facto censorship process.
This would especially harm people with unpopular views or those trying to expose abuses of power.
Libel Isn't An Easy Call
Some people imagine that it's easy to determine whether speech about a public figure is libelous or slanderous. It goes like this: Is it true or not? The end.
But being false doesn't automatically make a statement defamatory.
The false information must also harm the reputation of the subject. The speaker must have at least acted "negligently" in spreading it. And when it comes to information about a public official or other public figure, there must also be "actual malice" involved.
Someone who repeats false information about a public figure, believing it to be true and without reckless disregard for the truth, is not liable for defamation.
That's one of the reasons we can't expect tech companies to adjudicate defamation allegations fairly. It's hard enough for courts to determine whether someone spread false information in good faith, thinking it was true, or spread it with the intent to cause damage. How are tech companies—with no power to compel testimony or anything like that—supposed to make that call?
There are good reasons for this actual malice requirement. Without it, it would be very hard for journalists or anyone else to report on the news, discuss political developments, and so on, because doing so inevitably means getting the story wrong some of the time. Responsible journalists and content creators remove false content when they learn the story has changed and issue corrections. This allows both free speech and the truth to flourish. Punishing people for accidentally saying or publishing anything incorrect about public figures would not.
Besides, not all statements are so easy to categorize. Many are matters of opinion more than fact. Some rely on hyperbole. Some are parody. Again, it doesn't seem quite within a tech company's wheelhouse to parse the finer parts of speech in this way. Again, an expectation that digital platforms simply remove defamatory content when asked would wind up with a lot of First Amendment–protected speech being squelched.
What Changed?
I can understand why Paul is frustrated about what happened with this video and the fact that he couldn't get it taken down quickly. He writes of it resulting in "threats on [his] life" and condemns "the arrogance of Google to continue hosting this defamatory video."
Arrogance? Paul usually seems to realize the enormous amount of discretion that goes into determining whether speech is legal or not, the difficulties that this presents for tech companies, and the reasons it's better to err on the side of allowing more speech, not less. His past statements suggest he understands the purpose of Section 230 and the reasons it is important.
"I always believed this protection is necessary for the functioning of the internet," he wrote in the Post. "The courts have largely ruled that Section 230 shields social-media companies from being sued for content created by third parties. If someone calls you a creep on the internet, you can call them a creep right back, but you can't sue the social-media site for hosting that insult."
This suggests Paul understood how this plays out for defamation claims. If someone defames you on the internet, you can push back on the claim or you can sue the speaker, but you can't sue the social media site for hosting that speaker.
What changed? Someone posted something particularly inflammatory about Paul, something Paul calls defamation against him. Suddenly, the normal rules governing such interactions shouldn't apply?
Someone posted false information about Paul and then removed that information without Google doing anything. To me, that does not suggest we need to upend the legal framework that's been undergirding the internet for three decades. In fact, some might even say that it's evidence the current system works—or that it is, in any event, better than alternatives.
Right Diagnosis, Wrong Cure
It's easy, in discussions like these, to lose sight of the larger picture. But let's keep in mind that what Paul is suggesting—revising or removing Section 230 protections when it comes to defamation—isn't just going to affect the Rand Pauls and Googles of the world.
It also isn't likely to stop people from spreading false information about public figures.
After all, the internet is expansive and global. Even if Paul could successfully sue Google over YouTube's hosting of this video, who's to say the creator wouldn't simply post it elsewhere? Perhaps many places. Perhaps on a platform hosted overseas.
Even if the creator stopped spreading it, or Google stopped hosting it, would its removal from YouTube really stop any false information from spreading? It's already out there. People would still be talking about it on X, on Facebook, etc. Would Paul sue all of those people and those platforms, too? And would that even work? Presumably, many of the people re-sharing this information believe it is true and aren't spreading it maliciously.
So, we don't actually stop the information from spreading. Meanwhile, however, tech companies—especially smaller ones—start freaking out. They start pulling down more and more content. It becomes difficult for all but the most mainstream media outlets and voices to report on politicians or share news about public figures, even when their information is truthful. It becomes difficult for all but the biggest tech companies to risk allowing this sort of speech from independent content creators. And it becomes easier for powerful people to control and shape the public discourse.
Paul spends a lot of his Post op-ed critiquing the way Google handled content during the COVID-19 pandemic. And he's right: All too often, tech companies—including but not limited to Google—suppressed statements that countered conventional wisdom about the pandemic, even if these were merely matters of opinion or unsettled scientific claims that would later turn out to be right. They did this while simultaneously receiving pressure from the Biden administration to clamp down on COVID misinformation. And that's not the only time we've seen tech platforms act under the influence of politicians or powerful activists, or apply their rules in what seem like selective ways.
But Paul is wrong that removing Section 230 would somehow remedy this situation. Without Section 230, heavy-handed content moderation policies and political "jawboning" would get so much worse.
More Sex & Tech News
OpenAI launches age prediction for ChatGPT:
We're rolling out age prediction on ChatGPT to help determine when an account likely belongs to someone under 18, so we can apply the right experience and safeguards for teens.
Adults who are incorrectly placed in the teen experience can confirm their age in Settings > Account.…
— OpenAI (@OpenAI) January 20, 2026
FTC appeals Meta monopoly ruling: The Federal Trade Commission has decided to further drag out its doomed case against Facebook and Instagram parent company Meta. The agency announced on Tuesday that it is appealing a November 2025 ruling in Meta's favor in an antitrust case that a federal court had already dismissed once.
Brothel doctors meet historical fiction: The Double Standard Sporting House "centers on a pioneering female doctor treating women, primarily sex workers, in the maelstrom of post-Civil War New York City," per a review in The Arts Fuse. It sounds interesting, if you can forgive a sex-trafficking subplot that seems a bit like "white slavery" panic.
AI for good: Doctors are using AI to develop treatments for rare diseases like the one—DeSanto-Shinawi syndrome—that plagued baby Jorie, with some focusing on babies in intensive care and pediatric patients.
British dad "horrified" by politicians capitalizing on daughter's death to push social media regulation: "The father of a teenager who took her own life after viewing suicide and self-harm content online has said banning under-16s from social media would be wrong," reports the BBC. "Ian Russell, the father of Molly Russell, told BBC's Newscast that the government should enforce existing laws rather than 'implementing sledgehammer techniques like bans.'"
Some evidence suggests that "Gen-Z college interns and recent graduates are the first workers being affected by AI," and this "is surprising," writes Jeffrey Selingo at New York magazine. "Historically, major technological shifts favored junior employees because they tend to make less money and be more skilled and enthusiastic in embracing new tools." This time, however:
… a study from Stanford's Digital Economy Lab in August showed something quite different. Employment for Gen-Z college graduates in AI-affected jobs, such as software development and customer support, has fallen by 16 percent since late 2022. Meanwhile, more experienced workers in the same occupations aren't feeling the same impact (at least not yet), said Erik Brynjolfsson, an economist who led the study. Why the difference? Senior workers, he told me, "learn tricks of the trade that maybe never get written down," which allow them to better compete with AI than those new to a field who lack such "tacit knowledge." For instance, that practical know-how might allow senior workers to better understand when AI is hallucinating, wrong, or simply not useful.
Selingo goes on to look at how colleges and universities are adapting, presenting a not-totally-pessimistic picture.
"Are You Dead?" A popular app in China will let your friends know that you're not dead.