
Section 230 and the Curse of Politics

Gonzalez v. Google presents the Supreme Court’s first opportunity to weigh in on Section 230.


There's an old saying that goes, "How do you know when a politician is lying? His lips are moving." These days, one can ask, "How do you know when Section 230 is being misunderstood?" and answer, "A politician is talking about it."

Adopted in 1996, Section 230 was proposed as a way to counter efforts to censor internet speech. Its authors, then–Reps. Chris Cox and Ron Wyden, did this by walking a delicate line. Their legislative language promoted the development of parental controls and filtering as an alternative to government censorship, and encouraged online platforms to allow free communication by immunizing them from liability for hosting speech by third parties. Crucially, Section 230 also ensured online platforms' ability to regulate posts that violate their terms of service.

Later this month, the Supreme Court will consider how to interpret Section 230. Gonzalez v. Google involves family members of victims who died in the 2015 Islamic State terrorist attacks in Paris. They claim that YouTube (owned by Google) "aided and abetted" the crimes by allowing the Islamic State to use the video platform to recruit members and to communicate its messages. This contributed to terrorist acts, according to the complaint, because YouTube automatically recommends content to users based on their viewing habits.

Gonzalez v. Google has attracted widespread attention because it presents the Supreme Court's first opportunity to weigh in on the statute—and because Section 230 has been at the center of a larger political debate regarding internet regulation for years.

Section 230's Bipartisan Antagonism

Strong opinions about Section 230 are commonplace on both sides of the aisle. Just before losing the 2020 election, then–President Donald Trump put it bluntly on Twitter: "REPEAL SECTION 230!!!" He also issued an executive order that led to a Federal Communications Commission proceeding to "reinterpret" Section 230, and at the end of 2020, he vetoed the National Defense Authorization Act in part because it did not include repeal of the provision.

Such antipathy does little to distinguish Trump from President Joe Biden, who told the New York Times editorial board before the 2020 election that "Section 230 should be revoked immediately." Not much has changed since he's taken office. Biden used a White House "listening session" last fall to make a similar point, and in January, he published an op-ed piece in the Wall Street Journal in which he insisted, among other things, that "we must fundamentally reform Section 230."

But while progressives and conservatives are united in their antipathy for Section 230, they attack the law for different reasons—all of which are misguided. As a report in Bloomberg put it: "Democrats say too much hate, election meddling, and misinformation get through, while Republicans claim their ideas and candidates are censored." In other words, liberals generally attack the part of Section 230 that protects online companies from liability for the third-party content they host, while conservatives want to weaken the provision that ensures online platforms' ability to enforce their own terms of service.

What they have in common is that both sides want to increase the government's ability to control perhaps the most influential communications medium that has ever existed—a rare instance of bipartisan agreement. Progressives advocate modifying or repealing Section 230 to incentivize—that is, coerce—privately owned platforms into restricting content progressives believe is wrong or harmful. Conservatives, on the other hand, advocate modifying or repealing Section 230 to make the companies more vulnerable to claims the content that conservatives like is being "unfairly" moderated.

The monster under the bed, of course, is "Big Tech"—another convenient political label—and the framing of the issue fuels the various narratives for why Section 230 reform is purportedly needed.

One claim is that Section 230 is an antiquated law, adopted in the mid-1990s when the internet was just emerging, and that Congress must update it to keep up with technology and its then-unimagined uses. Another is that Section 230 is a perk Congress adopted to nurture emerging internet businesses that have become behemoths and no longer need such support. A more cynical version is that Section 230 is just another chit in the great Washington game of carrot-and-stick that lawmakers can manipulate to condition behavior, justified as compelling tech companies to "earn" their legal protections. The political reasoning is crude, but usually effective: If you know we can inflict pain, you will do what we want.

Few could have imagined in 1996 what the internet would become over the course of a generation. At that time, less than 15 percent of the U.S. population had even used the internet. Search engines were just becoming a thing. The term "social media" was still years away from common parlance; Facebook would not emerge until eight years later. Even the iPhone was more than a decade away from launching, and almost all the platforms that now keep people's noses glued to their screens were on the far side of the horizon.

We Need Section 230 Now More Than Ever

Rather than rendering Section 230 "antiquated," this dramatic evolution underscores the need for the immunities the law provides.

Even at the internet's nascent stage of development in 1997, the first federal appellate court to consider the scope of Section 230 immunity explained in Zeran v. America Online, Inc. why it provides indispensable protection for online freedom of expression. The U.S. Court of Appeals for the 4th Circuit observed that service providers' inability to screen each of the millions of postings they may host requires that they make "an on-the-spot editorial decision whether to risk liability by allowing [their] continued publication" or else yield to the "natural incentive simply to remove messages upon notification, whether the contents were [unlawful] or not."

Simple math dictates the outcome: If there is the slightest chance you might shoulder legal accountability for what you let people post on your platform, you are not going to risk it.

Time and technology have not altered this essential calculus—except to make it more compelling. Compared to the millions of postings envisioned by the court that first interpreted Section 230, online platforms must now assess their potential liability risks from untold billions. To take just one example, users upload more than 500 hours of third-party content to YouTube per minute. That works out to 30,000 hours of new content per hour, and 720,000 hours per day.

Sure, these giant platforms use sophisticated algorithms to help screen what gets posted, but that fact does not affect the underlying rationale of Section 230. The larger the platform, the greater the risk of liability—and the greater the need for protection.

Politicians can't abide anything they see as outside their ability to control. The internet caught Congress unaware, and it has been trying to play catch-up ever since. The government's default position for exerting authority over any new medium is to find a way to censor it. Congress first adopted a measure to prohibit "indecent" communications online (oddly, as part of the same law that included Section 230), but the Supreme Court declared that provision unconstitutional in 1997. Congress dusted itself off and tried again the following year with the Child Online Protection Act, but it, too, was invalidated as a violation of the First Amendment in 2008.

Section 230 was the exception to the legislative branch's reflexive response to any new communications medium, and it was based on an explicit policy of promoting freedom of expression by preserving what the law describes as "the vibrant and competitive free market that presently exists for the internet and other interactive computer services, unfettered by Federal or State regulation." It says something that the most successful federal policy for the internet to date has been the decision not to regulate it.

Given this background, it should send up more than a few red flags when you consider how many of the current proposals to regulate social media and to "reform" Section 230 are billed as measures to protect free speech on the internet.

The Future of Section 230

It's not that the internet doesn't have problems, or that some of the large tech companies haven't bungled their attempts to manage the flow of online traffic on their platforms. There is genuine reason for concern when platforms make moderation decisions about what speech is allowed on those fora.

Those decisions can be maddeningly opaque and arbitrary—and if you aren't Donald Trump, you probably don't have the option of galumphing off to start your own social media platform. But faced with the reality that someone must make those decisions, the question is how to do that in a system dedicated to preserving freedom of expression.

The free speech problem is not that the myriad platforms have different ways of explaining and enforcing their house rules. It is that governments at various levels are looking for ways to horn in on the business.

Last fall, Twitter head Elon Musk began releasing, via a network of journalists, what became known as the Twitter Files, detailing efforts by various federal authorities to nudge or pressure takedown decisions or speaker bans on such topics as the January 6 insurrection, Hunter Biden's laptop, COVID policy, and a range of other subjects. While it is fair to criticize Musk for the way he selectively made this information available to journalists sympathetic to his position, the problem is a serious one. If unexplained moderation decisions by private businesses are cause for concern, you should really begin to worry when the man behind the curtain is with the government.

The dozens of bills introduced to modify or repeal the law generally seek ways to make overt what has up to now been covert: handing control over the various rules for what gets posted online to the government. In some cases, legislators introduce bills mainly as a threat to large tech companies for not playing ball, just to show them who's boss. Either way, the goal is to assert governmental authority over the most powerful communications medium in history, either formally or informally.

Given partisan gridlock, the chance of enacting legislation is probably remote, which means the most likely prospect for immediate change in the scope of Section 230 immunity lies in the Supreme Court. When the Justices consider Gonzalez v. Google later this month, will they view the automatic recommendations that algorithms make as an extension of editorial choices for how information is presented and thereby protected under Section 230? Or will they view such recommendations as falling outside the law's immunity shield? If the Court decides Section 230 immunity should be narrowed, it will upend settled expectations formed by hundreds of lower court decisions and transform the way online platforms operate in making any recommendations.

However the Court construes Section 230 in Gonzalez v. Google, an even bigger challenge to online free speech will likely reach it next term in cases asking whether the First Amendment will allow Florida and Texas to regulate political speech on the internet.

The stakes could not be higher. These cases will test the limits of what the Supreme Court meant in Packingham v. North Carolina back in 2017, when it warned that courts must exercise "extreme caution" before ratifying attempts to regulate online speech. They also will test the underlying assumptions that motivated the adoption of Section 230 in the first place: that the internet flourished because it was unfettered by federal or state regulation.

The alternative will be to leave the future of freedom of speech in the hands of politicians. I shudder at the thought.

The author of this piece submitted an amicus brief supporting Google in Gonzalez v. Google on behalf of the Chamber of Progress, as did Reason Foundation, the nonprofit that publishes Reason.