Reforming section 230 of the Communications Decency Act

Why are we giving the world's biggest tech companies litigation subsidies they don't need?


Section 230 of the Communications Decency Act seems to inspire bipartisan antipathy.

Joe Biden said it "should be revoked, immediately," and President Trump characteristically topped that by tweeting "REVOKE 230!"

They're both right, at least directionally. Section 230, which dates to 1996, shields platforms from civil liability stemming from third-party content on their sites. It has been central to the success of crowdsourced platforms like YouTube, Twitter and Facebook, protecting them from potentially staggering liability for the online misbehavior of their users. But by exempting the platforms from the usual rules of liability, Section 230 is also a kind of subsidy, and one that protects some of the biggest companies in the world from expensive litigation.

Such a subsidy made more sense in 1996, when there were only 36 million internet users in the world. Now that 4.6 billion people regularly go online, it's fair to ask why the U.S. should give internet platforms a sweeping exemption from the laws that govern everyone else. In recent years, more and more politicians on either side of the aisle have been pointedly asking this question, though perhaps for different reasons: Some Democrats still blame social media for making Trump's election possible, while Republicans fault the industry for how publicly it has regretted its role in the 2016 campaign.

But revoking Section 230 is not a good idea. Mad as people may be at the platforms, those companies will need at least some liability protection if they're going to keep giving us the crowdsourced content we all consume today. Platforms particularly need protection from defamation liability, which falls on both the author and the publisher. No social media company can police its content for libelous posts. The platforms simply are not equipped to evaluate which statements are true and which are false and defamatory.

That doesn't end the debate, though. The industry may need an exemption from defamation liability, but why should it be immune if it ignores user misconduct that is entirely predictable and largely preventable? That question led Congress, on an overwhelmingly bipartisan basis, to amend Section 230 by  adopting FOSTA, the Allow States and Victims to Fight Online Sex Trafficking Act of 2017. FOSTA withdrew immunity from online platforms that knowingly let their users facilitate sex trafficking. And it turned out that most platforms didn't need protection for facilitating sex trafficking. A few online sites that depended on prostitution ad revenue went out of business, but Big Social Media continued to thrive.

So the next questions for Section 230 skeptics on both sides of the aisle should be, "Where else has Congress given social media an immunity it didn't need? And how can policymakers chip away at this overgenerous subsidy without putting at risk the survival of social media?"

The most thoughtful answers I've seen come from a Justice Department report released this month. Without grandstanding, it offers several proposals that ought to have bipartisan appeal.

The report begins by acknowledging that social media companies still need protection from liability for things they can't be expected to police, like defamation. But the report sees a vast difference between being unable to stop criminal behavior and actively promoting it, as the sex trafficking sites did before FOSTA. It draws a simple lesson in FOSTA's success—the online platforms worth propping up don't need immunity for facilitating or soliciting illegal conduct.

The Justice Department also suggests that platforms should be required to go further in regulating user conduct: they should face liability if they fail to take reasonable steps to prevent the distribution of child sex abuse materials and terrorist and cyberstalking content. The only thing surprising about this proposal is that Section 230 doesn't already demand it; the law confers its immunity without asking the platforms to do anything at all in return. It's past time to spell out exactly what is expected of platforms in exchange for the subsidy they receive. Reasonable efforts to stop things like child sex abuse are certainly not too much to ask.

Among its other suggestions for trimming Section 230 immunity, the report rejects the extreme applications of the law that have gained currency since 1996. Most notably, online platforms have argued that Section 230 creates an immunity from antitrust claims. This is outrageous. If today's monolithic platforms use their control of the national discourse to suppress criticism of their power or praise for their competitors, they don't deserve immunity; they deserve an injunction and treble damages. Similarly, there's no justification for extending platforms' defamation immunity to the point where they can ignore court libel rulings without consequences (as, for example, Yelp has done).

So far, so bipartisan. If a reformed Section 230 forces social media to be more cautious about facilitating criminal conduct online, neither party will weep. There's less unanimity about reforming a second immunity granted by Section 230. (Yes, there are two!) The second immunity protects the platforms not when they allow speech but when they suppress it. Conservatives think (rightly, in my view) that Silicon Valley tilts against them in these decisions, whether the result is a takedown or a warning label or a "shadow ban."

The second immunity protects online platforms from liability when they take down content that is sexual, violent, harassing or "otherwise objectionable," as long as they act in "good faith." In an age when everyone objects to everything, this language invites weaponization. It is only prudent for Congress to narrow the definition of "otherwise objectionable" speech so that the provision gives special protection to the platforms mainly when they're taking down speech that violates the law.

Republicans who think they've been victimized by social media censorship will of course find something to support here, but so too can anyone else uncomfortable with letting a handful of Silicon Valley monoliths decide what can and can't be said online. For starters, unlike the first immunity, which protects against the clear risk of ruinous defamation liability, it's not even fully clear what lurking liability the second immunity is needed to head off. Successful lawsuits for refusing to publish someone else's work are not exactly thick on the ground.

The Justice Department's other recommendation here is to attach some more tangible standards to the statute's requirement that content be policed in "good faith." To meet this requirement, the department urges, content moderation policies should be stated "plainly and with particularity," and platforms should give timely notice of takedown decisions that explains "with particularity the factual basis for the restriction." This will certainly be popular on the right, which thinks it's unfairly targeted for suppression. But plenty of speakers on the left feel the same way. The only obvious cure for the widespread mistrust of platforms is for them to embrace greater transparency and candor. They have resisted, sometimes with good reason, sometimes without; but "trust us" is no longer a persuasive argument. This proposal would encourage them to move away from their largely opaque content moderation practices.

In short, and surely surprising to some, this Justice Department has made a real contribution toward bipartisan reform of Section 230. The temptation among Democrats will be to score partisan points by dismissing the report.

That would be a mistake.

Because, having embraced a candidate tied to the unrealistic position that "section 230 should be revoked, immediately," Democrats are going to need more workable solutions that keep the essential core of Section 230 while cutting back the platforms' now-unjustifiable government subsidy.

And, when they go looking for those ideas, they're going to find that a big chunk of them are already in this report.



  1. Remember Fax Machines? They could be used for all the nefarious purposes that the internet can be — and yet the TELCOs were considered common carriers and couldn’t arbitrarily shut off phone lines.

    I like the common carrier model.

    1. Bad analogy. There were no discussion forums using fax machines.

  2. It’s been noted that there can be difficulties in trying to draw the line of avoiding “political bias” while still allowing moderation for matters that most folks on the right *and* left alike would want. (This is notably different from the first amendment’s lines, drawn by SCOTUS, for state actors.)

    Krystal Ball on Hill TV recently discussed an interesting approach: she was basically saying that if viewpoint has support from a substantial portion of society, then it should be engaged – rather than shunned. She was talking about general social discourse, but it got me thinking, could something like that be codifiable for internet or employment law? I don’t know what the limiting language there could be…perhaps some kind of reference to positions of “broad consensus” or “lacking substantial political dispute”?

    1. Tyranny of the majority. Fuck off, slaver.

    2. Yes, Dr. Ed, something akin to the common carrier model makes a lot of sense, because these dominating cyber companies are the new version of video-supporting phone lines or cable channels on which subscribers determine or add to content.  But the internet giants had been defined as common carriers from mid June 2016 until the end of 2017, after which I’ve lost track!

      Few discussions touch the real politics, economics, and agenda behind the scenes, but Baker’s post is a good thought-piece on how to drain the bathwater without throwing out the baby.   I wonder, as things now stand, were Section 230 to be revoked instead of revised along lines similar to those discussed above, and the wall of big rich tech communication platforms loses all liability immunity protections, wouldn’t this gift it with the best excuse to censor all content it Progressively deems unpopular and offensive–  for the greater good of all, of course, and couched as simple corporate self-protection?

      1. Comment meant for Dr. Ed. above. Also, June 2016 is supposed to be June 2015.

  3. I think in general that social media companies should be given a choice between acting as utilities with limited liability for content but with strong utility-style regulation including nondiscrimination policies, and acting as publishers with full editorial control but also full liability.

    1. I don’t. If you think that this comment is a tort, then sue me, not Reason.

      1. That’s the argument for public utility. If a town puts up a billboard and you pin up something libelous, you get sued.

  4. Everything I’ve heard says that sites like BackPage were prosecuted *without* FOSTA. It was unnecessary. I see no reason to further roll back 230.

    You think this law is a subsidy? Do you consider the First Amendment to be a “subsidy” of your own expressive activity?

    1. Not only were they prosecuted for allegedly encouraging sex trafficking without FOSTA, the prosecution was / is a sham. The truth is that BackPage was actively cooperating with the FBI to identify and stop trafficking for years before the government turned on them.

      There are at least a half dozen girls who were rescued from genuine involuntary trafficking specifically because BackPage brought their trafficker’s ads to the attention of the FBI.

      1. Do you have a source for that second paragraph? I’ve heard the argument that Backpage was useful because they helped concentrate things in one place, and they would cooperate with investigative demands, but I had not heard that they were proactively bringing cases to the attention of the authorities on their own.

  5. I completely disagree.

    1. The definition of “terrorism” is always changing and subject to political whims. If Trump had his way he would have Barr go after all protesters. I don’t want to open that door. As for CP and other stuff, the posters are still liable for that, so the government is not stopped from going after them.

    2. As for anti-trust, also no. The internet is so big and, so far, free that no one company can dominate the conversation. More government regulation could hurt that.

    3. Online platforms should have a right to establish their own standards about what they will not host. I don’t want the government mandating that sites can not remove content that violates their standards. There are plenty of other sites to post on if someone can not post on a particular site.

    4. Forcing sites to police their content would increase their costs and make it more difficult for new sites to rise up. That would be bad for all. If conservatives or liberals don’t like how Facebook or Twitter is treating them, then they can create their own sites, and they do, and let the free market decide how popular those sites are.

    5. FOSTA was not all good. Prostitution will never be eliminated and it just deprived them of a safer place to meet clients. It also drove the victims of trafficking ever farther out of reach of those who could help them. I think the value was not worth the cost.

    Section 230 is working fine and has led to the free and open internet that we have now. I like it that way and I am ok with it not being perfect.

    1. FOSTA also assumed everybody could agree on what was legal and what was not.

  6. “It is only prudent for Congress to narrow the definition of ‘otherwise objectionable’ speech so that the provision gives special protection to the platforms mainly when they’re taking down speech that violates the law.”

    No. Not at all. Swearing is legal. Nude pics are legal. Racist statements are legal. A suggestion that a politician deserves to be fed into a woodchipper is legal. That doesn’t mean that sites should be required to put up with such things unless they want to lose immunity by taking them down.

    The *entire point* of 230 was to allow sites to moderate legal but objectionable content without fearing liability. The law where it originated was the Communications Decency Act.

  7. > one that protects some of the biggest companies in the world from expensive litigation.

    And every single website that lets users post comments. Section 230 was the only good part of the CDA.

    Stop trying to control how people talk, statist.

  8. This all assumes there are solid hard bright lines defining defamation / libel / yuck. There aren’t. BackPage et al. may have had some direct solicitations; I don’t know, having never checked. But I was aware of the Berkeley Barb back in the day, and I bet almost all “offensive” BackPage ads were worded too cleverly to be direct solicitations. You’ve added personal interpretation to the mix, and now it’s all down to what pisses off some DA anywhere in the country; one DA who had an eye on that good looking assistant DA who turned out to be trans.

    Free speech is free speech. Every attempt to whittle it away turns it into permissive speech.

  9. The major reform needed to Section 230 is to modify it so that “good faith” requires the platform owner to honor any representations it makes about content neutrality or about its rules, whether in contract language or advertising. In particular, the practice of “shadowbanning” (making it look to a user as if his post was made successfully, but then hiding that post from others) needs to be banned as a form of fraud, period.

    1. Fraud is already actionable. Your objection is not that fraud be banned, since shadow banning is not fraud. If platforms misrepresent their standards, policies, etc. they are already liable. Why more government?

    2. Agreed. Shadowbanning is fraud.

  10. “Such a subsidy made more sense in 1996, when there were only 36 million internet users in the world. Now that 4.6 billion people regularly go online…”

    There’s no reasoning here, so I’m interested in what holds this argument up. As access increases, doesn’t the need for immunity increase? 230 protects selective editors. As the volume of content submitters increases, the difficulty of selective editing increases too.

  11. “ If today’s monolithic platforms…”

    Was this intentional?

  12. “ It is only prudent for Congress to narrow the definition of “otherwise objectionable” speech so that the provision gives special protection to the platforms mainly when they’re taking down speech that violates the law.”

    The editors of this website take down speech that does not violate the law. That’s their right. Partisan websites that choose what voices can be heard should be allowed to do that. Permitting the government (or private parties) to exploit those decisions will not end well for anybody.

    Are the spammers on this site doing something illegal? If not, should Volokh Conspiracy be liable for any speech in the comments on a showing that it policed spam?

  13. “ To meet this requirement, the department urges, content moderation policies should be stated “plainly and with particularity” and takedown decisions should be notified in a timely way and explain “with particularity the factual basis for the restriction.””

    Why should content moderation standards be a matter of DOJ concern, rather than a purely private matter between content hosts and their users? Is this a content users bill of rights? What’s the driving need for government involvement?

  14. Section 230 does not “subsidize” anything. All it does is lay out and protect absolutely a fundamental principle of innocence on the internet – which, if I may add, is exactly what all good laws should strive to do. The fundamental principle that it protects in this way is that, when it comes to online content, if you didn’t post it, YOU DIDN’T DO IT. This principle holds true even for platform owners and hosts, regardless of how unpopular the platform may be at any given moment, and Section 230 protects all of them without regard to current popularity – at least it did, until FOSTA passed.

    On what planet could this possibly be described as a “subsidy?” What exactly is being “subsidized” by this law, the right of the unpopular to enjoy their innocence?

    As someone else asked above, would you consider the First Amendment to be a “subsidy?” How about the Fourth Amendment?

    The only way anyone can seriously see Section 230 as nothing more than a “subsidy” is if one equates protecting the innocent with subsidizing guilt. Need I point out how many large steps this point of view takes toward presuming guilt itself?

  15. We wouldn’t be in this situation if the companies weren’t censoring already as prophylactic against changes to 230.

    We are seeing Democrats threaten it if they don’t play ball with censoring harassment. “Oh,” and they are quick to point out, “our opponents’ speech is harassment, so you’d better silence them.”

    The Republicans play the same game but for the opposite reason: threaten 230 because the Democrats are effectively shutting them down.

    So what are these companies to do? One goes full censorship, the other says it won’t censor politician speech no matter how outrageous, relying on the idea you should indeed see everything your politicians blabber.

    The only certainty is government is coercing censorship in a backhanded way by threatening 230.

    Here’s the correct solution: jail the politicians for using laws to coerce censorship. No 230 changes. No breakup threats.

    This should be done, but won’t.

  16. For god’s sake, we all know Trump’s views on lawsuits against people who say things he doesn’t like. But did nobody watch the Democratic debates, where they fell all over each other to threaten these companies for not censoring “harassment”? The pièce de résistance was Harris directly threatening transparently unconstitutional laws to hurt them if they don’t. By the way, she’s on the short list for Biden’s VP, who might very well win, and might very well not finish his term, leaving a censor as president.

    There are no “well meaning” changes to 230 in this context.

    Step away from the Constitution. You people are ill-serving freedom.

  17. I’m not too disturbed by proposals to cut back on Section 230; it was a benefit conferred by government and lobbyists that helped a fledgling industry bloom. Now it’s bloomed into a giant corpse flower that never stops blooming and stinking: Twitter, Facebook and Google in particular.

    I was particularly disturbed by the memos flying around Google among executives saying that their efforts to use their platform to help Hillary’s election were not effective enough, and that they need to start planning to do better next time. If the same memos were flying around oil company boardrooms, with plans on how to shut down gas stations strategically before elections to affect the vote, there would be enormous outrage and legislation, even if it was perfectly legal, like Google’s interventions.

    1. If you gut Section 230, Twitter, Facebook and Google will still have the right to criticize Trump, because of the First Amendment. You realize that, right?

      And if you hate those companies so much, why would you impose a huge additional liability on any startups wanting to compete with them?

      Those 3 companies are going to be around whether or not a repeal passes. But I wonder if this very comments section would survive.

    2. it was a benefit conferred by government

      It seems odd that the government saying, “We won’t punish you” is a “benefit conferred by government.”
