The Volokh Conspiracy

Mostly law professors | Sometimes contrarian | Often libertarian | Always independent


First thoughts on the section 230 executive order

It's all about demanding transparency from the powerful


For all the passion it has unleashed, President Trump's executive order on section 230 of the Communications Decency Act is pretty modest in impact.  It doesn't do anything to undermine the part of section 230 that protects social media from liability for the things that their users say. That's section 230(c)(1), and the order practically ignores it.

Instead, the order is all about section 230(c)(2), which protects platforms from liability when they remove or restrict certain content: "No provider or user of an interactive computer service shall be held liable on account of … any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable."

This makes some sense in terms of the President's grievance.  He isn't objecting to Twitter's willingness to give a platform to people he disagrees with.  He objects to Twitter's decision to cordon off his speech with a fact-check warning, as well as all the other occasions on which Twitter and other social media platforms have taken action against conservative speech. So it makes sense for him to focus on the provision that seems to immunize biased and pretextual decisions to downgrade viewpoints unpopular in the Valley.

(I note here that the existence of a liberal bias in the application of social media content moderation is heavily contested, especially by commentators on the left. They point out, correctly, that the evidence of a left-leaning bias is anecdotal and subjective. Of course the same could be said of left-leaning bias in media outlets like the Washington Post or the New York Times. I'm friends with many reporters who deny such a bias exists. Yet most readers of these and other traditional media recognize that there is bias at work there: rarely in reporting the facts, but often in deciding which stories are newsworthy, how the facts are presented, or how past events are summarized. If you are sure there's no bias at work in the mainstream press, then I can't persuade you that the same dynamic is at work on social media's content moderation teams.  But if you have seen even a glimmer of liberal bias in the New York Times, you might ask yourself why there would be less of it in the decisions of Silicon Valley's content police, whose decisions are often made in secret by unaccountable young people who have not been inculcated in a journalistic ethic of objectivity.)

What's interesting and useful in the order's focus on content derogation is that it addresses precisely the claim that anticonservative bias isn't real. For it is aimed at bringing speech suppression decisions into the light, where we can all evaluate them.

In fact, that's pretty much all it's aimed at.  The order really only has two and a half substantive provisions, and they're all designed to increase the transparency of takedown decisions.

The first provision tells NTIA (the executive branch's liaison to the FCC) to suggest a rulemaking to the FCC. The purpose of the rule is to spell out what it means for the tech giants to carry out their takedown policies "in good faith." The order makes clear the President's view that takedowns are not taken "in good faith" if they are "deceptive, pretextual, or inconsistent with a provider's terms of service" or if they are "the result of inadequate notice, the product of unreasoned explanation, or [undertaken] without a meaningful opportunity to be heard." This is not a Fairness Doctrine for the internet; it doesn't mandate that social media show balance in their moderation policies. It is closer to a Due Process Clause for the platforms.  They may not announce a neutral rule and then apply it pretextually. And the platforms can't ignore the speech interests of their users by refusing to give users even notice and an opportunity to be heard when their speech is suppressed.

The second substantive provision is similar. It asks the FTC, which has a century of practice disciplining the deceptive and unfair practices of private companies, to examine social media takedown decisions through that lens.  The FTC is encouraged (as an independent agency it can't be told) to determine whether entities relying on section 230 "restrict speech in ways that do not align with those entities' public representations about those practices."

(The remaining provision is an exercise of the President's sweeping power to impose conditions on federal contracting. It tells federal agencies to take into account the "viewpoint-based speech restrictions imposed by each online platform" in deciding whether the platform is an "appropriate" place for the government to post its own speech. It's hard to argue with that provision in the abstract. Federal agencies have no business advertising on, say, Pornhub. In application, of course, there are plenty of improper or unconstitutional ways the policy could play out. But as a vehicle for government censorship it lacks teeth; one doubts that the business side of these companies cares how many federal agencies maintain their own Facebook pages or Twitter accounts. And in any event, we'll have time to evaluate this sidecar provision when it is actually applied.)

That's it.  The order calls on social media platforms to explain their speech suppression policies and then to apply them honestly. It asks them to provide notice, a fair hearing, and an explanation to users who think they've been treated unfairly or worse by particular moderators.

I've had many conversations with participants in the debate over the risks arising from social media's sudden control of what ordinary Americans (or Brazilians or Germans) can say to their friends and neighbors about the issues of the day. That is a remarkable and troubling development for those of us who hoped the internet would bring a flowering of  views free from the intermediation of traditional sources. But you don't have to be a conservative to worry about how this unprecedented power could be abused.

In another context, I have offered a rule of thumb for evaluating new technology: You don't really know how evil a technology can be until the engineers who depend on it for employment begin to fear for their jobs.  Today, social media's power is treated by the companies themselves as a modest side benefit of their astounding rise to riches; they can stamp out views they hate as a side gig while tending to the real business of extending their reach and revenue. But every one of us should wonder, "How will they use that power when the ride ends and their jobs are at risk?" And, more to the point, "How will we discover what they've done?"

Such questions explain why even those who don't lean to the right think that the companies' control of our discourse needs more scrutiny. There are no easy ways to discipline the power of Big Tech in a country that has a First Amendment, but the answer most observers offer is more transparency.

We need, in short, to know more about when and how and why the big platforms decide to suppress our speech.

This executive order is a good first step toward finding out.