The Volokh Conspiracy


Cloudflare Says Its Online Security Services Won't Be Canceled Based on a Site's Ideology

"Just as the telephone company doesn't terminate your line if you say awful, racist, bigoted things, we have concluded ... that turning off security services because we think what you publish is despicable is the wrong policy. To be clear, just because we did it in a limited set of cases before doesn't mean we were right when we did. Or that we will ever do it again."


Posted by Cloudflare yesterday; I think this is the right decision, in part for reasons I sketch in my Reverse Spider-Man Principle article (and see also this Twitter thread from Daphne Keller (Stanford)):

Giving everyone the ability to sign up for our services online also reflects our view that cyberattacks not only should not be used for silencing vulnerable groups, but are not the appropriate mechanism for addressing problematic content online. We believe cyberattacks, in any form, should be relegated to the dustbin of history.

The decision to provide security tools so widely has meant that we've had to think carefully about when, or if, we ever terminate access to those services. We recognized that we needed to think through what the effect of a termination would be, and whether there was any way to set standards that could be applied in a fair, transparent and non-discriminatory way, consistent with human rights principles.

This is true not just for the content where a complaint may be filed, but also for the precedent the takedown sets. Our conclusion — informed by all of the many conversations we have had and the thoughtful discussion in the broader community — is that voluntarily terminating access to services that protect against cyberattack is not the correct approach.

Avoiding Abuse of Power

Some argue that we should terminate these services for content we find reprehensible so that others can launch attacks to knock it offline. That is the equivalent of arguing, in the physical world, that the fire department shouldn't respond to fires in the homes of people who do not possess sufficient moral character. Both in the physical world and online, that is a dangerous precedent, and one that is over the long term most likely to disproportionately harm vulnerable and marginalized communities.

Today, more than 20 percent of the web uses Cloudflare's security services. When considering our policies, we need to be mindful of the impact we have and the precedent we set for the Internet as a whole. Terminating security services for content that our team personally feels is disgusting and immoral would be the popular choice. But, in the long term, such choices make it more difficult to protect content that supports oppressed and marginalized voices against attacks.

Refining our policy based on what we've learned

This isn't hypothetical. Thousands of times per day we receive calls that we terminate security services based on content that someone reports as offensive. Most of these don't make news. Most of the time these decisions don't conflict with our moral views. Yet two times in the past we decided to terminate content from our security services because we found it reprehensible. In 2017, we terminated the neo-Nazi troll site The Daily Stormer. And in 2019, we terminated the conspiracy theory forum 8chan.

In a deeply troubling response, after both terminations we saw a dramatic increase in authoritarian regimes attempting to have us terminate security services for human rights organizations — often citing the language from our own justification back to us.

Since those decisions, we have had significant discussions with policy makers worldwide. From those discussions we concluded that the power to terminate security services for the sites was not a power Cloudflare should hold. Not because the content of those sites wasn't abhorrent — it was — but because security services most closely resemble Internet utilities.

Just as the telephone company doesn't terminate your line if you say awful, racist, bigoted things, we have concluded in consultation with politicians, policy makers, and experts that turning off security services because we think what you publish is despicable is the wrong policy. To be clear, just because we did it in a limited set of cases before doesn't mean we were right when we did. Or that we will ever do it again….

Regulatory realities

Our policies also respond to regulatory realities. Internet content regulation laws passed over the last five years around the world have largely drawn a line between services that host content and those that provide security and conduit services. Even when these regulations impose obligations on platforms or hosts to moderate content, they exempt security and conduit services from playing the role of moderator without legal process. This is sensible regulation born of a thorough regulatory process.

Our policies follow this well-considered regulatory guidance. We prevent security services from being used by sanctioned organizations and individuals. We also terminate security services for content that is illegal in the United States, where Cloudflare is headquartered. This includes Child Sexual Abuse Material (CSAM) as well as content subject to the Fight Online Sex Trafficking Act (FOSTA). But otherwise, we believe that cyberattacks are something that everyone should be free of, even if we fundamentally disagree with the content.

Out of respect for the rule of law and due process, we follow legal process governing security services. We will restrict content in geographies where we have received legal orders to do so. For instance, if a court in a country prohibits access to certain content, then, following that court's order, we generally will restrict access to that content in that country. That will, in many cases, limit the ability to access the content within that country. However, we recognize that just because content is illegal in one jurisdiction does not mean it is illegal in another, so we narrowly tailor these restrictions to align with the jurisdiction of the court or legal authority.

While we follow legal process, we also believe that transparency is critically important. To that end, wherever these content restrictions are imposed, we attempt to link to the particular legal order that required the content be restricted. This transparency is necessary for people to participate in the legal and legislative process. We find it deeply troubling when ISPs comply with court orders by invisibly blackholing content, giving those who try to access it no idea of what legal regime prohibits it. Speech can be curtailed by law, but proper application of the rule of law requires whoever curtails it to be transparent about why they have done so.