There's been ample cheering over internet intermediary company Cloudflare canceling services to the controversial 8chan heir Kiwi Farms. If history is any indication, however, Cloudflare's decision will do little to stop online hate and harassment. Meanwhile, it moves us yet another step away from the sort of neutrality that has typically guided internet infrastructure companies (that is, providers of services like web hosting, cybersecurity, and newsletter delivery). And the further we stray from this neutrality, the worse the consequences for all sorts of online speech and organizing.
Cloudflare is a private business, and perfectly free to drop services to any entity it likes. But whether it should—and whether it made the right call with Kiwi Farms—is up for debate.
Until recently, Cloudflare—a popular provider of all sorts of back-end services that keep websites running smoothly—provided security services for Kiwi Farms, an internet forum that The Guardian calls "the worst place on the web." In effect, Cloudflare helped Kiwi Farms avoid the kind of distributed denial of service (DDoS) attacks that could take the platform down.
Kiwi Farms was founded by a former 8chan administrator and known as a space for people to plot organized harassment campaigns against disfavored groups or individuals. Recently, "farmers" targeted Canadian activist and Twitch streamer Clara "Keffals" Sorrenti. They coordinated takedown requests for her Twitch account, doxxed her, and conspired to get a SWAT team sent to her house, among other dastardly things.
In response, Sorrenti—who was arrested after someone called police pretending to be her and threatening violence—led a campaign to get Cloudflare to stop servicing Kiwi Farms.
Initially, Cloudflare resisted. Its founder and CEO Matthew Prince—who has described himself as "almost a free-speech absolutist"—and Head of Public Policy Alissa Starzak wrote an August 31 blog post explaining Cloudflare's content moderation policies and why they believe "that voluntarily terminating access to services that protect against cyberattack is not the correct approach."
"Some argue that we should terminate these services to content we find reprehensible so that others can launch attacks to knock it offline," noted Prince and Starzak. "That is the equivalent argument in the physical world that the fire department shouldn't respond to fires in the homes of people who do not possess sufficient moral character. Both in the physical world and online, that is a dangerous precedent, and one that is over the long term most likely to disproportionately harm vulnerable and marginalized communities."
Cloudflare hasn't always acted in accordance with its professed principle against terminating services. In 2017, it canceled the account of the neo-Nazi site Daily Stormer. And in 2019, it canceled services for the fringe-right forum 8chan.
The Daily Stormer popped back up elsewhere. So did 8chan. Which brings us to an uncomfortable truth about the global internet era.
"If the new 8chan met the fate of the old 8chan, what's to stop a new Kiwi Farms from cropping up in short order?" as Freddie deBoer put it. "It turns out that extremism isn't so easily stamped out in a digitally connected world; you can kill a site, maybe, but until you kill the human attitudes that such a site reveals, you have never actually solved the problem."
In other words, the problem isn't the expression of extremist views, it's the extremist views. Cutting out one digital avenue for offensive speech or organized harassment does nothing about the underlying sentiments, and may only harden people in their extremist viewpoints. Feeling marginalized and attacked is a powerful force for building community and solidarity. (See, e.g., former President Donald Trump.)
To their credit, Cloudflare executives seem to recognize this, as well as the danger in starting to choose customers based on those customers' viewpoints.
Is Cloudflare Like a Phone Company?
After terminating services for 8chan and the Daily Stormer, "we saw a dramatic increase in authoritarian regimes attempting to have us terminate security services for human rights organizations — often citing the language from our own justification back to us," write Prince and Starzak in their August 31 blog post.
These past experiences led Cloudflare executives to conclude "that the power to terminate security services for the sites was not a power Cloudflare should hold," write Prince and Starzak. "Not because the content of those sites wasn't abhorrent — it was — but because security services most closely resemble Internet utilities."
"Just as the telephone company doesn't terminate your line if you say awful, racist, bigoted things, we have concluded in consultation with politicians, policy makers, and experts that turning off security services because we think what you publish is despicable is the wrong policy," they added.
This seems like a prudent policy and the one most protective of speech broadly—including, yes, neo-Nazi and troll speech but also all the speech that most people would recognize as desirable and even vital.
(Notably, Prince and Starzak write that they will "terminate security services for content which is illegal in the United States," including "content subject to the Fight Online Sex Trafficking Act (FOSTA). But, otherwise, we believe that cyberattacks are something that everyone should be free of." FOSTA carved out an exception to Section 230's liability protections for content related to prostitution. What a crazy world we live in where platforms promoting consensual adult sex work are essentially barred by law from being protected from cyberattacks…)
Kiwi Farms: An 'Immediate Threat to Human Life'?
But does Kiwi Farms present an exceptional case? Cloudflare's CEO seems to think so.
On September 3, Cloudflare decided to terminate service to Kiwi Farms. "We have blocked Kiwifarms. Visitors to any of the Kiwifarms sites that use any of Cloudflare's services will see a Cloudflare block page and a link to this post," wrote Prince in another blog post.
It does not seem to be a decision that the company took lightly. "This is an extraordinary decision for us to make and, given Cloudflare's role as an Internet infrastructure provider, a dangerous one that we are not comfortable with," Prince wrote. "However, the rhetoric on the Kiwifarms site and specific, targeted threats have escalated over the last 48 hours to the point that we believe there is an unprecedented emergency and immediate threat to human life unlike we have previously seen from Kiwifarms or any other customer before."
Prince insisted that it wasn't merely "revolting content" that had prompted the company to act and that the decision did not come in response to the pressure campaign led by Sorrenti.
"While we believe that in every other situation we have faced — including the Daily Stormer and 8chan — it would have been appropriate as an infrastructure provider for us to wait for legal process, in this case the imminent and emergency threat to human life which continues to escalate causes us to take this action," wrote Prince.
"Hard cases make bad law," he added. "This is a hard case and we would caution anyone from seeing it as setting precedent."
Setting a Precedent?
It's reassuring to see Cloudflare take seriously the larger implications of cutting off service to Kiwi Farms and to see its CEO say this is not some new norm for the company. But even if Prince doesn't want or intend for it to be precedent-setting, that may not be something in his control.
Each time an internet infrastructure company does this, it becomes easier for that company and others like it to justify doing likewise in the future. (Already, the Internet Archive has disappeared Kiwi Farms from its archives.) And it becomes easier for people angry at any platform's existence to demand that these companies follow suit.
Cloudflare and other private actors are well within their rights to terminate services to web platforms they find offensive. It isn't illegal "censorship," a First Amendment violation, or an antitrust violation (all claims that tend to get made whenever a tech company gives a controversial figure or entity the boot). Nor is it a situation in which Cloudflare is running afoul of Section 230.
When Cloudflare canceled services to 8chan, my colleague Scott Shackford wrote that "refusing to associate itself with a site that hosts messages it finds offensive isn't much different from a baker or T-shirt maker refusing customers that ask them to make cakes or shirts that contain messages that they dislike." Offensive sites may find others to host and service them, just as people may go elsewhere for cakes and T-shirts. But "the point here," writes Shackford, "is that every company whose business model operates on the transmission of messages has the ability to decide what its limits are—if any."
That's not wrong. On one level, Cloudflare's decision is a sign that free markets and free association are thriving. But it also brings up more fundamental questions about how a company like Cloudflare should operate.
Is it more like a cake shop, or is it more like a phone company?
Mandating that it act like a phone company would be deeply problematic. But an internet in which infrastructure companies choose to operate more like the phone company is certainly a less risky one for free speech and political organizing broadly. Once we start holding companies like Cloudflare responsible for the content of every single site they service, that path seems unlikely to end at the worst sites on the internet.