Censorship

The E.U. Wants to Censor 'Terrorist Content' Online. What Could Go Wrong?

There's no room for error, and online platforms face huge fines, likely encouraging overly broad takedowns.

Klevo / Dreamstime.com

Social media platforms continue to struggle with the unenviable balancing act that pits free expression against content moderation. The European Union may soon make this endeavor all the more fraught with its proposal to deputize service providers as censors of terrorist content.

The plan, which was unveiled last year, would obligate "hosting service providers" like Facebook and Twitter to remove any "information which is used to incite and glorify the commission of terrorist offences, encouraging the contribution to and providing instructions for committing terrorist offences as well as promoting participation in terrorist groups" within one hour or face severe financial penalties of up to 4 percent of the provider's global annual turnover.

E.U. member states would be required to create or designate a "competent authority" to issue takedown requests to hosting service providers—in other words, to instruct websites to censor content. Additionally, hosting service providers would be expected to develop "proactive measures" to prevent terrorist content from being posted in the future.

Lawmakers hope to push this plan through before E.U. elections in May. The terrorist content law is being considered as the E.U. moves to finalize its Article 13 copyright licensing scheme. If passed, these E.U. laws would be a one-two punch against free expression on the open internet.

As many have pointed out, the censorship proposal is at odds with free speech. Much non-"terrorist" content may be caught in a too-broad net, and the bucket of prohibited speech could easily expand in tandem with state objectives in the future.

The bill creates incentives for platforms to be extra-censorious because the penalties for deviation are steep and the timeframe afforded (one hour) is scant. Better to be safe than sorry.

Consider the definition of "terrorist content" provided by the draft legislation. As Joan Barata of Stanford Law School's Center for Internet and Society points out, the rules would prohibit platforms from allowing any questionable posts to stay online, regardless of intent.

This is a slight but important deviation from the rules that currently regulate terrorist content online in the E.U., established by Directive 2017/541. That language specifies that content is legally problematic only when combined with the "intent to incite" terrorist acts (emphasis added).

There are other problems with that definition, but it at least tries to separate content intended to educate or critique from things like targeted propaganda. The new proposed definition affords no such wiggle room, and posts that merely report on such activities could be snuffed out by censors.

"Counter-terrorist" content could easily be stifled as well. We have a case study in Germany. The nation passed an anti-"hate speech" bill called NetzDG aimed at censoring speech online. (This law is actually a model for the new E.U. rules, although it affords platforms a relatively-more-reasonable 24 hours to take down content.)

While the rules may have been aimed at curbing the spread of messages from groups like the nationalistic Alternative für Deutschland (AfD) party, content from people mocking the AfD has likewise been censored. From the point of view of a platform overseeing millions of posts a day, there is little difference. If it sees a plausibly hateful (or "terroristic") post, it will pull it down.

Obviously, it would be impossible for even the best-funded online platform to hire enough moderators to manually inspect each post for E.U.-prohibited content within an hour. This means that the E.U.'s proposal is effectively a mandate for algorithmic pre-screening.

Enter the "hash database," a tool the E.U. wants to beef up to accomplish its goal of censoring the 'net.

In 2016, leading platforms like Facebook, Twitter, and YouTube saw Big Brother's writing on the wall. Rather than waiting for regulation, they tried to put together, on their own, a tool that would satisfy political leaders' hunger for data suppression.

When platforms identify prohibited content, they create a "hash"—like a digital fingerprint—of that content, which is added to a shared database. Other platforms can use those hashes to identify content that violates their Terms of Service and block it or take it down.
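To make the mechanics concrete, here is a minimal sketch of how such a database works, in Python. Everything in it is illustrative rather than taken from any platform's actual system: the names are invented, and real deployments reportedly rely on perceptual hashes that tolerate re-encoding and cropping, not the exact-match cryptographic hash used here.

```python
import hashlib

# A toy stand-in for the shared industry hash database. Real systems
# use perceptual hashes that survive re-encoding or cropping, not the
# exact-match SHA-256 below; this is for illustration only.
hash_database: set[str] = set()

def fingerprint(content: bytes) -> str:
    """Compute a "digital fingerprint" of a piece of content."""
    return hashlib.sha256(content).hexdigest()

def tag_as_prohibited(content: bytes) -> None:
    """One platform flags content and shares its hash with the database."""
    hash_database.add(fingerprint(content))

def matches_database(content: bytes) -> bool:
    """Any participating platform can check uploads against the shared hashes."""
    return fingerprint(content) in hash_database

# One platform tags a piece of content; another platform later sees an
# identical copy and can match it without reviewing it again.
tag_as_prohibited(b"example prohibited content")
print(matches_database(b"example prohibited content"))  # True
print(matches_database(b"something else entirely"))     # False
```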

At least, this is the idea. Platforms report having identified and tagged some 80,000 pieces of terrorist content as of 2018. Presumably, this has made their voluntary moderation of terrorist content more efficient. But as the Center for Democracy and Technology has pointed out, there is not much transparency into what is tagged and why, and it is unclear to what extent these efforts have actually helped stem terrorism online. Was this ever a serious undertaking, or mostly good PR?

The E.U. seems to think the hash database is the genuine article, but that it does not go far enough.

Article 6 of the proposed rules requires that hosting service providers adopt "proactive measures"—like algorithmic filtering and blocking—to moderate content. This may end up compelling all platforms to use something like a hash database, or to build their own tool that mimics it.
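To illustrate what a mandated "proactive measure" might look like in practice, here is a hedged sketch of an upload gate built on the same hash-matching idea as above. The handle_upload pipeline is hypothetical, not anything specified in the draft regulation; it simply shows why the incentives tilt toward blocking on any match.

```python
import hashlib

# Hypothetical set of fingerprints of known prohibited content,
# as in the hash-database sketch above (exact-match, illustrative only).
known_prohibited_hashes: set[str] = set()

def handle_upload(content: bytes) -> str:
    """A pre-screening gate of the kind Article 6 would effectively mandate.

    Every upload is checked against the database before it is ever
    published. With steep fines and a one-hour clock, the cheap default
    is to block on any match, with no human review before takedown.
    """
    if hashlib.sha256(content).hexdigest() in known_prohibited_hashes:
        return "blocked"    # never published; the user bears the error cost
    return "published"      # hand off to the normal posting flow
```

Note where the error costs land: a false positive silently blocks a lawful post, while a false negative exposes the platform to fines. That asymmetry is exactly what encourages over-removal.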

While the draft has some language about transparency and appeals, it reads as an afterthought. Given the interconnected nature of posting and content creation, many of us may feel the effects of these rules even though we are not E.U. citizens.

This brings us to the second major problem with the anti-terrorism plan: it would fundamentally change service providers' legal obligations regarding user-submitted content on their platform.

Platforms, and not merely users, would be liable for content. This is in stark contrast to current rules and norms, which largely absolve platforms of preemptive legal responsibility for user content.

In the United States, online platforms are protected from legal liability for user-submitted content by a law called Section 230 of the Communications Decency Act. By freeing internet companies from the threat of financial ruin for the actions of their users, this law has allowed the internet ecosystem to flourish.

The E.U. has a similar law: Article 14 of the E-Commerce Directive, which shields hosting providers from liability for content stored on behalf of their users. This means that Facebook cannot be held legally responsible for illegal content on its platform—so long as it acts "expeditiously" to take the post down upon being notified.

The proposed terrorist content regulation states that it "should not affect the application of Article 14." But it is hard to see how it could not, given the proposal's mandate that platforms preemptively block and almost immediately take down unwanted but possibly legal content or face major penalties.

It's easy to see why the E.U.'s proposed terrorist content regulation is bad for free speech. It's also another blow to the global internet.

Yet again, busybodies across the pond are making political decisions that will have serious ramifications for internet users in the U.S. This censorious infrastructure can easily be turned on us. Given the costs of deviation, it almost certainly will be: it may be easier to apply these rules to all users than to take the time to determine each post's provenance.

Let's hope this dumb idea just dies off. It would be at least one less flaming tire on the garbage heap of bad technology policy.