The E.U. Wants to Censor 'Terrorist Content' Online. What Could Go Wrong?

There's no room for error, and online platforms face huge fines, likely encouraging overly broad takedowns.


Klevo / Dreamstime.com

Social media platforms continue to struggle with the unenviable balancing act that pits free expression against content moderation. The European Union may soon make this endeavor all the more fraught with its proposal to deputize service providers as censors of terrorist content.

The plan, which was unveiled last year, would obligate "hosting service providers" like Facebook and Twitter to remove, within one hour, any "information which is used to incite and glorify the commission of terrorist offences, encouraging the contribution to and providing instructions for committing terrorist offences as well as promoting participation in terrorist groups." Providers that fail to comply would face severe financial penalties of up to 4 percent of their global annual turnover.

E.U. member states would be required to create or designate a "competent authority" to issue takedown requests to hosting service providers—in other words, to instruct websites to censor content. Additionally, hosting service providers would be expected to develop "proactive measures" to prevent terrorist content from being posted in the future.

Lawmakers hope to push this plan through before E.U. elections in May. The terrorist content law is being considered as the E.U. moves to finalize its Article 13 copyright licensing scheme. If passed, these E.U. laws would be a one-two punch against free expression on the open internet.

As many have pointed out, the censorship proposal is at odds with free speech. Much non-"terrorist" content may be caught in a too-broad net, and the bucket of prohibited speech could easily expand in tandem with state objectives in the future.

The bill creates incentives for platforms to be extra-censorious because the penalties for deviation are steep and the timeframe afforded (one hour) is scant. Better to be safe than sorry.

Consider the definition of "terrorist content" provided by the draft legislation. As Joan Barata of Stanford Law School's Center for Internet and Society points out, the rules would prohibit platforms from allowing any questionable posts to stay online, regardless of intent.

This is a slight but important deviation from the rules that currently govern terrorist content online in the E.U., established by Directive (EU) 2017/541. That language specifies that content is legally problematic only when combined with the "intent to incite" terrorist acts.

There are other problems with that definition, but it at least tries to separate content intended to educate or critique from things like targeted propaganda. The new proposed definition affords no such wiggle room, and posts that merely report on such activities could be snuffed out by censors.

"Counter-terrorist" content could easily be stifled as well. We have a case study in Germany. The nation passed an anti-"hate speech" bill called NetzDG aimed at censoring speech online. (This law is actually a model for the new E.U. rules, although it affords platforms a relatively-more-reasonable 24 hours to take down content.)

While the rules may have been aimed at curbing the spread of messages from groups like the nationalistic Alternative für Deutschland (AfD) party, content from people mocking the AfD has likewise been censored. From the point of view of a platform overseeing millions of posts a day, there is little difference. If they see a plausibly hateful (or "terroristic") post, they will pull it down.

Obviously, it would be impossible for even the best-funded online platform to hire enough moderators to manually inspect each post for E.U.-prohibited content within an hour. This means that the E.U.'s proposal is effectively a mandate for algorithmic pre-screening.

Enter the "hash database" tool the E.U. wants to beef up to accomplish their goals of censoring the 'net.

In 2016, leading platforms like Facebook, Twitter, and YouTube saw Big Brother's writing on the wall. Rather than waiting for regulation, they put together their own tool to satisfy political leaders' hunger for data suppression.

When platforms identify prohibited content, they create a "hash"—a kind of digital fingerprint—of that content and add it to a shared database. Other platforms can then use those hashes to identify content that violates their Terms of Service and block it or take it down.
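The mechanics can be sketched in a few lines of Python. This is a simplified illustration, not the actual industry system: the shared database described here reportedly relies on more sophisticated "perceptual" hashes that survive minor edits, whereas this sketch uses exact SHA-256 matching, under which changing a single byte defeats the check. All function and variable names are hypothetical.

```python
import hashlib

# Illustrative shared database of hashes of previously flagged content.
flagged_hashes = set()

def fingerprint(data: bytes) -> str:
    """Compute a digital fingerprint (SHA-256 hash) of uploaded content."""
    return hashlib.sha256(data).hexdigest()

def flag(data: bytes) -> None:
    """One platform identifies prohibited content and shares its hash."""
    flagged_hashes.add(fingerprint(data))

def is_flagged(data: bytes) -> bool:
    """Another platform checks a new upload against the shared database."""
    return fingerprint(data) in flagged_hashes

flag(b"example-propaganda-image-bytes")
print(is_flagged(b"example-propaganda-image-bytes"))   # exact copy is caught
print(is_flagged(b"example-propaganda-image-bytes!"))  # any alteration slips through
```

The sketch also makes the transparency problem concrete: the database stores only opaque fingerprints, so an outside observer cannot tell what was flagged or why.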

At least, this is the idea. Platforms report they have identified and tagged some 80,000 pieces of terrorist content as of 2018. Presumably, this has made their voluntary moderation of terrorist content more efficient. But as the Center for Democracy and Technology has pointed out, there is not much transparency into what is tagged and why. It is unclear the extent to which these efforts have actually helped to stem terrorism online. Was this ever a serious undertaking, or mostly good PR?

The E.U. seems to think the hash database is the genuine article, but that it does not go far enough.

Article 6 of the proposed rules requires that hosting service providers adopt "proactive measures"—like algorithmic filtering and blocking—to moderate content. This may end up compelling all platforms to use something like a hash database, or build their own tool that mimics it.

While the draft has some language about transparency and appeals, it is an afterthought. Given the interconnected nature of posting and content creation, many of us may feel the effects of these rules even though we are not E.U. citizens.

This brings us to the second major problem with the anti-terrorism plan: it would fundamentally change service providers' legal obligations regarding user-submitted content on their platform.

Platforms, and not merely users, would be liable for content. This is in stark contrast to current rules and norms which largely absolve platforms of preemptive legal responsibility for user content.

In the United States, internet providers are protected from legal liability for user-submitted content by a law called Section 230 of the Communications Decency Act. By freeing internet companies from the threat of financial ruin for the actions of their users, this law has allowed the internet ecosystem to flourish.

The E.U. has a similar law: Article 14 of the E-Commerce Directive. Article 14 shields hosting providers from liability for the content users store with them. This means that Facebook cannot be held legally responsible for failing to prevent illegal content on its platform—so long as the platform acts "expeditiously" to take down the post upon being notified.

The proposed terrorist content regulation states that it "should not affect the application of Article 14." But it is hard to see how it could not, given the proposal's mandate that platforms preemptively block and almost immediately take down unwanted but possibly legal content or face major penalties.

It's easy to see why the E.U.'s proposed terrorist content regulation is bad for free speech. It's also another blow to the global internet.

Yet again, busybodies across the pond are making political decisions that will have serious ramifications for internet users in the U.S. This censorious infrastructure can easily be turned on us. Given the costs of deviation, it almost certainly will be, since it may be easier to just apply the rules to all users rather than take the time to determine each post's provenance.

Let's hope this dumb idea just dies off. It would be at least one less flaming tire on the garbage heap of bad technology policy.



  1. Demands for instant responses and the possibility of huge fines will lead to a culture of quick, thoughtless removal of messages.

    So…. Just like Twitter in the US?

    1. Check out this SNL skit mocking Twitter

      Can I Play That? – SNL


      1. That is actually pretty good satire right there.

        How did that make it on Saturday Night Live?

        1. Some leftists may see the absurdity of the SJW’s demands

          1. Hopefully the new regulations will be implemented in the States as well, so we can do a better job of cracking down on some of the inappropriate forms of “expression” that keep cropping up here at NYU and, I’m told, on other college campuses located in various regions of our great nation. See the documentation of America’s leading criminal “satire” case at:



  2. I’m not sure why they would want to use hashes for something as short as a tweet. At some point it has to be just as efficient to match the characters or words as it is to run a hash that is sufficiently large to avoid collisions in a space as large as Twitter or Facebook.

  3. Andrea O’Sullivan explains how this is likely to go horribly wrong.

    Yeah well, life’s a bitch sometimes.

  4. Socialism/Communism in action!!!

    They achieve orgasm by controlling the population with group think !!!

  5. So my first thought is that the best way to comply is to deplatform all socialists, lawyers, politicians, and liberals.

  6. Tougher penalties mean nobody will do bad things! (read: social media will ban literally anything questionable to cover their own asses).

  7. “Trust us”

  8. We all know how this will go down…

    Taking down radical islamic posts: It’s racism, put it back up!

    Take it down, and charge them for: Pro gun sentiment, pro free speech, anti mass immigration, etc.

    So good bye Yellow Vest protestors! See ya later AFD!

    Etc etc etc.

  9. Let’s hope this dumb idea just dies off.

    Or we could convince the Powers That Be it’s a terrorist idea and then *they* will kill it!


  11. Demands for instant responses and the possibility of huge fines will lead to a culture of quick, thoughtless removal of messages.

    Oh noes! I can’t imagine what this speculative scenario would look like!

  12. Pretty much every movement towards freedom was considered an act against the state.

  13. The way the internet is recognized is perfect for censorship and to violate our rights to free speech.

    Instead of being recognized as a public place, where we have public rights, each website free to the public is a private place where the owners are on the hook for what you and I say.

    I don’t want anyone else to censor what I say or face legal persecution. They all know who I am and if they think I’m breaking any laws, they can come at me with the Justice system and I’ll take my day in court.

    I’ll also share what they do on the internet, but websites would probably be told to censor that too.

    We don’t have liberty when others are held responsible for us.

  14. Are Enlightenment thinkers still too much for the EU? I mean Voltaire, Rousseau, and that lot need to be banned.

  15. It will be interesting to see how the UK changes in the next few years.

    Will they pull back from the brink and gradually learn to stand up straight again?

    Will they get worse in a distinctive British pattern?
