Facebook, Twitter, Google, and Microsoft have agreed to the European Union's Code of Conduct on "illegal hate speech," designed to ensure "that online platforms do not offer opportunities for illegal online hate speech to spread virally." The code, while legally non-binding, commits these tech companies to extensive review-and-removal requirements for any online content reported as hate speech. It stems from the March 22 terrorist attacks in Brussels, after which the E.U. Justice and Home Affairs Council declared that it would work with tech companies "to counter terrorist propaganda" and develop a "code of conduct against hate speech online" by June.
The document defines hate speech broadly: "all conduct publicly inciting to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin." Under the code, "online intermediaries and social media platforms" must have in place "clear and effective processes to review notifications regarding illegal hate speech on their services," review "the majority" of notifications within 24 hours, and remove or disable access to any content determined to be illegal hate speech.
The companies also agree to post community rules or guidelines "clarifying that they prohibit the promotion of incitement to violence and hateful conduct," have regular powwows with officials and law enforcement in E.U. member states, and report on the impact of their efforts "to the High Level Group on Combating Racism, Xenophobia and all forms of intolerance by the end of 2016."
Google's Public Policy and Government Relations Director, Lie Junius, said the company is "pleased to work with the [European Commission] to develop co- and self-regulatory approaches to fighting hate speech online."
And that is the silver lining here, from a libertarian perspective: at least Facebook, Google, et al. entered into this regulatory scheme semi-voluntarily, although "voluntary" is always a blurry concept when it comes to agreements with governing bodies. Maybe the whole code is just good PR for these companies; maybe it's a step toward a social-media-to-police pipeline for all manner of unpopular speech. We'll see.
In addition to simply agreeing to remove threatening or violence-inciting speech, the code stipulates that tech companies, "recognizing the value of independent counter speech against hateful rhetoric and prejudice," will also "aim to continue their work in identifying and promoting independent counter-narratives, new ideas and initiatives and supporting educational programs that encourage critical thinking."