Recent "hate speech" investigations in European countries have been spawned by remarks in a homily by a Spanish Cardinal who opposed "radical feminism," a hyperbolic hashtag tweeted by a U.K. diversity coordinator, a chant for fewer Moroccan immigrants to enter the Netherlands, comments from a reality TV star implying Scottish people have Ebola, a man who put a sign in his home window saying "Islam out of Britain," French activists calling for boycotts of Israeli products, an anti-Semitic tweet sent to a British politician, a Facebook post referring to refugees to Germany as "scum," and various other sorts of so-called "verbal radicalism" on social media.
One might consider any or all of these comments distasteful, but Americans (recent trends on college campuses notwithstanding) tend to appreciate that for a free-speech right to truly exist, we must keep narrow the categories of speech—true threats, slander, etc.—that are denied protection from government censorship and potential prosecution. Not so in European Union (E.U.) member countries, many of which have laws against any language that "insults," "offends," "degrades," "expresses contempt," or "incites hatred" based on certain protected traits like race, religion, or sexual orientation. As Nick Gillespie has put it, "hate speech" is like the secular equivalent of blasphemy.
On Monday, Věra Jourová, the E.U. Commissioner for Justice, Consumers and Gender Equality, gave a speech stressing the importance of such laws and calling for even more intense policing of so-called hate speech. (Just to be clear, by "hate speech" we are not talking about things like threats or criminal harassment.) "My top priority is to ensure that the Framework Decision on Combatting Racism and Xenophobia is correctly translated into the national criminal codes and enforced, so that perpetrators of online hate speech are duly punished," Jourová said.
The commissioner offered a characteristically European rationale for the imposition: only by government censorship of free expression can free expression flourish.
"In recent years, we have seen messages of extremism and intolerance spread around the globe like wildfire" and "we need to stand united against this growing phenomenon," said Jourová. "Our commitment is to deliver change so that people do not need to live in fear, and to ensure that the internet remains a place of free and democratic expression, where European values and laws are respected."
"The spread of illegal hate speech online not only distresses the people it targets," she continued, "it also affects those who speak up for freedom, tolerance and non-discrimination in our society. If left unattended, the fear of intimidation can keep opinion makers, journalists and citizens away from social media platforms."
It's easy to see how folks might buy Jourová's idea that allowing intolerant speech online "means a shrinking digital space for freedom of expression." We've all heard about public figures or controversial thinkers who were allegedly hounded off of social media by online criticism, with its harsh, vulgar, and sometimes violent tones. And what is gained by such uncivil opprobrium? By sanctioning not only violent threats and ongoing harassment but also speech that serves no purpose but to troll, denigrate, or spread bigotry, we can usher in a more welcoming environment for all sorts of ideas and speakers online…
Or so the thinking goes, anyway. But the fatal flaw in this conceit is pretending there's some bright line between desirable, pro-social speech and speech that merely incites offense, fear, or feelings of negativity.
Of course, many of us object on pure principle to censoring the latter forms of speech. But setting aside classical-liberal notions, there are still plenty of good arguments against E.U.-style speech policing. For one, it makes distinctions between legal and illegal speech based not only on what is being said but who is saying it and whom it's said to.
For instance, a few years ago Slate's William Saletan complained that countries were (in practice) criminalizing insults against Jews but not against Muslims. Now, a more common complaint is that speech critical of Islam, Islamic customs, or refugees from Muslim countries gets monitored and punished more than any other speech.
There's also the fact that officials can't possibly go after everyone who insults someone's religion on the internet, disparages Syrian migrants, espouses non-egalitarian views about the sexes, or expresses empathy for some hated group. Thus police and political elites tend to concentrate on those who are either the most visible (celebrities, opposition leaders) or deviate most from the intellectual status quo. The result is speech policing that leaves alone plenty of people who fly under the radar or direct their hate in the right direction, while denying protection to the sorts of ideas and speakers who need protection most.
Yes, allowing a "right to offend" may mean more vulgar and inflammatory online environments. But there are plenty of non-governmental and less draconian ways to address problems that arise from this than imprisoning people for saying dumb, mean, or unpopular things. Technological tools, business practices, and social shaming have all been known to work—and to work more effectively than police playing an endless, expensive game of whack-a-mole with online speech.
How could officials ever expect to put a dent in online intolerance through individual criminal prosecutions? I'm not sure that's actually their point—rather, high-profile and individual "hate speech" investigations are intended as a morality play put on by government to teach its desired values and ideologies.
Jourová more or less admitted as much, crowing that new European Commission initiatives seek "to step up" the spreading of "counter-narratives" that give "due space to the messages that oppose hate speech and respect our values." One way it's doing this is by issuing an IT code of conduct, agreed to in May by companies like Facebook, Twitter, and Google. You can find all sorts of details (and official justifications for it) here.
As Jourová explained Monday, the code "means that notifications for removal of illegal hate speech have to be assessed and relevant action has to be taken [by IT companies], in the majority of cases, in less than 24 hours." These policies must be "checked not only against the companies' terms of service but also against the law."
The commissioner insisted that free speech was alive and well in the E.U., and no one was denying "the right 'to offend, to shock or to disturb the State or any part of the population.'" Speech rights do not, however, "include the right to incite violence and hatred," Jourová said. "Speech inciting violence or hatred is illegal. It is a crime."
Yet spreading "hate" isn't like punching someone in the face. Hatred, unlike violence, is an entirely internal and subjective thing. Thus criminalizing the incitement of hatred necessarily involves banning or censoring speech merely because it winds up offending, shocking, or disturbing some individual or the state.
Jourová comes close to admitting this, too, stating that while "many cases of online hate speech, notably those inciting violence," will be easy for online companies to recognize and deal with, in other cases "it may be more difficult to decide whether a speech is illegal or not." This is the major issue with E.U.-country speech rules—how does one determine conclusively whether an off-color comment is merely uncivil/sexist/racist/whatever or whether it's criminally actionable?
Yet Jourová waves away the entire tension in one sentence, noting that business leaders already "make difficult legal compliance decisions" in many areas, "such as tax, accountancy or workers' rights cases," and asserting that "ensuring compliance with hate speech law is no different."