Twitter Implementing European-Style Hate Speech Bans

Will it stop toxic behavior or just encourage more demands for censorship?

Scott Shackford | 9.25.2018 2:15 PM

Jack Dorsey (Ron Sachs/SIPA/Newscom)

Twitter's leadership announced this morning that it is broadening its bans on "hateful" conduct to try to cut down on "dehumanizing" behavior.

The social media platform already bans (or attempts to ban, anyway) speech that targets an individual on the basis of race, sex, sexual orientation, and a host of other characteristics. Now it intends to crack down on broader, non-targeted speech that dehumanizes classes of people for these characteristics.

Here's how the company's blog post describes the new rules:

You may not dehumanize anyone based on membership in an identifiable group, as this speech can lead to offline harm.

Definitions:

Dehumanization: Language that treats others as less than human. Dehumanization can occur when others are denied of human qualities (animalistic dehumanization) or when others are denied of human nature (mechanistic dehumanization). Examples can include comparing groups to animals and viruses (animalistic), or reducing groups to their genitalia (mechanistic).

Identifiable group: Any group of people that can be distinguished by their shared characteristics such as their race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, serious disease, occupation, political beliefs, location, or social practices.

Directly under that rule, they ask for feedback. If you find this definition vague, you can let them know. They actually ask for examples of how this rule could be misapplied to punish "speech that contributes to a healthy conversation." Feel free to fill them in.

As a private platform, Twitter can decide that it does not want to make space for speech it finds unacceptable. Newspapers and other media outlets have often declined to run letters to the editor or otherwise provide platforms for speech that uses such "dehumanizing" language. It's their right to do so.

To the extent that there's a "but" here, it's about how toxic the political discussion on Twitter has already become. A large number of people actively try to get others banned for saying things they don't like, flopping and shrieking like pro soccer players at every piece of criticism in hopes of drawing a red card from the ref. Adding to the list of reasons Twitter will censor tweets and shut down accounts will surely only increase the volume of people shrieking at Twitter CEO Jack Dorsey, demanding that he and Twitter do something.

Also, while this new rule is a product of the creepily named Trust and Safety Council that Twitter organized in 2016, its language echoes the broad anti–hate speech laws of the European Union and United Kingdom. This morning Andrea O'Sullivan noted that the European Union is attempting to regulate what online companies permit and forbid. It's a lot harder to see what Twitter is doing as a voluntary reaction to consumer pressure when we know that there are additional governmental efforts to force it to censor users. And it won't just be ordinary citizens who use this rule to yell at Twitter and demand that it shut down speech they don't like. Politicians certainly will as well.

Both Twitter's blog post and Wired's coverage of the rule change point to the research of Susan Benesch of The Dangerous Speech Project as an inspiration for the new rule. Yet while one might think an organization that says certain types of speech are actually dangerous would be pro-censorship, that's not really what the group is about.

While The Dangerous Speech Project does say that "inhibiting" dangerous, dehumanizing speech is one way to prevent the spread of messages meant to encourage violence and hatred toward targeted groups, that's not what the group is actually encouraging. It says outright that efforts to fight "dangerous" speech "must not infringe upon freedom of speech since that is a fundamental right." It adds that "when people are prevented from expressing their grievances, they are less likely to resolve them peacefully and more likely to resort to violence."

The Dangerous Speech Project calls instead for engaging and countering bad speech with good speech. In fact, last year Benesch co-wrote an article specifically warning against online Twitter mobs that attempt to shame or retaliate against people in real life for the things that they've said, even when those things are full-on racist. When naming-and-shaming is used as a social tactic to suppress speech, she notes, it often ends up with the majority oppressing minorities. And besides, it doesn't really work:

Shaming is a familiar strategy for enforcing social norms, but online shaming often goes farther, reaching into a person's offline life to inflict punishment, such as losing a job. Tempting though it is, identifying and punishing people online should not become the primary method of regulating public discourse, since this can get out of hand in various ways. It can inflict too much pain, sometimes on people who are mistakenly identified—and in all cases it is unlikely to change the targets' minds favorably.

It's a little odd to see this group's work being used to justify suppressing people's tweets.


Scott Shackford is a policy research editor at Reason Foundation.

Topics: Twitter, Hate Speech, Censorship, Social Media, European Union, Free Speech, Technology
