Muscular Content Moderation Is a Really Bad Idea That's Impossible To Implement Well
Be afraid as more journalists and politicians start calling for stronger policing of online speech.
The calls for more and tougher social media content moderation—including lifetime bans against malefactors—keep getting louder and louder, especially from politicians. Here's a new story from The Atlantic's Alexis C. Madrigal, who recounts how social-media providers increasingly took to calling their services "platforms" to avoid responsibility for the content that users post while profiting off hate speech and other forms of reprehensible expression.
That's a very debatable history of social media, but let's leave that aside for the time being. Madrigal's conclusion is a good summary of what is fast becoming the new conventional wisdom among the smart set, including legislators. Put simply, the new consensus holds that no social media company should ever make money or grow its reach by allowing speech or content that we all agree is patently false, dangerous, or offensive.
Facebook drew on that sense of being "just a platform" after conservatives challenged what they saw as the company's liberal bias in mid-2016. Zuckerberg began to use—at least in public—the line that Facebook was "a platform for all ideas."
But that prompted many people to ask: What about awful, hateful ideas? Why, exactly, should Facebook host them, algorithmically serve them up, or lead users to groups filled with them?
These companies are continuing to make their platform arguments, but every day brings more conflicts that they seem unprepared to resolve. The platform defense used to shut down the why questions: Why should YouTube host conspiracy content? Why should Facebook host provably false information? Facebook, YouTube, and their kin keep trying to answer, "We're platforms!" But activists and legislators are now saying, "So what?" "I think they have proven—by not taking down something they know is false—that they were willing enablers of the Russian interference in our election," Nancy Pelosi said in the wake of the altered-video fracas. [Facebook refused to take down a video edited to make the Speaker of the House appear drunk.]
The corollary to maximalist content moderation is the assumption that it will be easy and simple to implement because, well, aren't these online types all super-geniuses? But implementation is proving to be extremely difficult, and frequently stupid, for all sorts of reasons and with all sorts of unintentionally comic results. Just yesterday, for instance, Twitter suspended the account of David Neiwert, a progressive journalist and the author of Alt-America: The Rise of the Radical Right in the Age of Trump, a critical study of the movement. His crime? Festooning his Twitter homepage with the book's cover image, which includes "Ku Klux Klan hoods atop a series of stars from the American flag." Better yet, Twitter froze the parody account The Tweet of God for the following message:
If gay people are a mistake, they're a mistake I've made hundreds of millions of times, which proves I'm incompetent and shouldn't be relied upon for anything.
— God (@TheTweetOfGod) June 11, 2019
Twitter said the tweet violated its rules against "hateful conduct."
What the fuck, Twitter.
Seriously.
What the fuck.
What the fuck is this. pic.twitter.com/NQwcRXFWg7
— God (@TheTweetOfGod) June 11, 2019
The account has since been restored, and it's worth noting that God is keeping the jokes coming:
https://twitter.com/TheTweetOfGod/status/1138833705345011712
Similar issues arise whenever social media platforms (that word!) try to create more robust versions of content moderation, which inevitably rely on automated processes that can't detect intent, much less irony or humor. Last week, for instance, YouTube announced that it was changing its terms of service and would now ban "videos alleging that a group is superior in order to justify discrimination, segregation or exclusion." As Robby Soave noted, among the first batch of people blocked by YouTube was former Reason intern Ford Fischer, whose channel News2Share documents and exposes extremism; a toy sketch after his tweet below shows how context-blind filtering produces exactly this kind of misfire.
Within minutes of @YouTube's announcement of a new purge it appears they caught my outlet, which documents activism and extremism, in the crossfire.
I was just notified my entire channel has been demonetized. I am a journalist whose work there is used in dozens of documentaries. pic.twitter.com/HscG2S4dWh
— Ford Fischer (@FordFischer) June 5, 2019
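To see why automated moderation keeps catching the reporter and the satirist along with the bigot, consider a deliberately crude sketch in Python. The real systems are far more sophisticated (machine-learning classifiers plus human review, not a literal blocklist), and the patterns and posts below are invented for illustration, but the failure mode is the same in kind: a filter keyed to the surface features of text has no access to intent.

```python
# A toy sketch (not any platform's actual system) of why pattern-based
# moderation misfires: it matches surface strings, not intent.

# Hypothetical blocklist, invented for this example.
BANNED_PATTERNS = ["ku klux klan", "gay people are a mistake"]

def naive_moderator(text: str) -> bool:
    """Flag a post if it contains any banned pattern, regardless of context."""
    lowered = text.lower()
    return any(pattern in lowered for pattern in BANNED_PATTERNS)

# Made-up posts echoing the cases in this article.
posts = [
    # Reporting on extremism trips the same wire as promoting it.
    "My new book documents how Ku Klux Klan imagery spread online.",
    # Satire reads, to a substring match, exactly like hate.
    "If gay people are a mistake, they're a mistake I've made millions of times.",
    # Actual hateful content, for contrast.
    "The Ku Klux Klan was right.",
]

for post in posts:
    verdict = "FLAGGED" if naive_moderator(post) else "ok"
    print(f"{verdict}: {post}")
# All three posts get flagged: the filter cannot tell the journalist or
# the satirist from the bigot, because intent is not in the string.
```

Swapping the blocklist for a statistical classifier softens the edges but doesn't change the category of error: documentation, satire, and advocacy all share vocabulary with the very thing they discuss.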
This way madness lies. But it's not simply an implementation problem. While the doctrinaire libertarian in me is quick to point out that Facebook, Twitter, et al. have every legal right to run their platforms however they want to, the free-speech enthusiast in me is equally quick to say not so fast. Contra The Atlantic's Madrigal, it's not always easy to define "awful, hateful ideas" or to agree on which videos peddle insane conspiracies and which merely present unconventional ideas. I suspect very few people really want Rep. Nancy Pelosi (D–Calif.)—or President Donald Trump, or any politician—to decide what is false and thus censorable.
But we shouldn't be so quick to allow private companies to take on that role either.
Is Facebook really a better place now that Alex Jones and Louis Farrakhan are gone? Probably, but given all the gray areas that come with defining who and what is ban-worthy (should, say, radical diet groups get bounced?), the superior solution is minimal content moderation, restricted to true threats and clear criminal behavior such as fraud, plus improved tools that allow users to tailor what they see and experience. Certainly, Facebook is better for leaving up the altered Nancy Pelosi video while calling attention to its doctored state, and for allowing a "deepfake" video of Mark Zuckerberg to remain on the platform (that word again).
We are plainly asking too much of social media right now, even as we blame it for problems that long predate its rise. Rather than trying to force it to respond to all our contradictory demands, it's far better for us, collectively and individually, to develop new forms of media literacy that allow us to separate the wheat from the chaff. Individual definitions will vary and many heated conversations will follow, which is exactly as it should be in a complex world that resists even our best-faith attempts to regiment it to our most deeply held beliefs.