Free Speech

Mark Zuckerberg Speaks Out Against Removing Holocaust-Denying Speech

Indeed, Facebook shouldn't set itself up as the arbiter of historical truth (or scientific truth or moral beliefs) -- and doing that even as to Holocaust denial would just yield pressure for much more.

The Volokh Conspiracy

Facebook's Mark Zuckerberg has been getting flak the last couple of days because of his arguments against removing Holocaust-denying speech from Facebook. He was asked in the interview why Facebook wouldn't take down conspiracy theory sites, such as those that say the Sandy Hook shooting didn't happen, and he brought up Holocaust denial as an analogy:

Okay. "Sandy Hook didn't happen" is not a debate. It is false. You can't just take that down?

I agree that it is false.

I also think that going to someone who is a victim of Sandy Hook and telling them, "Hey, no, you're a liar" — that is harassment, and we actually will take that down. But overall, let's take this whole closer to home…

I'm Jewish, and there's a set of people who deny that the Holocaust happened.

I find that deeply offensive. But at the end of the day, I don't believe that our platform should take that down because I think there are things that different people get wrong. I don't think that they're intentionally getting it wrong, but I think-

In the case of the Holocaust deniers, they might be, but go ahead.

It's hard to impugn intent [he may have meant "impute intent" -EV] and to understand the intent. I just think, as abhorrent as some of those examples are, I think the reality is also that I get things wrong when I speak publicly. I'm sure you do. I'm sure a lot of leaders and public figures we respect do too, and I just don't think that it is the right thing to say, "We're going to take someone off the platform if they get things wrong, even multiple times." (Update: Mark has clarified these remarks here: "I personally find Holocaust denial deeply offensive, and I absolutely didn't intend to defend the intent of people who deny that.")

What we will do is we'll say, "Okay, you have your page, and if you're not trying to organize harm against someone, or attacking someone, then you can put up that content on your page, even if people might disagree with it or find it offensive." But that doesn't mean that we have a responsibility to make it widely distributed in News Feed. I think we, actually, to the contrary- ….

I think Zuckerberg had it largely right even before the "clarification":

[1.] My sense is that many people who repeat historical falsehoods—whether silly conspiracy theories about recent events, or broader historical nonsense—sincerely believe them.

Obviously, I can't point to statistical studies (nor can people on the other side), since one can't just do a survey asking, "When you deny the Holocaust, are you deliberately lying?" But based on my sense of human nature, people genuinely believe all sorts of bunk. And especially given that expressing such views will often yield massive social opprobrium, the people who say them are often true-believer fanatics, not rational liars.

True, some of the speakers might be deliberately lying for political reasons—e.g., to use even knowingly false Holocaust denial as a means of implying that Jews or Israelis have gotten undeserved sympathy or slack. And even many of those who sincerely believe the denial may be especially open to such beliefs because of their preexisting anti-Semitism. But I agree that it's hard to tell a speaker's intent, and some speakers may in fact be sincerely (even if unreasonably) getting this wrong rather than "intentionally getting it wrong."

Now of course one might argue that intent doesn't matter here, and certain speech should be removed from Facebook regardless of the speaker's intent. I'll turn to that shortly.

But many (not all, but many) calls to suppress "fake news," "hate speech," and the like stress that they're not trying to restrict sincere discussion or suppress honest mistakes, but are aimed at deliberate lies. (In American law, that sort of argument goes all the way back to the Sedition Act of 1798.) Indeed, that is often a way of suggesting that the proposed restrictions, government-imposed or otherwise, are modest and narrow—that they would leave it possible for anything to be discussed, so long as it's done without knowing falsehoods. I think Zuckerberg was right to point out that even Holocaust deniers (or Sandy Hook deniers or whoever else) are sometimes not intentionally lying, and that it's very hard for Facebook to sort the liars from the fools.

[2.] But why not have Facebook just remove Holocaust denial because it's wrong? After all, Facebook is not the government; it's not bound by the First Amendment.

Because of course it's not just about Holocaust denial. As we saw from the interview, we see the same calls for suppressing conspiracy theories about current events. We can equally easily see calls for suppressing speech that the influential view as "conspiracy theories," but that might prove to have at least some (and perhaps much) truth to it. Likewise for history: If Facebook censors Holocaust denial, many will ask: Why not speech that is perceived as denying the Ottoman World-War-I-era genocide of Armenians, or dissenting views about whether the killings were deliberate genocide? Leading historian Bernard Lewis was actually fined by a French court for arguing that the killings happened, but that—unlike with the Holocaust during World War II—they were not part of a deliberate campaign of extermination by the Turks.

Likewise, many will ask: What about speech that conveys what they see as falsehoods about white settlers' actions towards American Indians? Speech that conveys what they see as falsehoods about Poles' behavior during World War II? What about what they see as dangerous falsehoods about science, such as about the causes and social effects of homosexuality, or about how we should characterize people's desire to be treated as a gender that is different from their biological sex?

And what about speech that might have even more direct harmful effects? Many will ask: What about speech that spreads falsehoods about vaccination, or about what foods to eat or not to eat, in a way that can cause physical illness and death? Some condemn Holocaust denial because it may lead to more anti-Semitism and from there to physical harms to Jews—but bad health choices can of course cause physical harms, too. What about speech that they view as "global warming denialism"?

Facebook would then have to choose. It could come up with a long list of forbidden viewpoints about history, science, morality, and so on. Or it could treat Holocaust denial as somehow special—which of course many will perceive as treating Jews and the historic crimes against them as special—and that, I think, would only exacerbate the offense felt by groups that think that they (and the historical accounts that are understandably especially important to them) are not being treated equally.

Censorship breeds censorship envy, and that's true of private suppression by massively influential platforms such as Facebook as well as of governmental censorship. And censorship envy can lead not just to more censorship, but more feelings of insult and injury as a result of decisions not to censor. (For similar reasons, I don't think that such businesses should generally ban supposed "hate speech," but I think the problem is even more severe when they start to police historical claims and not just advocacy of behavior or slurs and insults.)

[3.] Now of course these judgments about what is true and what is false are commonplace when it comes to individual publications. The New York Times—or for that matter Reason or The Volokh Conspiracy—isn't going to evenhandedly publish submissions from everyone regardless of their views. Even sites that deliberately try to let a hundred flowers bloom (and not just in a Maoist sense) distinguish flowers from weeds. They might make errors in their editorial judgments, but their readers expect them to exercise some quality control; indeed, that is one reason why readers read them. Indeed, we expect publications not just to exclude outright falsehoods, but also a wide range of material that they view as insufficiently thoughtful, unclear, too one-sided (for those publications that specialize in more objective presentations), or in general written by people who don't know what they're talking about.

But I think Facebook, Twitter, and the like promote themselves not as publications, but as platforms—as places for the public to speak. They don't screen for quality or credentials, nor do people look to them (as to the pages, rather than the newsfeeds) as reliable sources or reliable screeners.

They also include vastly more speech than a newspaper does, on a much greater range of topics, in a much greater set of languages, dialects, and genres (e.g., parody, humor, sarcasm, and the like, often not obvious to outsiders). And they also often have a much larger share of their particular market than do typical news or opinion outlets (especially these days).

This doesn't make them legally obligated to allow speech they disapprove of, and I'm not saying they should be so legally obligated. But I do think that there are many more costs, both to public debate and to the companies themselves, from their setting themselves up as arbiters of what speech is historically false, or, as they would then be pressured to do, scientifically false or morally wrong. And on balance, I think, the costs—the costs of thus encouraging such massive and massively powerful economic organizations to also exercise power over political debate, historical discussion, scientific arguments, and more—much exceed the benefits.