The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
An online speaker sharply criticizes a person or a business. The speaker has a huge, loyal audience, or the speaker's message strikes a chord with readers who help it go viral. As a result, some tiny fraction—but a large absolute number—of the people who read the message send death threats (or rape threats or the like) to the person being criticized. Can that justify an injunction against continued criticism by the speaker?
This, I think, is part of what's going on with the Barley House order and with the Eron Gjoni/Zoe Quinn/Gamergate case. The orders didn't just ban threats, or libel; rather, they banned all further speech (or at least all further social media speech) by the speakers about the people whom they had criticized, e.g.,
Defendants and all persons in active concert with the Defendants … [are barred from] publishing on social media platforms any statements, videos, or images concerning Plaintiffs, their employees, their related entities, and their patrons.
They were thus clearly unconstitutional under existing First Amendment law; even if preventing harm to reputation can justify injunctions against libel, it can't justify injunctions against all speech (which would include constitutionally protected opinion and accurate factual statements). But my guess is that the judge felt that such categorical bans were necessary in order to prevent a different kind of harm—repeated threats, or perhaps other kinds of misconduct (such as hack attacks).
Now traditional First Amendment law would view such situations through the lens of the incitement exception: If speech is (1) specifically intended to and (2) likely to (3) produce imminent lawless conduct among listeners—i.e., such conduct in the coming minutes, hours or perhaps a few days, as opposed to "at some indefinite future time"—then it is indeed unprotected. But there will rarely be any evidence to prove a specific intent to encourage people to send death threats or engage in other crimes. (Mere knowledge of such a risk isn't enough.) And that would be especially so as to intent to encourage people to act imminently. "Evil person X is planning on giving a speech in Los Angeles tonight; do what you can, readers, to stop that from happening" might qualify. "X is an evil person, because he's a fascist / he's an America-hater / he's corrupt / he tried to beat me up" would not.
As I noted in my Barley House post, consider NAACP v. Claiborne Hardware Co. (1982), where the organizers of a 1960s boycott of white-owned stores in Port Gibson, Miss., demanded that black customers stop shopping at those stores. The organizers stationed "store watchers" outside the stores to take down the names of black shoppers who were not complying with the boycott. Those names were then read aloud in local churches and printed in leaflets that were distributed to other black residents. Some of the noncomplying shoppers were targeted for criminal conduct for refusing to go along with the boycott. "The testimony concerning four incidents convincingly demonstrates that they occurred because the victims were ignoring the boycott. In two cases, shots were fired at a house; in a third, a brick was thrown through a windshield; in the fourth, a flower garden was damaged." "The evidence concerning four other incidents is less clear, but again it indicates that an unlawful form of discipline was applied to certain boycott violators"; these four included two beatings and another incident of shots fired into a house.
Yet the court held that these activities were protected by the First Amendment, despite the backdrop of violence and the attempt to use social ostracism to pressure black shoppers to forgo their legal rights to shop at white-owned stores. Though "petitioners admittedly sought to persuade others to join the boycott through social pressure and the 'threat' of social ostracism," the court held, "speech does not lose its protected character … simply because it may embarrass others or coerce them into action." Both financial liability for such speech and an injunction against the speech were unconstitutional, the court concluded.
But should this rule be changed? Threats against targets of criticism have indeed seemingly become more common as a result of the Internet. It's not just that Internet speakers have a broad audience—so do newspapers. Rather, it seems to me, there are several causes:
- Most importantly, online threats are a much easier form of criminal retaliation than pre-Internet actions—vandalism, physical attacks or even threatening phone calls. Such threats can be made very quickly, with no real planning, travel time or risk of personal physical retaliation by the target, and with only a tiny bit of research (finding the target's website, finding the target's Twitter handle in order to tweet a message with an @reference that the target is likely to see, and so on). Unlike threatening phone calls, they can easily be made anonymously, in a way that would take a lot of law enforcement effort to pierce. And people can quickly hear of others who are making threats and may feel relatively safe in piling on.
- Many online speakers have unusually loyal readers, with whom the speakers have connected more viscerally than the typical newspaper reporter, newspaper columnist or even TV host. There is thus a larger fraction of readers who are willing to lash out against the targets of the criticism.
- Many online speakers frame things in more emotionally arousing ways than the typical newspaper report does—or, if they don't (Gjoni's initial post, for instance, wasn't particularly outraged or vitriolic), some of their readers may redistribute the posts accompanied by emotionally arousing comments of their own. Such a tone may lead a few readers to be more likely to send threats; and recall that, to get a large volume of threats, all it takes is for a tiny percentage of the readers to react this way.
- And, of course, the greater volume of online speech increases the total number of incidents in which this can happen.
Should this lead to greater latitude for injunctions against speech that seems to be prompting threats or other misconduct?
I don't think so, and here's why.
1. To begin with, much legitimate, important criticism can lead a few readers to misbehave. Whatever you may think of the net neutrality debates, for instance, surely criticism of Ajit Pai and other backers of the repeal of net neutrality is constitutionally protected. Can that change simply because Pai has apparently gotten death threats, as has Rep. John Katko (R-N.Y.)? Can criticism of the hunter who killed Cecil the lion be suppressed because the hunter had been getting death threats? Surely not, even if some such threats have happened, and it's clear that follow-on criticism is likely to lead to more. Should harsh criticism of the police be suppressed if there is evidence that some people who heard it were energized to shoot at police officers as a result? Again, that can't be so.
But that just shows, it seems to me, that speech can't be suppressed just because it foreseeably leads to misconduct by some listeners, especially very cheap (for them) misconduct such as electronically sending a threat. Virtually any subject that arouses some people's emotions—strikes, crime, police abuse, net neutrality, animal rights, abortion and much more—can yield some such threats.
2. Nor can I see a principled, administrable way of distinguishing really valuable criticism that we tolerate despite the risk that some will act badly after hearing it from criticism that it's okay to suppress because of that risk. (I set aside the existing First Amendment exceptions, such as for defamation—knowingly, recklessly or sometimes negligently false statements of fact that injure a person's reputation—or for intentional incitement of imminent crime.)
True, in ordinary life we can distinguish credible, thoughtful critics from buffoons and ranters. But First Amendment law can't distinguish the speech of serious people from the speech of fools, and say that opinions (or accurate factual statements) uttered by one class are protected while, uttered by the other class, they are unprotected. Nor can judges be trusted, I think, to distinguish "fair" from "unfair" criticism, or righteous indignation from unjustified vitriol. See, e.g., Terminiello v. Chicago (1949), Cohen v. California (1971) and many other such cases, which I think are quite correct in holding that speech that appeals to emotions, even in harsh, vulgar or hateful ways, is fully protected.
3. Finally, say that we did have a rule that, for instance, a business (such as Barley House) can get an injunction against criticism when the criticism has supposedly led to anonymous death threats. And say a business is targeted by a wave of legitimate condemnation, prompted by an allegation—even an accurate allegation—that the business's employees did racist things, or groped female patrons, or mistreated animals, or what have you. And say that this condemnation has not led to threats, but the business knows that, in order to get an injunction against the condemnation, it has to show such threats.
What would stop the business from clandestinely posting the anonymous threats itself, in order to get an excuse to seek the injunction? (I'm not at all saying that the Barley House deliberately did this; I'm just pointing out the possible consequences for the future of allowing the injunction on this theory in the Barley House case.) After all, the threats are anonymous. The business is seeking an emergency injunction in a civil case, where there obviously won't be enough time to investigate whether the threats were actually made or were just planted. Even if the business calls the police, the police will often lack the resources to quickly investigate anonymous emails that might look as though they've been sent from other states or even other countries. The defendant speakers will often lack the money or inclination to pay tens or hundreds of thousands of dollars to lawyers who can conduct such an investigation.
The business's owners will have a huge incentive to cheat on this score, since the criticism might be costing them a huge amount of revenue. And they will have very little reason to fear being found out.
Now you might think that worrying about this sort of chicanery is far-fetched. But would you have thought it likely that people would forge court orders in order to get Google to deindex Web pages that criticize them? Well, they did, to the tune of more than 65 such orders. How about suing fake defendants in order to get such a court order? One company seems to have been running a business doing this for its clients (possibly without its clients' knowledge); it appears that they filed at least 25 such suits.
How about enlisting people who will falsely claim that they are the authors of libelous posts, in order to get a court order adjudging the posts to be libelous? About 20 cases that I've seen fit this pattern. Plus there are libel lawsuits aimed at deindexing a Web page containing a newspaper article, but which are brought not against the newspaper (which would presumably defend the lawsuit) but against an anonymous commenter who posted on that page—with some indications that the comments were deliberately posted to provide an excuse for the lawsuit. (For more on all this, see these posts.)
So imagine your (or your allies') speech, critical of some politician, business owner, professor or anyone else, is suppressed because some readers have supposedly sent death threats because of it—or maybe it wasn't them, but your target (or someone your target hired) faked the threats; neither you nor the judge can really know. For all its limitations, the current speech-protective First Amendment rule avoids that.