The Volokh Conspiracy

Mostly law professors | Sometimes contrarian | Often libertarian | Always independent

Free Speech

N.Y. AG Appeals, to Defend Law Mandating Posting of "Hateful Conduct" Policies by Social Media Platforms (Including Us)

Volokh v. James going to the Second Circuit.

As expected, the New York Attorney General is appealing the decision that preliminarily enjoined enforcement of the law. I'm glad to see that, because I expect the Second Circuit will affirm the District Court decision, and thus set a precedent that will be binding in the Second Circuit and likely quite influential in other circuits as well.

[* * *]

From Volokh v. James, decided [Feb. 14] by Judge Andrew L. Carter, Jr. (S.D.N.Y.):

"Speech that demeans on the basis of race, ethnicity, gender, religion, age, disability, or any other similar ground is hateful; but the proudest boast of our free speech jurisprudence is that we protect the freedom to express 'the thought that we hate.'" Matal v. Tam (2017).

With the well-intentioned goal of providing the public with clear policies and mechanisms to facilitate reporting hate speech on social media, the New York State legislature enacted N.Y. Gen. Bus. Law § 394-ccc ("the Hateful Conduct Law" or "the law"). Yet, the First Amendment protects from state regulation speech that may be deemed "hateful" and generally disfavors regulation of speech based on its content unless it is narrowly tailored to serve a compelling governmental interest. The Hateful Conduct Law both compels social media networks to speak about the contours of hate speech and chills the constitutionally protected speech of social media users, without articulating a compelling governmental interest or ensuring that the law is narrowly tailored to that goal. In the face of our national commitment to the free expression of speech, even where that speech is offensive or repugnant, Plaintiffs' motion for preliminary injunction, prohibiting enforcement of the law, is GRANTED….

The Hateful Conduct Law does not merely require that a social media network provide its users with a mechanism to complain about instances of "hateful conduct". The law also requires that a social media network must make a "policy" available on its website which details how the network will respond to a complaint of hateful content. In other words, the law requires that social media networks devise and implement a written policy—i.e., speech.

For this reason, the Hateful Conduct Law is analogous to the state-mandated notices that were found not to withstand constitutional muster by the Supreme Court and the Second Circuit: NIFLA and Evergreen. In NIFLA, the Supreme Court found that plaintiffs—crisis pregnancy centers opposing abortion—were likely to succeed on the merits of their First Amendment claim challenging a California law requiring them to disseminate notices stating the existence of family-planning services (including abortions and contraception). The Court emphasized that "[b]y compelling individuals to speak a particular message, such notices 'alte[r] the content of [their] speech.'" Likewise, in Evergreen, the Second Circuit held that a state-mandated disclosure requirement for crisis pregnancy centers impermissibly burdened the plaintiffs' First Amendment rights because it required them to "affirmatively espouse the government's position on a contested public issue…."

Similarly, the Hateful Conduct Law requires a social media network to endorse the state's message about "hateful conduct". To be in compliance with the law's requirements, a social media network must make a "concise policy readily available and accessible on their website and application" detailing how the network will "respond and address the reports of incidents of hateful conduct on their platform." N.Y. Gen. Bus. Law § 394-ccc(3). Implicit in this language is that each social media network's definition of "hateful conduct" must be at least as inclusive as the definition set forth in the law itself. In other words, the social media network's policy must define "hateful conduct" as conduct which tends to "vilify, humiliate, or incite violence" "on the basis of race, color, religion, ethnicity, national origin, disability, sex, sexual orientation, gender identity or gender expression." N.Y. Gen. Bus. Law § 394-ccc(1)(a). A social media network that devises its own definition of "hateful conduct" would risk being in violation of the law and thus subject to its enforcement provision….

Clearly, the law, at a minimum, compels Plaintiffs to speak about "hateful conduct". As Plaintiffs note, this compulsion is particularly onerous for Plaintiffs, whose websites have dedicated "pro-free speech purpose[s]", which likely attract users who are "opposed to censorship". Requiring Plaintiffs to endorse the state's definition of "hateful conduct" forces them to weigh in on the debate about the contours of hate speech when they may otherwise choose not to speak. In other words, the law "deprives Plaintiffs of their right to communicate freely on matters of public concern" without state coercion.

Additionally, Plaintiffs have an editorial right to keep certain information off their websites and to make decisions as to the sort of community they would like to foster on their platforms. It is well-established that a private entity has an ability to make "choices about whether, to what extent, and in what manner it will disseminate speech…" These choices constitute "editorial judgments" which are protected by the First Amendment. In Pacific Gas & Electric Co. v. Public Utilities Commission of California, the Supreme Court struck down a regulation that would have forced a utility company to include information about a third party in its billing envelopes because the regulation "require[d] appellant to use its property as a vehicle for spreading a message with which it disagrees."

Here, the Hateful Conduct Law requires social media networks to disseminate a message about the definition of "hateful conduct" or hate speech—a fraught and heavily debated topic today. Even though the Hateful Conduct Law ostensibly does not dictate what a social media website's response to a complaint must be and does not even require that the networks respond to any complaints or take down offensive material, the dissemination of a policy about "hateful conduct" forces Plaintiffs to publish a message with which they disagree. Thus, the Hateful Conduct Law places Plaintiffs in the incongruous position of stating that they promote an explicit "pro-free speech" ethos, but also requires them to enact a policy allowing users to complain about "hateful conduct" as defined by the state….

The policy disclosure at issue here does not constitute commercial speech [as to which compelled disclosures are more easily upheld] …. The law's requirement that Plaintiffs publish their policies explaining how they intend to respond to hateful content on their websites does not simply "propose a commercial transaction". Nor is the policy requirement "related solely to the economic interests of the speaker and its audience." Rather, the policy requirement compels a social media network to speak about the range of protected speech it will allow its users to engage (or not engage) in. Plaintiffs operate websites that are directly engaged in the proliferation of speech….

Because the Hateful Conduct Law regulates speech based on its content, the appropriate level of review is strict scrutiny. To satisfy strict scrutiny, a law must be "narrowly tailored to serve a compelling governmental interest." A statute is not narrowly tailored if "a less restrictive alternative would serve the Government's purpose."

Plaintiffs argue that limiting the free expression of protected speech is not a compelling state interest and that the law is not narrowly tailored. While Defendant concedes that the Hateful Conduct Law may not be able to withstand strict scrutiny, she maintains that the state has a compelling interest in preventing mass shootings, such as the one that took place in Buffalo.

Although preventing and reducing the instances of hate-fueled mass shootings is certainly a compelling governmental interest, the law is not narrowly tailored toward that end. Conduct that incites violence is not protected by the First Amendment, but this law goes far beyond banning such conduct. {For speech to incite violence, "there must be 'evidence or rational inference from the import of the language, that [the words in question] were intended to produce, and likely to produce, imminent' lawless action." The Hateful Conduct Law's ban on speech that incites violence is not limited to speech that is likely to produce imminent lawless action.}

While the OAG Investigative Report does make a link between misinformation on the internet and the radicalization of the Buffalo mass shooter, even if the law was truly aimed at reducing the instances of hate-fueled mass shootings, the law is not narrowly tailored toward reaching that goal. It is unclear what, if any, effect a mechanism that allows users to report hateful conduct on social media networks would have on reducing mass shootings, especially when the law does not even require that social media networks affirmatively respond to any complaints of "hateful conduct". In other words, it is hard to see how the law really changes the status quo—where some social media networks choose to identify and remove hateful content and others do not….

The court also concluded that the law was facially overbroad, as well as being unconstitutional as applied to Rumble, Locals, and me:

As the Court has already discussed, the law is clearly aimed at regulating speech. Social media websites are publishers and curators of speech, and their users are engaged in speech by writing, posting, and creating content. Although the law ostensibly is aimed at social media networks, it fundamentally implicates the speech of the networks' users by mandating a policy and mechanism by which users can complain about other users' protected speech.

Moreover, the Hateful Conduct Law is a content-based regulation. The law requires that social media networks develop policies and procedures with respect to hate speech (or "hateful conduct" as it is recharacterized by Defendant). As discussed, the First Amendment protects individuals' right to engage in hate speech, and the state cannot try to inhibit that right, no matter how unseemly or offensive that speech may be to the general public or the state. Thus, the Hateful Conduct Law's targeting of speech that "vilifi[es]" or "humili[ates]" a group or individual based on their "race, color, religion, ethnicity, national origin, disability, sex, sexual orientation, gender identity or gender expression", N.Y. Gen. Bus. Law § 394-ccc(1)(a), clearly implicates the protected speech of social media users.

This could have a profound chilling effect on social media users and their protected freedom of expression. Even though the law does not require social media networks to remove "hateful conduct" from their websites and does not impose liability on users for engaging in "hateful conduct", the state's targeting and singling out of this type of speech for special measures certainly could make social media users wary about the types of speech they feel free to engage in without facing consequences from the state. This potential wariness is bolstered by the actual title of the law— "Social media networks; hateful conduct prohibited" —which strongly suggests that the law is really aimed at reducing, or perhaps even penalizing people who engage in, hate speech online. As Plaintiffs noted during oral argument, one can easily imagine the concern that would arise if the government required social media networks to maintain policies and complaint mechanisms for anti-American or pro-American speech. Moreover, social media users often gravitate to certain websites based on the kind of community and content that is fostered on that particular website. Some social media websites—including Plaintiffs'—intentionally foster a "pro-free speech" community and ethos that may become less appealing to users who intentionally seek out spaces where they feel like they can express themselves freely.

The potential chilling effect to social media users is exacerbated by the indefiniteness of some of the Hateful Conduct Law's key terms. It is not clear what terms like "vilify" and "humiliate" mean for the purposes of the law. While it is true that there are readily accessible dictionary definitions of those words, the law does not define what type of "conduct" or "speech" could be encapsulated by them. For example, could a post using the hashtag "BlackLivesMatter" or "BlueLivesMatter" be considered "hateful conduct" under the law? Likewise, could social media posts expressing anti-American views be considered conduct that humiliates or vilifies a group based on national origin? It is not clear from the face of the text, and thus the law does not put social media users on notice of what kinds of speech or content are now the target of government regulation.

Accordingly, because the Hateful Conduct Law appears to "reach[…] a substantial amount of constitutionally protected conduct", the Court finds that Plaintiffs have demonstrated a likelihood of success on their facial challenges under the First Amendment.

The court disagreed, however, with our argument that the law violated 47 U.S.C. § 230:

The Communications Decency Act provides that "[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." … [T]he Hateful Conduct Law shows that Plaintiffs' argument is without merit. The law imposes liability on social media networks for failing to provide a mechanism for users to complain of "hateful conduct" and for failure to disclose their policy on how they will respond to complaints. The law does not impose liability on social media networks for failing to respond to an incident of "hateful conduct", nor does it impose liability on the network for its users' own "hateful conduct". The law does not even require that social media networks remove instances of "hateful conduct" from their websites. Therefore, the Hateful Conduct Law does not impose liability on Plaintiffs as publishers in contravention of the Communications Decency Act.

Many thanks to FIRE—and in particular Darpana Sheth, Daniel Ortner, and Jay Diaz—as well as local counsel Barry Covert (of Lipsitz Green Scime Cambria LLP) for representing me in this case.

UPDATE: Jonathan Turley comments on the case.