The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Court Blocks N.Y. Law Mandating Posting of "Hateful Conduct" Policies by Social Media Platforms (Including Us)
From Volokh v. James, decided today by Judge Andrew L. Carter, Jr. (S.D.N.Y.):
"Speech that demeans on the basis of race, ethnicity, gender, religion, age, disability, or any other similar ground is hateful; but the proudest boast of our free speech jurisprudence is that we protect the freedom to express 'the thought that we hate.'" Matal v. Tam (2017).
With the well-intentioned goal of providing the public with clear policies and mechanisms to facilitate reporting hate speech on social media, the New York State legislature enacted N.Y. Gen. Bus. Law § 394-ccc ("the Hateful Conduct Law" or "the law"). Yet, the First Amendment protects from state regulation speech that may be deemed "hateful" and generally disfavors regulation of speech based on its content unless it is narrowly tailored to serve a compelling governmental interest. The Hateful Conduct Law both compels social media networks to speak about the contours of hate speech and chills the constitutionally protected speech of social media users, without articulating a compelling governmental interest or ensuring that the law is narrowly tailored to that goal. In the face of our national commitment to the free expression of speech, even where that speech is offensive or repugnant, Plaintiffs' motion for preliminary injunction, prohibiting enforcement of the law, is GRANTED….
The Hateful Conduct Law does not merely require that a social media network provide its users with a mechanism to complain about instances of "hateful conduct". The law also requires that a social media network must make a "policy" available on its website which details how the network will respond to a complaint of hateful content. In other words, the law requires that social media networks devise and implement a written policy—i.e., speech.
For this reason, the Hateful Conduct Law is analogous to the state-mandated notices that were found not to withstand constitutional muster by the Supreme Court and the Second Circuit: NIFLA and Evergreen. In NIFLA, the Supreme Court found that plaintiffs—crisis pregnancy centers opposing abortion—were likely to succeed on the merits of their First Amendment claim challenging a California law requiring them to disseminate notices stating the existence of family-planning services (including abortions and contraception). The Court emphasized that "[b]y compelling individuals to speak a particular message, such notices 'alte[r] the content of [their] speech.'" Likewise, in Evergreen, the Second Circuit held that a state-mandated disclosure requirement for crisis pregnancy centers impermissibly burdened the plaintiffs' First Amendment rights because it required them to "affirmatively espouse the government's position on a contested public issue…."
Similarly, the Hateful Conduct Law requires a social media network to endorse the state's message about "hateful conduct". To be in compliance with the law's requirements, a social media network must make a "concise policy readily available and accessible on their website and application" detailing how the network will "respond and address the reports of incidents of hateful conduct on their platform." N.Y. Gen. Bus. Law § 394-ccc(3). Implicit in this language is that each social media network's definition of "hateful conduct" must be at least as inclusive as the definition set forth in the law itself. In other words, the social media network's policy must define "hateful conduct" as conduct which tends to "vilify, humiliate, or incite violence" "on the basis of race, color, religion, ethnicity, national origin, disability, sex, sexual orientation, gender identity or gender expression." N.Y. Gen. Bus. Law § 394-ccc(1)(a). A social media network that devises its own definition of "hateful conduct" would risk being in violation of the law and thus subject to its enforcement provision….
Clearly, the law, at a minimum, compels Plaintiffs to speak about "hateful conduct". As Plaintiffs note, this compulsion is particularly onerous for Plaintiffs, whose websites have dedicated "pro-free speech purpose[s]", which likely attract users who are "opposed to censorship". Requiring Plaintiffs to endorse the state's definition of "hateful conduct" forces them to weigh in on the debate about the contours of hate speech when they may otherwise choose not to speak. In other words, the law "deprives Plaintiffs of their right to communicate freely on matters of public concern" without state coercion.
Additionally, Plaintiffs have an editorial right to keep certain information off their websites and to make decisions as to the sort of community they would like to foster on their platforms. It is well-established that a private entity has an ability to make "choices about whether, to what extent, and in what manner it will disseminate speech…" These choices constitute "editorial judgments" which are protected by the First Amendment. In Pacific Gas & Electric Co. v. Public Utilities Commission of California, the Supreme Court struck down a regulation that would have forced a utility company to include information about a third party in its billing envelopes because the regulation "require[d] appellant to use its property as a vehicle for spreading a message with which it disagrees."
Here, the Hateful Conduct Law requires social media networks to disseminate a message about the definition of "hateful conduct" or hate speech—a fraught and heavily debated topic today. Even though the Hateful Conduct Law ostensibly does not dictate what a social media website's response to a complaint must be and does not even require that the networks respond to any complaints or take down offensive material, the dissemination of a policy about "hateful conduct" forces Plaintiffs to publish a message with which they disagree. Thus, the Hateful Conduct Law places Plaintiffs in the incongruous position of stating that they promote an explicit "pro-free speech" ethos, but also requires them to enact a policy allowing users to complain about "hateful conduct" as defined by the state….
The policy disclosure at issue here does not constitute commercial speech [as to which compelled disclosures are more easily upheld] …. The law's requirement that Plaintiffs publish their policies explaining how they intend to respond to hateful content on their websites does not simply "propose a commercial transaction". Nor is the policy requirement "related solely to the economic interests of the speaker and its audience." Rather, the policy requirement compels a social media network to speak about the range of protected speech it will allow its users to engage (or not engage) in. Plaintiffs operate websites that are directly engaged in the proliferation of speech …..
Because the Hateful Conduct Law regulates speech based on its content, the appropriate level of review is strict scrutiny. To satisfy strict scrutiny, a law must be "narrowly tailored to serve a compelling governmental interest." A statute is not narrowly tailored if "a less restrictive alternative would serve the Government's purpose."
Plaintiffs argue that limiting the free expression of protected speech is not a compelling state interest and that the law is not narrowly tailored. While Defendant concedes that the Hateful Conduct Law may not be able to withstand strict scrutiny, she maintains that the state has a compelling interest in preventing mass shootings, such as the one that took place in Buffalo.
Although preventing and reducing the instances of hate-fueled mass shootings is certainly a compelling governmental interest, the law is not narrowly tailored toward that end. Conduct that incites violence is not protected by the First Amendment and may be banned, but this law goes far beyond that. {For speech to incite violence, "there must be 'evidence or rational inference from the import of the language, that [the words in question] were intended to produce, and likely to produce, imminent' lawless action." The Hateful Conduct Law's ban on speech that incites violence is not limited to speech that is likely to produce imminent lawless action.}
While the OAG Investigative Report does make a link between misinformation on the internet and the radicalization of the Buffalo mass shooter, even if the law was truly aimed at reducing the instances of hate-fueled mass shootings, the law is not narrowly tailored toward reaching that goal. It is unclear what, if any, effect a mechanism that allows users to report hateful conduct on social media networks would have on reducing mass shootings, especially when the law does not even require that social media networks affirmatively respond to any complaints of "hateful conduct". In other words, it is hard to see how the law really changes the status quo—where some social media networks choose to identify and remove hateful content and others do not….
The court also concluded that the law was facially overbroad, as well as being unconstitutional as applied to Rumble, Locals, and me:
As the Court has already discussed, the law is clearly aimed at regulating speech. Social media websites are publishers and curators of speech, and their users are engaged in speech by writing, posting, and creating content. Although the law ostensibly is aimed at social media networks, it fundamentally implicates the speech of the networks' users by mandating a policy and mechanism by which users can complain about other users' protected speech.
Moreover, the Hateful Conduct Law is a content-based regulation. The law requires that social media networks develop policies and procedures with respect to hate speech (or "hateful conduct" as it is recharacterized by Defendant). As discussed, the First Amendment protects individuals' right to engage in hate speech, and the state cannot try to inhibit that right, no matter how unseemly or offensive that speech may be to the general public or the state. Thus, the Hateful Conduct Law's targeting of speech that "vilif[ies]" or "humiliat[es]" a group or individual based on their "race, color, religion, ethnicity, national origin, disability, sex, sexual orientation, gender identity or gender expression", N.Y. Gen. Bus. Law § 394-ccc(1)(a), clearly implicates the protected speech of social media users.
This could have a profound chilling effect on social media users and their protected freedom of expression. Even though the law does not require social media networks to remove "hateful conduct" from their websites and does not impose liability on users for engaging in "hateful conduct", the state's targeting and singling out of this type of speech for special measures certainly could make social media users wary about the types of speech they feel free to engage in without facing consequences from the state. This potential wariness is bolstered by the actual title of the law— "Social media networks; hateful conduct prohibited" —which strongly suggests that the law is really aimed at reducing, or perhaps even penalizing people who engage in, hate speech online. As Plaintiffs noted during oral argument, one can easily imagine the concern that would arise if the government required social media networks to maintain policies and complaint mechanisms for anti-American or pro-American speech. Moreover, social media users often gravitate to certain websites based on the kind of community and content that is fostered on that particular website. Some social media websites—including Plaintiffs'—intentionally foster a "pro-free speech" community and ethos that may become less appealing to users who intentionally seek out spaces where they feel like they can express themselves freely.
The potential chilling effect to social media users is exacerbated by the indefiniteness of some of the Hateful Conduct Law's key terms. It is not clear what terms like "vilify" and "humiliate" mean for the purposes of the law. While it is true that there are readily accessible dictionary definitions of those words, the law does not define what type of "conduct" or "speech" could be encapsulated by them. For example, could a post using the hashtag "BlackLivesMatter" or "BlueLivesMatter" be considered "hateful conduct" under the law? Likewise, could social media posts expressing anti-American views be considered conduct that humiliates or vilifies a group based on national origin? It is not clear from the face of the text, and thus the law does not put social media users on notice of what kinds of speech or content are now the target of government regulation.
Accordingly, because the Hateful Conduct Law appears to "reach[…] a substantial amount of constitutionally protected conduct", the Court finds that Plaintiffs have demonstrated a likelihood of success on their facial challenges under the First Amendment.
The court disagreed, however, with our argument that the law violated 47 U.S.C. § 230:
The Communications Decency Act provides that "[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." … [T]he Hateful Conduct Law shows that Plaintiffs' argument is without merit. The law imposes liability on social media networks for failing to provide a mechanism for users to complain of "hateful conduct" and for failure to disclose their policy on how they will respond to complaints. The law does not impose liability on social media networks for failing to respond to an incident of "hateful conduct", nor does it impose liability on the network for its users' own "hateful conduct". The law does not even require that social media networks remove instances of "hateful conduct" from their websites. Therefore, the Hateful Conduct Law does not impose liability on Plaintiffs as publishers in contravention of the Communications Decency Act.
Many thanks to FIRE—and in particular Darpana Sheth, Daniel Ortner, and Jay Diaz—as well as local counsel Barry Covert (of Lipsitz Green Scime Cambria LLP) for representing me in this case.
UPDATE: Jonathan Turley comments on the case.
Cool story brah.
Next time your readers want to complain about some social media company not enforcing their policies "right" (which is always determined by the reader, naturally), I hope you remind them that you personally worked to overturn a law requiring such companies to clearly state what those policies are.
Or to put it another way... conservatives have whined that "social media" is inconsistent and vague about moderation. This was an attempt to add sunlight. You fought it, and you won. So please, rub it in the noses of Reason readers whenever you can.
We must compel something. Protected speech is something. Therefore, we must compel protected speech!
Also, this law compelled a "concise" statement of policy. To whatever extent Twitter, Facebook, etc. had concise statements of policy, those statements were incomplete and misleading.
" conservatives have whined"
So you're upset at the concept of clearly defining rules so people can more easily follow them?
Sometimes, yeah. Do you think the government should be able to pass a law telling you that you must have clear, and posted, rules about what your houseguests may and may not say, and if you don't, you can't kick them out for the things they say?
I’d be inclined to agree with you except there is a difference between a personal house offering hospitality and an oligopolistic cartel that controls a service that is vital for certain purposes and that is recognized as such in all situations save one.
PS I'm not necessarily in favor of the specific way this law is formulated.
oligopolistic cartel that controls a service that is vital for certain purposes
We were fine before twitter, and will be fine after. It's not access to water. Or Internet speech for that matter.
Your call to nationalize twitter as a utility is noted, though.
Okay, I guess you are also against race and gender quotas and nondiscrimination clauses in tech companies then, also pushed in large part by government.
“Your call to nationalize twitter as a utility is noted, though.”
Since Twitter is nowhere near the hard-right utopia it's portrayed as, I think having its rules clearly stated, along with all the other companies', is an idea worthy of being considered, given all the other things we force them to do already.
Yep if a homosexual couple is denied a wedding cake, that's a national emergency of course.
Denied a wedding cake by one bakery out of dozens of others which would be happy to provide one.
Exactly.
That's great, except that none of those words describe any of these companies.
How quickly you memory-hole how the cartel blocked Parler.
No. I don’t think the government should have clear posted rules about commenting policies.
The government could, though, condition section 230 immunity not only on having clear posted policies, but also on not violating those policies.
The government's requirements need to be viewpoint neutral, but not the websites' policies.
I think the government should be able to pass a law telling me that I must have clear and accurate lists of ingredients and other nutritional information on my packaging; or that I must post clear rules for behaviour in the swimming pool; or that I must post clear fire safety instructions in my building which is open to the public. Not sure I see this as being hugely different from those.
‘Or to put it another way… conservatives have whined that “social media” is inconsistent and vague about moderation.’
Conservatives usually complain they’re being targeted, censored, silenced, banned, suspended because they’re conservatives. People outside their bubble point out that actually they’re just among many victims of vague and inconsistent moderation. If anything, some of them get special consideration, especially if they're high profile with a rabid following, which is also vague and inconsistent moderation.
Congratulations!
Let people decide which platforms to use and let platforms decide who they want to accommodate.
Uh, does this work outside cyber-space too? Like:
"Let people decide which business establishments to patronize, and let business-owners decide whom they want to serve."
(I'd be all for that.)
Congrats!
Rah, Judge Carter!!!!
Seems right to me.
The opinion cites NIFLA and Evergreen, cases where plaintiffs were required to speak particular messages. This law though doesn't constrain the site's policy - it doesn't have to promise any particular outcome, or any action at all. A notice saying "Conduct that vilifies, humiliates, or incites violence against a group or class of persons may be reported by email to webmaster@social.network. No action will be taken on such reports" meets the requirement. I tend to agree with the state that it is more a labeling or disclosure requirement than a compelled endorsement of the government's speech.
Just to be clear, the opinion (and our argument to the court) stressed that the statute was focused on speech of a particular viewpoint, mandating only that sites publish policies related to "hateful" speech. The opinion doesn't resolve the different question of whether sites may be mandated to generally describe their moderation policies of all sorts.
Note also that the statute provides that the social media network's policy must indicate "how such social media network will respond and address the reports of incidents of hateful conduct on their platform" (emphasis added), and adds, "Nothing in this section shall be construed ... to add to or increase liability of a social media network for anything other than the failure to provide a mechanism for a user to report to the social media network any incidents of hateful conduct on their platform and to receive a response on such report." That seems to me to require us to respond, and not just to have a blanket policy that simply says "we will take no action based on your complaints." (The court, though, had no occasion to consider that argument, because it concluded that the requirement of publishing a policy related to hateful conduct was itself an impermissible speech compulsion.)
Eugene,
Although, if a website was designed to generate an automatic reply, the burden seems de minimis (although there do seem to be other First Amendment concerns).
"Thank you for your report/complaint. This website supports a robust free speech environment, and therefore will not act in response to this or to any other report."
Compelled speech is still compelled speech, and setting up an email auto-responder is a good way to cause spam (which does have legal consequences).
“the burden seems de minimis”
OK, let’s look at the very first federal law the Supreme Court held unconstitutional on 1st Amendment grounds:
Corliss Lamont wanted to receive what the government defined as foreign Communist propaganda in the mail. To get such material delivered, Lamont simply had to fill out a form at the post office stating that he consented to receive the material. In 1965, the Supreme Court said that the government’s requirement, no matter how minor it would seem, put a burden on 1st Amendment rights and was contrary to the 1st Amendment.
And that was just filling out *one* form.
Yep making people jump through numerous hoops for guns is just fine, because "guns!"
Lamont didn't hold that being required to fill out a government form was impermissible compelled speech. The issue was that failure to fill out the form would result in the denial of the First Amendment right to receive speech, that was the burden the Court found unconstitutional.
I was focusing on the de minimis part, not which end of the 1st Amendment they were using.
This of course is the attitude that got Reason on the list of the top 10 most dangerous websites.
Letting people have their say is dangerous, of course so is getting up and walking out the door in the morning.
I am assuming you are referencing the Global Disinformation Index listing of “dangerous” sites. Here are the sites that GDI lists as least “dangerous”:
And the “Least risky sites” when it comes to spewing disinformation and propaganda? Who are they? Try not to laugh. In the order the DGI [sic] lists them, they are as follows:
NPR
AP News
The New York Times
ProPublica
Insider
USA Today
The Washington Post
BuzzFeed News
Wall Street Journal
HuffPost
From a story in The American Spectator: https://spectator.org/biden-targets-the-american-spectator-in-conservative-blacklisting/
Want to talk about dangerous?
What would happen if you had simply stated that your policy is to be completely arbitrary and capricious and that you will do whatever you damn well please, which may range from telling the complainer to pound sand to removing the post objected to?
It requires a response, but I see no reason why the response can't simply repeat that no action will be taken.
To be sure, applying precedents for compelled disclosures rather than compelled viewpoints may fail to change the result, e.g., Riley v. National Federation of the Blind of N.C.
Good. Now let's see when the courts stop ignoring Bruen and denying every challenge based on "standing."
It must be written again that the state generally has no business dictating whether or not the owner of a web site removes content.
Content that “vilif[ies], humiliat[es], or incite[s] violence” is not an exception (though of course if the incitement meets the Brandenburg standard, the author of the incitement is subject to lawsuit and prosecution).
As such, a web site refusing to remove “there is no God”, “the Roman Catholic Church is the one true church”, or “Sunni Islam is the real Islam” is not the state’s business, even if it violates the web site’s own Terms of Service.
The NY statute can be found at N.Y. Gen. Bus. Law § 394-CCC.
Got to be honest, this seems like small beans - it's effectively just their TOS for feck's sake - compared to the banning of TikTok in US states.
Why is there no penalty for contemptuous laws that the state knows are unconstitutional? It's much easier for the state to pass illegal laws than for individuals to defend themselves.
From a quick scan of the opinion, it doesn’t look like it came up, and I’d understand why no one was particularly interested in talking about it, but still… Under “likely success on the merits,” I wonder if there might have been a standing issue, since, to a naive reader, the site’s masthead, heuristics, and nav structure seem to contextualize your role as a sort of editor of one section of Reason.com. If determining the real party in interest (e.g., author, publisher, controller of forum) in questions of online speech and publisher liability permits that kind of piercing of the front-end nav veil, many assumptions in the current debate might have to be rethought. What’s to keep a circle of Twitter friends, who, say, read each other’s feeds through a third-party reader, from declaring themselves the real authors of speech, proprietors of the forum, etc.?
Reminds me of the days when you had to pick out the threads of your conversations from all the green, glowing cross-talk on an IRC channel. Text is generated by the passionate and assembled by the interested, but a lot of weird stuff goes on in-between. And publisher of record gets wonky when not only is there no record, but no concept of record.
Perhaps.
Mr. D.
Update: Hey, you've added an edit button. Fantastic. Couldn't resist.