The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Facebook is adding the following to its "hate speech policy":
Do not post:
Content attacking concepts, institutions, ideas, practices, or beliefs associated with protected characteristics, which are likely to contribute to imminent physical harm, intimidation or discrimination against the people associated with that protected characteristic. Facebook looks at a range of signs to determine whether there is a threat of harm in the content. These include but are not limited to: content that could incite imminent violence or intimidation; whether there is a period of heightened tension such as an election or ongoing conflict; and whether there is a recent history of violence against the targeted protected group. In some cases, we may also consider whether the speaker is a public figure or occupies a position of authority.
This provision will appear in a section of the Community Standards devoted to policies that require additional context in order to enforce (for more about this group of policies, see here, under the heading "Sharing Additional Policies Publicly"). Specialized teams will look at a range of signals, as noted in the text quoted above, to determine whether there is a threat of harm posed by the content.
By way of example, burning a national flag or religious texts, caricatures of religious figures, or criticism of ideologies may be a demonstration of political or personal expression, but may also lead to potential imminent violence in certain contexts. Previously, this content would have been left up; now, with context, we have created a framework of analysis for determining when it poses an imminent risk of harm and might be taken down.
Protected characteristics are "race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease"; so it seems like Facebook may block:
- Criticisms of religious institutions and belief systems, if Facebook concludes they seem "likely to contribute to imminent … discrimination" against the targeted religious group.
- Criticisms of a foreign country or government (China, the Palestinian Authority, in principle Israel), if Facebook concludes they seem "likely to contribute to imminent … discrimination" against its citizens or people who share an ethnicity with it.
- Criticisms of pro-transgender-rights or pro-gay-rights beliefs, if Facebook concludes they seem "likely to contribute to imminent … discrimination" against sexual minorities.
- Criticisms of feminism, if Facebook concludes they seem "likely to contribute to imminent … discrimination" against women.
- Criticisms of pro-disability-rights positions, if Facebook concludes they seem "likely to contribute to imminent … discrimination" against the disabled.
And of course the proposal contemplates that this would be applied to election campaigns, even when candidates for office are debating these very issues, and even when swaying a small percentage of the electorate can change the outcome. Which leads me to ask again, "Whose rules should govern how Americans speak with other Americans?," on a platform that "has become a virtually indispensable medium for political discourse, and especially so in election periods"? (Naturally, citizens of other countries may reasonably ask similar questions for their own countries as well.) Right now, the answer seems to be "an immensely rich and powerful corporation."