The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
From the Twitch blog today (first emphasis in original, second added):
We will now enforce against serious offenses that pose a substantial safety risk to the Twitch community, even if these actions occur entirely off Twitch. Examples of these behaviors include:
- Deadly violence and violent extremism
- Terrorist activities or recruiting
- Explicit and/or credible threats of mass violence (i.e. threats against a group of people, event, or location where people would gather).
- Leadership or membership in a known hate group
- Carrying out or acting as an accomplice to non-consensual sexual activities and/or sexual assault
- Sexual exploitation of children, such as child grooming and solicitation/distribution of underage sexual materials
- Actions that would directly and explicitly compromise the physical safety of the Twitch community, such as threatening violence at a Twitch event
- Explicit and/or credible threats against Twitch, including Twitch staff
These behaviors represent some of the most egregious types of physical and psychological harm, but we understand that this list is not inclusive of all types of harassment and abuse.
Taking action against misconduct that occurs entirely off our service is a novel approach for both Twitch and the industry at large, but it's one we believe—and hear from you—is crucial to get right.
This doesn't define "hate group," but consider Twitch's definition of hateful conduct, which is already broad enough to include speech that "perpetuate[s] negative stereotypes" based on, for instance, "immigration status" or "religion"; claims of "mental" or "moral deficiencies," including again based on religion; calls for "political, economic, and social exclusion/segregation, based on a protected characteristic, including age"; "mocking the event/victims or denying the occurrence of well-documented hate crimes, or denying the existence of documented acts of mass murder/genocide against a protected group"; or "[e]ncouraging the use of or generally endorsing sexual orientation conversion therapy." ("We do, however, allow discussions on certain topics such as immigration policy, voting rights for non-citizens, and professional sports participation as long as the content is not directly denigrating based on a protected characteristic.") Presumably if you belong to a group, even entirely off Twitch, that endorses such "hateful conduct," then you are just like terrorists, rapists, or child molesters, and will be potentially subject to the blacklist.
Of course, Twitch (owned by Amazon) is a private company, and is not barred by law from blacklisting people based on whether they are now, or have ever been, members of this or that group. But it's important for the public to see just how broadly Big Tech wants to assert control over people's speech and association—a desire that of course is unlikely to stay limited to gaming platforms.
As I mentioned, for instance, I think we should demand that any "vaccine passport" or "vaccine verification" systems that the government requires or facilitates must have a "common carrier" requirement (by law, regulation, or contract): Otherwise, whoever runs the system can use its power to blacklist people who belong to the wrong groups, or venues that are seen as run by the wrong groups (or even ones that, for instance, allow rallies or concerts by the wrong groups).
Likewise, as individuals and companies we need to see what we can do to try to fight supply chain political risk, whether by supporting competition to Big Tech (such as Parler), supporting open-access models, or doing other such things. I, for one, do not welcome our new Big Tech overlords.