Bari Weiss Twitter Files Reveal Systematic 'Blacklisting' of Disfavored Content
Twitter employees have indicated that shadow banning—at least by some definitions—is both real and common.
The second installment of the Twitter Files—disclosures about politically motivated content suppression on the platform—has been released, this time from independent journalist Bari Weiss.
"A new #TwitterFiles investigation reveals that teams of Twitter employees build blacklists, prevent disfavored tweets from trending, and actively limit the visibility of entire accounts or even trending topics—all in secret, without informing users," writes Weiss.
The previous installment, released by independent journalist Matt Taibbi, focused on the confused and chaotic decision on the part of Twitter executives to offer a "hacked materials" rationale for suppressing the New York Post's Hunter Biden laptop story; as such, the files mostly provided more evidence of what was already fairly well-known.
The Weiss installment, on the other hand, offers significant evidence of something that many people merely suspected was taking place: wholesale blacklisting of Twitter accounts that were perceived to be causing harm.
Weiss provides several examples of ways in which the platform limited the reach of various high-profile users: Jay Bhattacharya, a Stanford University professor of medicine who opposed various COVID-19 mandates and lockdowns, was on a "trends blacklist," which meant that his tweets would not appear in the trending topics section; right-wing radio host Dan Bongino landed on a "search blacklist," which meant that he did not show up in searches; and conservative activist and media personality Charlie Kirk was slapped with a "do not amplify" label. At no point did anyone at Twitter communicate to these individuals that their content was being limited in such a manner.
These actions, of course, sound a lot like "shadow banning": the practice of surreptitiously restricting users' content even in cases where the platform has not formally issued a ban or suspension. For years, various figures on the right and contrarian left have complained that the reach of their tweets has been substantially and artificially diminished for nonobvious reasons, contrary to the assurances of top-level Twitter staffers, who steadfastly asserted: "We do not shadow ban."
Whether that claim holds up depends on how the term is defined. To be clear, Twitter has publicly admitted that it suppresses tweets that "detract from the conversation," though the platform's stated plan was to eventually inform users about suppression efforts—a move that never took place.
If shadow banning is defined as secretly making a user's content utterly undiscoverable, even by visiting his own page, then it's technically true that there is no shadow banning. (Twitter seems to have defined the term that way.) But when most people complain about shadow banning, they are objecting to a secretive process of restricting, hiding, and limiting content in general, without informing users about why these decisions have been made. Under this definition, it's crystal clear that Twitter engages in shadow banning.
As a private company, Twitter was obviously within its rights to do this, but users also have the right to be furious. The lack of transparency is startling, and the rationale behind the policy is wholly contrary to a culture of free speech. According to Weiss, former Twitter head of trust and safety Yoel Roth admitted in internal Slack conversations that "the hypothesis underlying much of what we've implemented is that if exposure to, e.g., misinformation directly causes harm, we should use remediations that reduce exposure, and limiting the spread/virality of content is a good way to do that."
That's a very fraught proposition that would appear to justify an extreme amount of censorship. As always, it's important to keep in mind that social media moderators, misinformation beat reporters, federal health advisers, and national intelligence officials have all failed to correctly distinguish actual misinformation from true information—the New York Post laptop story being just one prominent example.