Twitter Files: FBI, DHS Reported Tweets for Election Misinformation
Content moderators had "weekly confabs" with law enforcement officials, reports Matt Taibbi.
Law enforcement officials in the Department of Homeland Security (DHS) and Federal Bureau of Investigation (FBI) met regularly with top content moderators at Twitter during the 2020 presidential election, independent journalist Matt Taibbi revealed in the latest dispatches from the Twitter Files.
Taibbi describes an "erosion of standards within the company" that took place between October 2020 and January 6th, 2021, as Twitter's Trust & Safety team, headed by Yoel Roth and Vijaya Gadde, took a more active—and, according to Taibbi, arbitrary—role in moderating election-related content. Taibbi contrasts their moderation decisions with calls made by "Safety Operations," a broader team "whose staffers used a more rules-based process for addressing issues like porn, scams, and threats."
15. There was at least some tension between Safety Operations – a larger department whose staffers used a more rules-based process for addressing issues like porn, scams, and threats – and a smaller, more powerful cadre of senior policy execs like Roth and Gadde.
— Matt Taibbi (@mtaibbi) December 9, 2022
Notably, Twitter's moderation decisions during this time period increasingly relied on input from the FBI and DHS. In internal Slack conversations, Twitter policy director Nick Pickles floated the idea of publicly admitting that the company's misinformation policies were partly based on feedback from experts in law enforcement; he eventually decided just to call them "partnerships."
Other Slack messages suggest that Roth met regularly, even weekly, with the FBI and DHS. The FBI also reported tweets for spreading election misinformation, sometimes prompting Twitter to take action.
This is the aspect of content moderation that should provoke the greatest concern from the general public. While Twitter's opaque and inconsistent policies are undoubtedly enraging, they reflect a private company's terms of service; ultimately, users can (and should) complain, and encourage new leadership—i.e., Elon Musk—to change course, but there isn't a strong public policy connection.
Dictates from law enforcement, on the other hand, are absolutely matters of public policy. Is it proper for agents of the state to encourage private entities to suppress misinformation, even as national political figures excoriate these entities for not moderating more aggressively? The First Amendment might have something to say about that.