
Twitter Unveils 'Birdwatch,' a New Platform Where Users Fact-Check Tweets

An interesting science experiment.


On Monday, Twitter debuted a new feature that will allow users to add notes to other people's tweets. It's a user-generated fact-checking system, and it's called Birdwatch.

In the pilot phase, these notes will be available on a separate website rather than Twitter itself—though the tentative plan is to eventually add the feature to the main platform.

Nick Pickles, director of public policy strategy and development at Twitter, told Reason the goal is to "move the policy debate about content moderation beyond a framing of deciding whether things are true or false or not."

"People on Twitter desire to be part of the conversation," he said.

Twitter gave me a preview and demonstration of Birdwatch prior to its launch and solicited my feedback. The concept is intriguing: Notes will be written by Twitter users who have signed up for Birdwatch. The idea is to provide clarification; in the pilot program, participants can use a note to explain why a tweet is inaccurate, for instance. If another user thinks a note is wrong, they can add a note of their own. Participants will also rate the helpfulness of other Birdwatchers' notes, and eventually Twitter will be able to prioritize the visibility of notes written by highly rated users.
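To make that rating loop concrete, here is a minimal, purely hypothetical sketch of how visibility prioritization might work. Twitter has not published its scoring logic, so the Note class, the author_reputation helper, and the simple averaging scheme below are illustrative assumptions, not the actual Birdwatch algorithm.

```python
# Hypothetical illustration only -- not Twitter's actual Birdwatch scoring logic.
# Assumes each note records its author, the tweet it annotates, and a list of
# "helpful" (True) / "not helpful" (False) ratings from other participants.
from dataclasses import dataclass, field
from collections import defaultdict

@dataclass
class Note:
    author: str
    tweet_id: str
    text: str
    ratings: list = field(default_factory=list)  # True = rated helpful

def author_reputation(notes):
    """Average helpfulness of each author's rated notes."""
    per_author = defaultdict(list)
    for note in notes:
        if note.ratings:
            per_author[note.author].append(sum(note.ratings) / len(note.ratings))
    return {author: sum(s) / len(s) for author, s in per_author.items()}

def rank_notes_for_tweet(all_notes, tweet_id):
    """Order a tweet's notes so notes from highly rated authors surface first."""
    reputation = author_reputation(all_notes)
    candidates = [n for n in all_notes if n.tweet_id == tweet_id]
    # Unknown authors get a neutral 0.5 score in this toy model.
    return sorted(candidates, key=lambda n: reputation.get(n.author, 0.5), reverse=True)

# Example: two notes on the same tweet; the better-rated author's note ranks first.
notes = [
    Note("alice", "t1", "Context: the claim omits the full quote.", [True, True, True]),
    Note("bob", "t1", "This tweet is wrong.", [False, True]),
    Note("alice", "t2", "Source: official election results.", [True, True]),
]
for note in rank_notes_for_tweet(notes, "t1"):
    print(note.author, "-", note.text)
```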

"These notes are being intentionally kept separate from Twitter for now, while we build Birdwatch and gain confidence that it produces context people find helpful and appropriate," said Keith Coleman, vice president of products at Twitter. "Additionally, notes will not have an effect on the way people see Tweets or our system recommendations."

It's an interesting approach to a difficult problem. Twitter has struggled with how to handle factually inaccurate tweets, such as those concerning the 2020 presidential election. The strategy Twitter settled on was manually adding warning labels to these tweets. The issue there is that it positions the platform to play the role of fact-checker, and people might reasonably see evidence of political bias in which tweets generate warning labels. Users might also assume—wrongly—that if a tweet does not contain a warning label, it has been deemed accurate by the platform.

The beauty of Birdwatch is that the fact-checking is provided by other users, rather than by Twitter itself. Under this system, there should be no complaints that Twitter has fact-checked tweet X but not tweet Y; that's up to the users, and anyone who doesn't like a note is free to object to it.

These notes don't have to be mere true/false statements, either. Birdwatchers will be encouraged to provide clarity and nuance and to link to articles that support their arguments. Again, the accountability comes from the system itself: Users can rate the helpfulness of notes, or add their own.

Here's a video that explains how it works:

[Embedded video]

Fact-checking users' obvious jokes about whale conspiracies would probably not be the best use of Birdwatch, though as the video makes clear, the system will prompt note writers to consider whether a tweet is satirical. (And at least in theory, habitually adding pointless notes to joke tweets would earn a user "unhelpful" ratings.)

The Twitter personnel I spoke with freely admitted that they couldn't predict exactly how this would all work out. But they are optimistic that the Twitter community, in the aggregate, is capable of accurately adjudicating the veracity of tweets. At best, this could supplant the more obnoxious fact-checking done by Twitter itself. At worst, well, it's an interesting science experiment.

In the pilot phase, signups are limited to Twitter users who have not recently violated the terms of service. Those eligible can apply here.