Journalists Worried About People Having Conversations on Clubhouse
A person you know might be having an online conversation without a transcriptionist and a fact-checker right now, and we have to stop it.
All good things must come to an end—or at least become the subject of a scoldy Poynter article and a prominent New York Times tech reporter compiling a list of "issues and incidents." In this case, the target is Clubhouse, a nascent audio-only conversation app that's part chatroom, part conference panel, part good happy hour, the kind where you leave feeling like your subway fare wasn't wasted.
The Poynter piece levels a series of complaints and concerns at the app, including that "Clubhouse's design inherently excludes people with certain disabilities" by making "zero affordances" for those who are deaf or hearing-impaired. (Unfortunately, it's fairly impractical to make a real-time audio app both accessible to and enjoyable for deaf users.) But Poynter quickly moves on to its more central concern, which is that the app poses a problem for fact-checkers since it "doesn't keep old posts or audio files and doesn't allow users to record conversations." In fact, "there is no way to prove that someone said anything controversial at all," the article notes, citing Grit Daily's Olivia Smith.
This seems like it would be bad for snitches and media industry narcs, of which there are many, but actually quite good for reducing the reach of misinformation, slurs, or controversial statements. Unlike Twitter and Facebook, where biased or untrue red-meaty content regularly gains traction or goes viral, there's a limit to the total distribution and reach of untrue or unsavory words uttered on Clubhouse.
(To be sure, that's not an argument for regulating the social media giants or dismantling Section 230 protections, but it is a useful difference in how the platforms operate.)
Poynter's diatribe ends on a truly special note: After noting that the Chinese government's censors cracked down on the app once they realized mainland Chinese speakers were using it to communicate with people abroad—the free flow of information across borders is a big no-no for authoritarians—the media ethics outlet asks, "If Xi Jinping's administration isn't ignoring Clubhouse, why should fact-checkers? Why should you?"
https://twitter.com/NellieBowles/status/1360004654470361088?s=20
There are many reasons why this is a facile and absurd statement, one of which is that in a free society, we don't use "what Xi Jinping feels threatened by" as a litmus test for what we ought to crack down on. The Chinese Communist Party (CCP) feels threatened by images of Winnie the Pooh, mentions of the Tiananmen Square massacre, discussions of police brutality, depictions of LGBTQ people, the selling of Bibles (though a tightly limited number are allowed to be produced and distributed), the movies Brokeback Mountain and Memoirs of a Geisha, and one of the best South Park episodes. (Disclosure: The show's creators are friends and supporters of Reason.)
The CCP does not allow citizens to use certain search terms, and it certainly doesn't allow them on Twitter (or Slack, Instagram, Wikipedia, or Pinterest). CNN and BBC broadcasts are sometimes blacked out in China when censors come across a segment deemed unfit for the eyes of the people—unfit because it criticizes the regime or exposes the forced labor camps into which the CCP is shunting Uighur ethnic minorities in Xinjiang province. It's hard to overstate the lengths to which the CCP will go to prevent Chinese citizens from accessing information and forming their own opinions. All of this is good reason for U.S. regulators not to take cues from the CCP when deciding which forms of social media to crack down on.
But in the end, content moderation comes for us all, and New York Times tech reporter Taylor Lorenz has started publishing a list of grievances against the app. To be sure, there are plenty of instances of people saying things that are inappropriate, unkind, untrue, racist, sexist, clumsy, or distorted, just as there are on all forms of social media. In tweets, Lorenz describes the app as having a "toxic culture," one shaped by a "failure to invest or prioritize user safety," and reminds followers that "CH has no policy or guardrails against disinformation."
All of this is true, but it's no justification for government regulators to swoop in or for app creators to become bigger hall monitors. It's also true that conversations between people in the real world often include things that are untrue or not fact-checked; things that are distorted or biased; anecdotes that have been stretched and extrapolated, used in place of actual data; things that are unkind and hurtful. Clubhouse is more raw than other forms of social media. It's fast and off-the-cuff, which can be both a feature and a bug. It's intimate—you hear people's voices, their children in the background, the road noise when someone tunes in from the car.
And there's another hard truth to grapple with: Unsurprisingly, when you moderate content and kick extremists off platforms like Twitter and Parler, they don't wholly abandon their bad ideas, but rather shift to encrypted apps, private messages, or new forms of getting in touch. As one Associated Press headline worried last month, "Extremists exploit a loophole in social moderation: Podcasts." No matter how hard you try to stamp out bad behavior, it will shift to other platforms and environments. Ones that might even be harder to keep tabs on.
So far on Clubhouse, I've listened to libertarians worldwide talk about what COVID-19 lockdowns have looked like in the E.U.; to Uighur activists and foreign correspondents talk about Beijing's recent crackdowns on Hong Kong and the movement of expelled foreign journalists to Taiwan; to New Yorkers trying to chart the course of the city's pandemic recovery and whether we should sever ties now or stay loyal to a place that hardly cares whether we stay or leave (though its apathy is part of its allure). None of these conversations have included a Clubhouse-affiliated monitor, and I can't access a transcript of what was said or who said it—which might work in my favor since I got a little too sassy about the utter inanity of some New Yorkers continuing to double-mask post-vaccination. None of the information being exchanged was fact-checked—almost like a real conversation, in which you have to use your own critical thinking skills to judge the veracity of what's being said and the credibility of who's saying it.
I don't need Clubhouse content moderators to swoop in at the government's (or reporters') behest to prevent my delicate ears from hearing unvetted, unverified information. I can judge for myself what's useful and true. You probably can, too.