Twitter Sucks Because We Suck. Don't Blame @Jack
If social media feels like a cesspool, don't go swimming.
A lot of criticism of Twitter takes the form of public tweets aimed at Twitter founder and CEO Jack Dorsey (@jack). Those tweets have heated up in recent years because Twitter is President Donald Trump's second-favorite tool for reaching his base. (Perpetual campaign rallies rank number one, because of all the cheering.) These days, many of the complaints charge that Dorsey and his company aren't doing enough "conversational health work" to make Twitter an inclusive public forum for divergent opinions that also reduces or prevents "abusive" speech.
The hard fact is, no matter how much Dorsey commits himself to making Twitter a safe space for debate, conversation, and entertainment, he's always going to be criticized for not doing enough. (In this, Dorsey has the small comfort of not being Mark Zuckerberg, who I'm guessing gets orders of magnitude more criticism because Facebook is orders of magnitude more successful—despite today's market slump.) Dorsey will remain in the crosshairs as long as he runs the company—that's because, if you're running a social-media platform, there's no version of top-down censorship of "abusive" content that works out well.
Many of the complaints have focused on whether, under Dorsey's leadership, Twitter is adequately (or consistently) following through on the company's commitment "to help increase the collective health, openness, and civility of public conversation, and to hold ourselves publicly accountable towards progress." Why are the complainers tweeting Dorsey personally? It's partly because Twitter's commitment to the "collective health" of "public conversation" comes from a pinned tweet on Dorsey's Twitter page. Twitter wants us to understand that the company is devoting more resources to enforcing its Terms of Service agreement (a.k.a. the TOS), which incorporates the "Twitter Rules."
Per the TOS and the Twitter Rules, users are forbidden to use the service for illegal purposes, including active misrepresentation and fraud. They're also officially barred from "abusive behavior," which can include verbal harassment, "unwanted sexual content," and "hateful conduct." That last category warrants its own explanatory page, which provides a nonexclusive list of types of "hateful conduct"—threats, racism, and, if I read it right, pro-genocide content. But Twitter also explains that "context matters": a tweet that might seem like "hateful conduct" considered in itself may in fact be a parody of such content in the context of a larger exchange—or even a direct quote of somebody else, reproduced so it can be analyzed and criticized.
Twitter's policies, taken together, quite properly underscore the fact that human speech and writing are tricky media. Trying to police them simplistically (by banning racist invective, say) can result in the suppression of speech of high social value (like tweets that identify and criticize racist invective). Computer algorithms aren't great at identifying context—we don't yet have I-A.I. (Ironic Artificial Intelligence). And, sadly, human beings tasked to respond to complaints about TOS violations aren't always reliable either.
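To see why context is so hard to automate, here is a minimal, purely hypothetical sketch (not anything Twitter actually runs) of a keyword filter that bans racist invective: it flags the invective, but it flags a tweet quoting and criticizing that invective just as readily.

```python
# A deliberately naive "abuse filter" of the kind described above.
# The banned-term list and sample tweets are hypothetical placeholders,
# not Twitter's actual rules, code, or data.

BANNED_TERMS = {"slur_a", "slur_b"}  # stand-ins for racist invective


def is_flagged(tweet: str) -> bool:
    """Flag any tweet containing a banned term, with no sense of context."""
    words = {w.strip('.,!?"\'').lower() for w in tweet.split()}
    return bool(words & BANNED_TERMS)


invective = "They are all slur_a and slur_b."
criticism = 'He tweeted "they are all slur_a" -- that is racist invective and deserves to be called out.'

print(is_flagged(invective))   # True: the filter catches the invective
print(is_flagged(criticism))   # True: it also suppresses the tweet criticizing it
```

A real system is more sophisticated than this, of course, but the underlying problem of telling invective apart from a quotation or parody of invective is the same.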
On top of the difficulty of judging (quickly) whether tweets are "abusive," there's also the problem that the judgment is post-hoc. So if you have an active Twitter account, you can post pretty much anything, no matter how offensive or incendiary, with the confidence that it will remain visible to other Twitter users for a while, and perhaps indefinitely. Twitter expressly states in its policies that it relies to a large degree on user complaints to identify content problems in a timely way, but there will always be some policy-violating content that falls through the cracks. At the same time, mischievous users have learned that they can game the complaint system to shut down their tweeting opponents, temporarily or permanently, by reporting them as violating Twitter policies. Neither Twitter nor any other platform has the technology, resources, or personnel to make perfect decisions about whether tweets violate Twitter policies and need to be deleted (or whether users responsible for the tweets in question should be shut down, temporarily or permanently).
Worse, they can't even be relied upon to make consistent decisions. Any content policing of the comprehensive sort that Twitter's most stringent critics call for is certain to lead to censorship that can't be rationally defended. Consider the temporary suspension of the account of the brilliant journalist, TV writer, and producer David Simon, who (now notoriously) has tweeted the wish that certain of his virulent Twitter opponents—many of whom are Trump/MAGA supporters—die of "boils" or a venereal-disease "rash that settles in your lying throat." Simon apparently discovered he'd been suspended when he wanted to tweet something about the death of his friend Anthony Bourdain. Once the suspension was lifted, he learned that some of his tweets had been removed and—of course—wished a plague of boils on Jack Dorsey. "As far as I'm concerned," he wrote, "your standards in this instance are exactly indicative of why social media—and Twitter specifically—is complicit in transforming our national agora into a haven for lies, disinformation, and the politics of totalitarian extremity."
I asked Simon in a tweet whether he was calling for more censorship, just of a different kind. His response?
"You will notice that I have at no point urged Twitter to remove others. Only to preserve the legitimacy of replying to liars, frauds and fascists with a full-throated range of abiding contempt. To require us to engage with such people seriously is to validate the blood libels." In short, Simon's complaining about inconsistency as well—he believes, not unreasonably, that the best answer for hateful speech is to answer it with contempt.
What this means is that, short of prescreening every user's tweets before they become public, we're not going to see any top-down editorial policy—even one informed by complaints from the user community—that works well on a platform as large as Twitter (or Facebook, or the other dominant social-media platforms). Heavy top-down administrative moderation may have worked well enough on smaller private forums and on the PC-based bulletin board systems (BBSes) of decades ago. But if the large-scale dominant platforms—not just today's, but tomorrow's as well—are pressured into censoring more and more content, the complexity of identifying what really counts as "abusive" speech guarantees that some large fraction of the user population will be unhappy with the results.
The platform companies know this. They're (mostly) quite aware that Section 230 of the Communications Decency Act (including its most recent amended version) gives them the right to curate user content. They're also painfully aware that invoking that right leads both to higher expectations of editorial control and to more and more dissatisfaction as users disagree with particular editorial decisions. Even so, Twitter, like all the other dominant platforms, is investing more in finding ways to reduce user complaints about abusive content. But until we have built our Ironic A.I., the best fix is still to remind users they can make their own decisions about what to say and what to hear.