
Anti-Trump Democrats Learn That Internet Censorship Blocks Them Too

Anyone who wants to restrict free speech should contemplate what it would be like if their enemy gets to choose what gets said.


Last fall, the most enlightened folks among us praised the move by some tech giants to police and even censor political ads. Sure, back in the day, Barack Obama had used Facebook and micro-targeting to good effect. Indeed, his campaign's embrace of new ways of reaching young people (including using data from "unknowing users") showed how liberals generally were so much more tech-savvy and forward-looking than old-school campaigns run by the likes of John McCain and Mitt Romney, who might as well have been wearing spats and sporting pocket watches. Then 2016 happened, and it turned out that Donald Trump was a master of social media and Hillary Clinton was revealed as the hapless grandma who couldn't even work her Jitterbug phone. The super-retro real-estate mogul from Queens—who doesn't even use email, fer chrissakes!—connected tremendously with all the mouth-breathers out there on Facebook, Twitter, Pornhub, whatever, and squeaked into the White House. Trump's digital-media guy, Brad Parscale (Brad!), was the genius, while Clinton's Robby Mook, once described as a guru, was the chump, a digital-era Joe Shlabotnik.

To the technoscenti, the obvious answer to such a turn of affairs was to ban or restrict political advertising online, often in the name of saving the Republic. Jack Dorsey of Twitter paused from taking meditation retreats in genocide-scarred Myanmar long enough to announce that he was banning political ads from his microblogging site, and people who only belatedly realized that non-liberals could be savvy at making memes breathed a sigh of relief. The same people lost their shit when Facebook CEO Mark Zuckerberg said he would continue to allow political ads and wasn't even going to fact-check them, either. Hadn't this alien lifeform done enough damage to America already? When Google, the 800-lb. gorilla of online advertising, announced plans to restrict various forms of targeting and to police "false claims," there was much rejoicing. As Kara Swisher, the founder of the great Recode platform, who now writes for The New York Times, put it, Google's decision to heavily restrict ad targeting meant that Parscale "will now have one less weapon in his digital arsenal to wage his scorched-earth re-election campaign."

But as Techdirt's Mike Masnick writes, it turns out that censorship tends to work in mysterious ways, at least if you don't understand that you won't always be the censor. Here we are, in an election year during a full-blown pandemic whose severity many want to blame on Donald Trump's inaction. Trump's political opponents understandably want to flood the internet with messages about just how bad the "cheeto in chief" really is, but they're running into the very restrictions they were applauding last fall. "Content moderation at scale is impossible to do well," writes Masnick,

and…things are especially tricky when it comes to content moderation and political advertising. Now, mix into that content moderation to try to stop disinformation during the COVID-19 pandemic and you run up against… politicians facing blocks in trying to advertise about Trump's leadership failures in response to the pandemic.

He cites a report from Protocol that reads in part:

Staffers of several Democratic nonprofits and digital ad firms realized this week that they would not be able to use Google's dominant ad tools to spread true information about President Trump's handling of the outbreak on YouTube and other Google platforms. The company only allows PSA-style ads from government agencies like the Centers for Disease Control and trusted health bodies like the World Health Organization. Multiple Democratic and progressive strategists were rebuked when they tried to place Google ads criticizing the Trump administration's response to coronavirus, officials within the firms told Protocol. (Emphasis in original.)

Masnick notes that while political orgs face various forms of blocking and restriction, federal agencies do not, and they are pumping out all sorts of public-service announcements and other ads geared toward preventing the spread of coronavirus. Given that "this administration appears to view the entire apparatus of the federal government as solely part and parcel of the Trump re-election campaign, that basically means that Trump gets free rein over Google ads." He stresses that he's not criticizing Google but rather trying to underscore a point that always needs to be made when talking about online speech:

If you do content moderation, almost every "policy" you put in place will come back to bite you when you realize that, in practice, something will happen that seems insane even when you have a perfectly logical policy in place.

Masnick's whole piece is here.

Let's take this a step further: Yes, every individual platform has a right to ban or suspend whoever or whatever it wants (and thanks to much-beleaguered Section 230, every platform has the right to moderate some user-generated content without putting itself at legal risk, too). I'm sympathetic to the argument that some platforms want to limit viewpoint diversity or exclude certain types of speech and information as a way of creating or maintaining community.

But as you approach the size and scope of a Facebook, YouTube (owned by Google), or Twitter, the only real way forward is to trust your users to practice media literacy and to give them the tools they need to create the user experience they want. Masnick is right that content moderation, including deciding which political ads are acceptable and which aren't, is virtually impossible to do well at scale. You either end up spending all of your time chasing down complaints or you create filters that simply block a ton of content in which your users are actually interested.

Just as it is in meatspace, the answer to bad, misleading, stupid, hateful speech is more and better speech. The power of online persuasion, including and possibly especially the role of Russian trolls in the 2016 election, has been wildly overstated. In an age of deepfakes, good-faith arguments, all sorts of data and versions of reality, and the growing participation of more types of people with all sorts of viewpoints, consensus will be harder to reach than ever. The beauty of online forums is that they give us more control over what we encounter even as they force us to check our premises. That all goes away if and when we cede control to gatekeepers who may not be any smarter or more disinterested than we are.