Censorship

How Long Before This Tool to Censor Images from Terrorists Gets Misused?

Amid European calls for a speech crackdown, social media companies introduce a tool for easier deletions.

(Image: Google; Imagine China/Newscom)

Four major tech and social media companies—Facebook, Microsoft, Twitter, and YouTube—are joining forces to censor the internet! But they're doing it for a good cause (and because of government pressure), they say. We're going to have to see what actually comes of it.

The four companies announced that they're working together on a tool to help prevent terrorist imagery and recruitment content from spreading online. Google, YouTube's parent company, explains on its Europe blog:

Starting today, we commit to the creation of a shared industry database of "hashes" — unique digital "fingerprints" — for violent terrorist imagery or terrorist recruitment videos or images that we have removed from our services. By sharing this information with each other, we may use the shared hashes to help identify potential terrorist content on our respective hosted consumer platforms. We hope this collaboration will lead to greater efficiency as we continue to enforce our policies to help curb the pressing global issue of terrorist content online.

Our companies will begin sharing hashes of the most extreme and egregious terrorist images and videos we have removed from our services — content most likely to violate all of our respective companies' content policies. Participating companies can add hashes of terrorist images or videos that are identified on one of our platforms to the database. Other participating companies can then use those hashes to identify such content on their services, review against their respective policies and definitions, and remove matching content as appropriate.

As we continue to collaborate and share best practices, each company will independently determine what image and video hashes to contribute to the shared database. No personally identifiable information will be shared, and matching content will not be automatically removed. Each company will continue to apply its own policies and definitions of terrorist content when deciding whether to remove content when a match to a shared hash is found. And each company will continue to apply its practice of transparency and review for any government requests, as well as retain its own appeal process for removal decisions and grievances. As part of this collaboration, we will all focus on how to involve additional companies in the future.
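The announcement doesn't say which hashing scheme the database will use, so the sketch below is strictly illustrative: a minimal Python version of the workflow the companies describe, with SHA-256 standing in for whatever fingerprinting method they actually employ, and with every function name invented for this example.

```python
import hashlib
from pathlib import Path

# A shared database of hashes ("fingerprints") contributed by
# participating companies. In the announced scheme, each company adds
# hashes only of content it has already removed under its own policies.
shared_hashes: set[str] = set()


def fingerprint(path: Path) -> str:
    """Return a hex digest of the file's bytes.

    SHA-256 is a stand-in here: it matches only byte-identical files.
    Real content-matching systems generally use perceptual hashes,
    which tolerate re-encoding, resizing, and minor edits.
    """
    return hashlib.sha256(path.read_bytes()).hexdigest()


def contribute(path: Path) -> None:
    """Add a removed file's fingerprint to the shared database."""
    shared_hashes.add(fingerprint(path))


def matches_known_content(path: Path) -> bool:
    """Check an upload against the shared database.

    Per the announcement, a match only flags content for review under
    each company's own policies; nothing is removed automatically.
    """
    return fingerprint(path) in shared_hashes
```

The important property, reflected in the announcement, is that only hashes move between companies: the images themselves, and any personally identifiable information, stay put. That is how the scheme can promise collaboration on one hand and independent removal decisions on the other.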

To start with the obvious response: There's nothing inherently wrong or inappropriate about the companies working together to censor violent content or decline to host it on their platforms.

Ultimately, though, what matters is how this tool gets used. Once a tool can censor, en masse, a violent photo from an Islamic State terrorist, it can censor anything else in similarly broad strokes. Recall that Facebook recently stumbled into an odd little controversy when it temporarily censored a well-known, historically significant photo from the Vietnam War because it contained nudity.

Leaders in European countries, which don't have nearly the United States' commitment to protecting speech that those in power deem bigotry or hate speech, are pushing social media platforms toward broader censorship of content.

As Andrea O'Sullivan noted earlier today, social media companies are beginning to embrace a "gatekeeper" mentality after previously marketing themselves as freewheeling communication platforms. Will they resist the pressure to use this technology to censor other forms of content at governments' request?