Surveillance

Apple Will Start Checking Your Messages for Dick Pics

For the children, of course


Apple has seemingly capitulated, to some degree, in the encryption wars. This week, the company announced tools that will scan photos and messages on everybody's iPhones and iPads for porn (and especially child porn), all for the sake of protecting the children.

The new systems announced this week may seem innocuous, but only to those who have not paid any attention to how tech surveillance systems can be abused.

There are a few main components to this week's announcement. First, Apple will scan photos on people's devices when they attempt to upload them to iCloud to see whether they match known images of child pornography. If the scan comes up positive, the image will be reviewed by a person, and if it does contain child porn, the account will be disabled and a report will be sent to the National Center for Missing and Exploited Children. There is an appeal process if Apple is mistaken.

Apple promises that nobody will actually be snooping on your photos unless there's a computer match, generated by a cryptographic hashing system designed to keep tabs on the child porn images that circulate on the internet. Apple claims there's "less than a one in one trillion chance per year of incorrectly flagging a given account."
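To make the mechanics concrete, here is a minimal sketch of database matching with a review threshold. It is not Apple's implementation: the company describes a perceptual hash and cryptographic matching rather than exact file hashes, and the function names, the plain set lookup, and the threshold value below are all illustrative assumptions.

```python
import hashlib
from pathlib import Path

# Hypothetical stand-in for the database of known-image fingerprints
# maintained by groups like NCMEC. Apple's actual system reportedly uses a
# perceptual "NeuralHash" plus cryptographic private set intersection, not
# the plain SHA-256 set lookup sketched here.
KNOWN_IMAGE_HASHES: set = set()

# Apple has not published the exact number of matches that triggers human
# review; this value is an arbitrary placeholder.
REVIEW_THRESHOLD = 30

def fingerprint(image_path: Path) -> str:
    """Fingerprint an image file (an exact hash in this simplified sketch)."""
    return hashlib.sha256(image_path.read_bytes()).hexdigest()

def count_matches(upload_queue: list) -> int:
    """Count how many images queued for iCloud upload match the database."""
    return sum(fingerprint(path) in KNOWN_IMAGE_HASHES for path in upload_queue)

def should_escalate_for_review(upload_queue: list) -> bool:
    """Escalate an account to human review only past the match threshold."""
    return count_matches(upload_queue) >= REVIEW_THRESHOLD
```

The one-in-one-trillion figure Apple cites refers to incorrectly flagging an account, which is why this sketch gates human review behind a threshold rather than a single match.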

The second component seems a lot more expansive. Apple will be adding tools to its Messages app to warn children when somebody texts them sexually explicit photos. For that to work, Messages will use "machine learning to analyze image attachments and determine if a photo is sexually explicit." Apple says the scan is designed so that nobody at Apple actually gets access to the messages.
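A rough sketch of that on-device decision flow follows, under heavy assumptions: Apple has not published its classifier, its scoring threshold, or its internal APIs, so everything below is purely illustrative of the basic idea of "classify locally, blur and warn locally, send nothing to the server."

```python
from dataclasses import dataclass

@dataclass
class Attachment:
    image_bytes: bytes
    blurred: bool = False
    warning_shown: bool = False

# Placeholder threshold; Apple has not disclosed how its classifier scores
# images or where it draws the line.
EXPLICIT_THRESHOLD = 0.9

def explicit_probability(image_bytes: bytes) -> float:
    """Stand-in for Apple's unpublished on-device classifier. A real
    deployment would run a trained image model locally; this stub simply
    returns 0.0 so the flow below can be exercised."""
    return 0.0

def screen_message_attachment(attachment: Attachment, recipient_is_child: bool) -> Attachment:
    """Blur the image and show a warning on the device itself; nothing in
    this flow is reported back to Apple's servers, which is the privacy
    property the company emphasizes."""
    if recipient_is_child and explicit_probability(attachment.image_bytes) >= EXPLICIT_THRESHOLD:
        attachment.blurred = True
        attachment.warning_shown = True
    return attachment
```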

The announcement of these new scanning systems has set privacy alarms ringing everywhere, and experts are warning about the serious potential for abuse. The privacy-oriented Electronic Frontier Foundation (EFF) describes the new scans as a "shocking about-face for users who have relied on the company's leadership in privacy and security."

Apple has been extremely resistant to efforts by the federal government to add back doors to the end-to-end encryption on its devices. This encryption keeps third parties (including Apple) from accessing data passed between users. The Department of Justice famously got into a legal spat with Apple when it tried to force the company to unlock a phone and provide access to the data of one of the terrorists responsible for the 2015 mass shooting in San Bernardino. Apple representatives have explained (as have many, many privacy and tech experts) that "back doors" bypassing device encryption cannot be devised in such a way that only the "proper authorities" can get access to the data. Once there's a way through the encryption, any number of bad actors, be they criminals or corrupt, authoritarian governments, will figure out how to get through it.

These new tools are essentially a form of encryption bypass, though it may not look that way because Apple employees won't be looking at most of the images; they'll see only those that the child porn scans flag for review.

But to be clear, this is an encryption back door of sorts. Apple will be using tech to scan the contents of your images when you are either uploading them for storage or messaging minors. And while this is all in the service of fighting child exploitation and abuse, there is absolutely no reason why anybody should believe that it will end here. EFF notes:

All it would take to widen the narrow backdoor that Apple is building is an expansion of the machine learning parameters to look for additional types of content, or a tweak of the configuration flags to scan, not just children's, but anyone's accounts. That's not a slippery slope; that's a fully built system just waiting for external pressure to make the slightest change. Take the example of India, where recently passed rules include dangerous requirements for platforms to identify the origins of messages and pre-screen content. New laws in Ethiopia requiring content takedowns of "misinformation" in 24 hours may apply to messaging services. And many other countries—often those with authoritarian governments—have passed similar laws. Apple's changes would enable such screening, takedown, and reporting in its end-to-end messaging. The abuse cases are easy to imagine: governments that outlaw homosexuality might require the classifier to be trained to restrict apparent LGBTQ+ content, or an authoritarian regime might demand the classifier be able to spot popular satirical images or protest flyers.

EFF notes that "mission creep" is already taking place. The technology used to scan for child porn images has also been used to build a database of content defined as "terrorist" so that it can be blocked from being published online. Social media platforms relying on that database have, intentionally or not, flagged and removed critical content, misclassifying documentation of violence as "terrorism."

If the American government attempted to implement scanning systems like this, it would most certainly be understood as an unconstitutional warrantless search. It would violate the Fourth Amendment for the U.S. government to scan all of our images as we share them to make sure, in advance, that they aren't pornographic.

Apple, of course, is a private company, so we do not have the same protections for the data we provide to it. We also know that governments across the world, including those of the United States, the United Kingdom, and Australia, have constantly flogged fears of child pornography as an excuse to pressure companies like Apple into compromising our privacy in ways that would probably be ruled illegal if the governments did it themselves.

Apple seems to have blinked here, and it's unfortunate. This is most certainly a "camel's nose under the tent" moment. What Apple's doing now is designed to appear unobjectionable. But it's creating a framework for serious abuse of surveillance tools down the road.