The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
"Apple Plans to Scan US iPhones for Child Abuse Imagery"
From the Financial Times (Madhumita Murgia & Tim Bradshaw):
Apple intends to install software on American iPhones to scan for child abuse imagery, according to people briefed on its plans, raising alarm among security researchers who warn that it could open the door to surveillance of millions of people's personal devices….
The automated system would proactively alert a team of human reviewers if it believes illegal imagery is detected, who would then contact law enforcement if the material can be verified. The scheme will initially roll out only in the US….
The proposals are Apple's attempt to find a compromise between its own promise to protect customers' privacy and ongoing demands from governments, law enforcement agencies and child safety campaigners for more assistance in criminal investigations, including terrorism and child pornography….
"It is an absolutely appalling idea, because it is going to lead to distributed bulk surveillance of … our phones and laptops," said Ross Anderson, professor of security engineering at the University of Cambridge.
Although the system is currently trained to spot child sex abuse, it could be adapted to scan for any other targeted imagery and text, for instance, terror beheadings or anti-government signs at protests, say researchers. Apple's precedent could also increase pressure on other tech companies to use similar techniques….
It would be important to learn just how much government pressure Apple was under to implement such a feature, or even whether the government actively solicited this (even in the absence of coercive pressure). Some courts have concluded that the Fourth Amendment applies even to private searches if the police "instigated" or "encouraged" the search, and the private entity "engaged in the search with the intent of assisting the police"; see also, for instance, this decision and this nonprecedential decision. The Supreme Court's Skinner v. Railway Labor Executives' Ass'n points in that direction as well (though the program there had some special features, such as removal of legal barriers to the searches). Other courts, though, conclude that mere "governmental encouragement of private 'searches'" isn't enough, and that the private search becomes government action covered by the Fourth Amendment only if there is compulsion (perhaps including subtle compulsion).
Note, though, that there's also a twist here: The Court has held that police drug dog sniffs of luggage aren't "searches" for Fourth Amendment purposes because they "disclose only whether a space contains contraband" (setting aside the possibility of drug dog error), and thus don't invade any legitimate privacy interest. Could hash-value-based searches be treated the same way, so that even if Apple's search is treated as government action subject to the Fourth Amendment, it wouldn't be treated as a "search"? That's unsettled, see U.S. v. Miller (6th Cir. 2020):
Did the hash-value matching "invade" Miller's reasonable expectation of privacy? According to the Supreme Court, binary searches that disclose only whether a space contains contraband are not Fourth Amendment "searches." Illinois v. Caballes (2005). The Court has held, for example, that the government does not invade a reasonable expectation of privacy when a police dog sniffs luggage for drugs. United States v. Place (1983). Yet the Court has also held that a thermal-imaging device detecting the heat emanating from a house invades such an expectation because it can show more than illegal growing operations (such as the "hour each night the lady of the house takes her daily sauna and bath"). Kyllo v. U.S. (2001). Which category does hash-value matching fall within? Is it like a dog sniff? Or a thermal-imaging device? We also need not consider this question and will assume that hash-value searching counts as an invasion of a reasonable expectation of privacy. Cf. Richard P. Salgado, Fourth Amendment Search and the Power of the Hash, 119 Harv. L. Rev. F. 38 (2005).
If any of you know more about the governmental involvement in this decision, or for that matter the broader state action law related to such searches, please let me know. Thanks to Christopher Stacy for the pointer.
UPDATE: Here's Apple's announcement:
Another important concern is the spread of Child Sexual Abuse Material (CSAM) online. CSAM refers to content that depicts sexually explicit activities involving a child.
To help address this, new technology in iOS and iPadOS will allow Apple to detect known CSAM images stored in iCloud Photos. This will enable Apple to report these instances to the National Center for Missing and Exploited Children (NCMEC). NCMEC acts as a comprehensive reporting center for CSAM and works in collaboration with law enforcement agencies across the United States.
Apple's method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child safety organizations. Apple further transforms this database into an unreadable set of hashes that is securely stored on users' devices.
Before an image is stored in iCloud Photos, an on-device matching process is performed for that image against the known CSAM hashes. This matching process is powered by a cryptographic technology called private set intersection, which determines if there is a match without revealing the result. The device creates a cryptographic safety voucher that encodes the match result along with additional encrypted data about the image. This voucher is uploaded to iCloud Photos along with the image.
Using another technology called threshold secret sharing, the system ensures the contents of the safety vouchers cannot be interpreted by Apple unless the iCloud Photos account crosses a threshold of known CSAM content. The threshold is set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account.
Only when the threshold is exceeded does the cryptographic technology allow Apple to interpret the contents of the safety vouchers associated with the matching CSAM images. Apple then manually reviews each report to confirm there is a match, disables the user's account, and sends a report to NCMEC. If a user feels their account has been mistakenly flagged they can file an appeal to have their account reinstated.
This innovative new technology allows Apple to provide valuable and actionable information to NCMEC and law enforcement regarding the proliferation of known CSAM. And it does so while providing significant privacy benefits over existing techniques since Apple only learns about users' photos if they have a collection of known CSAM in their iCloud Photos account. Even in these cases, Apple only learns about images that match known CSAM.
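For readers wondering how a decryption "threshold" can be enforced at all, here is a minimal sketch of the general idea using Shamir secret sharing in Python. To be clear, this is not Apple's construction (its technical summary layers its own threshold scheme on top of private set intersection); the toy below only shows how a secret can remain unrecoverable until some minimum number of shares exist.

    import random

    # Toy demonstration of threshold secret sharing (Shamir's scheme).
    # NOT Apple's construction; it only illustrates why a decryption key can be
    # unrecoverable until a threshold number of shares (here, "matches") exist.

    PRIME = 2**127 - 1  # a prime field large enough for a demo secret

    def make_shares(secret, threshold, n):
        # Random polynomial of degree threshold-1 whose constant term is the secret.
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
        def f(x):
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0 recovers the constant term (the secret),
        # but only if at least `threshold` distinct shares are supplied.
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
        return secret

    key = 123456789
    shares = make_shares(key, threshold=3, n=10)
    assert reconstruct(shares[:3]) == key   # 3 shares: key recovered
    assert reconstruct(shares[:2]) != key   # 2 shares: wrong value (with overwhelming probability)

The point of the construction is that below the threshold the shares reveal essentially nothing about the key, which is the property Apple is claiming for its safety vouchers.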
They say they’re going to do it for child pornography.
But of course, if you can do it for child pornography, you can do it for anything.
Also, it seems hard to understand how sniffing by police dogs could be a relevant analogy. Apple is a communications provider. It supplies hardware; it's not a social media provider subject to Section 230.
Isn't what's being proposed wiretapping? That would seem the most on-point source of law. Tapping people's phone conversations etc. on the possibility you might find something is wiretapping. It has no analogy at all to sniffing by police dogs. Nor does it have anything to do with messages or comments posted on a social media platform's own website.
>But of course, if you can do it for child pornography, you can do it for anything.
And, importantly, Apple can no longer argue that various subpoenas, etc. should be quashed because it's "impossible to comply." That is, even if Apple wants to limit it to kiddie porn, others will force them to go broader.
Every time the US builds a backdoor for this, or for terrorism, the no-longer-imagined boot stamping on a human face, forever, steps a little harder on two-thirds of the world, and a little more forever, because those governments can use the same tech and argue their own justifications to the external dupes who care.
We just want to search your house for (fill in the blank). Surely you don't object to this reasonable action to preserve public safety. You don't have anything to hide, do you? Otherwise why would you possibly object?
Is there supposed to be a connection between these two sentences?
You perfectly nailed my thought in your first two sentences.
1. The article says that Apple would scan stored files, not communications.
2. Presumably Apple will include consenting to this monitoring in its terms of service.
Every communication is also a stored file.
That's basically true for some forms of communication like e-mail, plausibly true for other forms of communication that log interactions like instant messaging, and not at all true (by design) for forms of communication that are intentionally ephemeral such as Snapchat or Signal.
"Who completely agreed -- they ALL AGREED."
If users don't consent, they legally brick their device. That sounds like duress.
It *IS* wiretapping because it is all digital data now, including voice conversations.
Hence all of the wiretapping decisions go into the toilet if the TelCo "volunteers" to do the wiretapping for the police. And it would actually be easier to "digitally wiretap" for specific code words than to look for specific images.
This is scary....
I agree that if Apple implemented a different form of surveillance that was wiretapping, that would be wiretapping.
What you're failing to understand is that "wiretapping" has become a shorthand term for surveillance, it no longer means just connecting alligator clips to a phone line.
I assumed ReaderY was suggesting that this conduct was illegal wiretapping, i.e. an interception that violated the Electronic Communications Privacy Act. If you're just using it to mean something that you find too intrusive, then that's a different argument (if a bit of a circular one).
"This is scary…."
LOL, oops, you accidentally got an upskirt of your 10yo niece playing with the family dog. That'll be 25 years on the s*x offenders' registry....
I use tracfone, and, you know, I use it for phone calls. They can't even prove it belongs to me
I'd note that, if you know the hashing algorithm, hash based searches can be spoofed.
In two ways:
1) You can, automatically even, alter a genuine match image to fail to match. The application for people actually sharing the real images is obvious.
or,
2) You can generate a hash collision between a genuine match image you've created and seeded where it will be found, and an image that would not have been designated by a human.
So, you could take some real child porn, (Again, if you knew the hash algorithm.) and alter it to produce the same hash code as some innocent image that you know would be found on your target's phone. Perhaps some meme that's being shared around?
Then you take your altered image, and put it where it will be found and flagged.
Suddenly, a bunch of people sharing a meme you don't like get flagged as having child porn! Or maybe it's a family photo of a politician you're out to embarrass.
See, the key here is that the algorithm doesn't actually recognize what's in the image. It just does a mathematical operation on the image to produce a reasonably small number that you can look for in the future when checking other images.
The bottom line is, even if they approach this legitimately, it opens up the possibility of some really nasty pranks on the part of anybody who has the algorithm.
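To make that point concrete, here is a much-simplified sketch of hash-then-lookup matching, using an ordinary cryptographic hash (SHA-256) rather than whatever perceptual hash Apple actually uses. The folder path and the entry in the known-hash set are placeholders, not real values.

    import hashlib
    from pathlib import Path

    # Hypothetical set of hashes of known contraband images, as distributed by a
    # clearinghouse. The value below is a placeholder, not a real hash.
    KNOWN_HASHES = {
        "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
    }

    def file_hash(path: Path) -> str:
        # Hash the raw bytes of the file; the output is a short fixed-size value
        # no matter how large the image is.
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def scan(folder: Path):
        # Flag any file whose hash appears in the known-hash set.
        return [p for p in folder.glob("*.jpg") if file_hash(p) in KNOWN_HASHES]

    # Example (hypothetical path): flagged = scan(Path("/Users/alice/Pictures"))

Nothing in that pipeline "looks at" image content; it only checks whether a computed number appears in a list, which is why everything turns on how the list and the hash function behave.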
I don't know the legal significance of the fact that Apple would need access to a large collection of child porn to generate the hash table in the first place...
Large enough that you could prank the system up front by inserting some innocent images into it, of course...
How? All a hit on the hash value does is prompt someone to view the flagged file (and, I'd imagine, trigger an investigation of the pranker if it doesn't correspond with what it's supposed to.)
Looks to me like they're storing a hash file on your local device, and examining locally any files you try to upload or send to somebody, in addition to looking at files already in their cloud. It would be remarkably easy to expand this to examining locally stored files even if you don't upload them. And, how much space is that hash file going to take up?
I would say that only an idiot would upload child porn to iCloud, but that of course doesn't mean people don't do it, and Apple products are pretty aggressive about trying to put things onto their cloud anyway. (As I've found trying to get this darned iPad to just print directly to my networked printer.)
Anyway, just confirms my decision never to buy Apple products. And just when I was starting to respect them for their privacy protection. Because, as noted, they might be starting with searching for child porn, but it probably won't end there.
I don't see any indication that this system is doing anything to files sent to another user.
Camel's nose, see tent.
It's obviously just the first step because who wants to argue against stopping kiddie porn?
It really only makes sense if they are looking for a specific set of images. Any generic image recognition would have enough false positives to get Apple sued for billions by people who turn out to be innocent.
They’d want to use something more sophisticated than the simple hashes you’re talking about. They could make it prohibitively difficult (but never impossible) to trick into giving false positives.
They would never have people look at the actual images to confirm the content. Too much liability for privacy invasion.
I guess I don’t believe they will do such a search. At most I guess they would only do it on specific phones when asked.
"1) You can, automatically even, alter a genuine match image to fail to match. The application for people actually sharing the real images is obvious."
This is why they're using neural networks and (I presume) fuzzy hashes. Trivial edits to files to change the hash are unlikely to allow you to successfully evade this.
"2) You can generate a hash collision between a genuine match image you’ve created and seeded where it will be found, and an image that would not have been designated by a human.
So, you could take some real child porn, (Again, if you knew the hash algorithm.) and alter it to produce the same hash code as some innocent image that you know would be found on your target’s phone. Perhaps some meme that’s being shared around?"
This is possible for state actors, not for pedophiles trying to mess with their neighbors. The cheapest known attack along these lines on the previous generation of hashing functions still costs tens of thousands of dollars per collision if you want to generate something actually usable in a spoofing attack like this.
Why would someone plant false hash collision material which would result in that coming out when the police looked at the device, when they could just plant actual CSAM and guarantee a conviction, because no court or jury will entertain a "pedophile" arguing "but hackers put it there!"
Rule of thumb: if it's "for the children," it's a terrible idea that would not be countenanced otherwise.
Assuming such a system is implemented (I never trust non-tech journalists to get tech stories right), my first question is: are they planning on doing this on their iCloud servers? If so, then there's an argument to be made that, by uploading your photos to a third-party server, you don't have a reasonable expectation of privacy. (I don't necessarily agree with that argument).
If it's done on your phone, which is advertised as being secure, then that's a different story. But they could theoretically get around that via the non-optional licensing agreement you agree to any time you use the iPhone software.
And Apple, or anyone, would not need a large collection to generate such hashes: they are usually generated by the National Center for Missing and Exploited Children or law enforcement agencies and then distributed to companies like Apple, Google, etc. so they can run the hashes against what they have.
And although it's easy to alter (by one pixel even) an image and produce an entirely new hash, most perverts don't know that and share the images unaltered.
And although theoretically possible, it's *very* difficult to alter an innocent image to match that of a known bad one. And even then, the story says Apple will then take any known images and subject them to human screening to make sure that they aren't "innocent" images.
Still, it does raise privacy concerns from a company that says they consider privacy a human right. I wonder if this will be an opt-in thing on the phone (which I doubt), or a server-side only thing (so you don't have to use iCloud).
Still, it's all just talk until we hear the actual plans.
"they are usually generated by the National Center for Missing and Exploited Children or law enforcement agencies and then distributed"
Government is smut peddler. Best bet for a pervert is to go work for the National Center for Missing and Exploited Children it seems.
Not to kink-shame, but I agree that a sexual fixation on a list of hash values would indeed be pretty out there.
Well he's still right in general. The world's largest distributor of CSAM is the joint FBI/Australian task force that keeps taking over and operating the distribution websites. They dipped their toe in, and after very little blowback from the first time they did it (because it "was only 11 days"), they now run them for months, and the bigger ones indefinitely. They operated one for 11 months until they were outed by a newspaper investigating who owned the site. How many new victims had their abuse shared for the first time by the federal government?
Nailing people for possession doesn't ethically justify operating distribution networks. Constant takedowns, as quick as possible, make it difficult for users to find new sites, and the quicker site operators are arrested, the more new ones are discouraged. That results in less overall distribution than taking over all the sites, upgrading them to fat pipes in government datacenters, and operating them indefinitely.
"And although theoretically possible, it’s *very* difficult to alter an innocent image to match that of a known bad one."
I'm talking about altering a 'bad' one to match a known innocent one. And I think you're dramatically overstating how difficult it is, if you have the hashing algorithm.
But it seems Apple is going to settle that question for us, aren't they.
"I’m talking about altering a ‘bad’ one to match a known innocent one. And I think you’re dramatically overstating how difficult it is, if you have the hashing algorithm."
It's extremely difficult. Cryptographic hashes are designed to be resistant to this sort of attack. Such attacks are theoretically possible but VERY expensive.
"they are usually generated by the National Center for Missing and Exploited Children or law enforcement agencies"
Am I the only one who wonders -- really wonders -- about the people whose career choice involves viewing kiddie porn all day, every day...
It almost seems like if you are sick enough to be into this stuff, all you have to do is get a job with these folks who purportedly "fight" it and you can get *paid* to view it....
My company actually did contract work for NCMEC on a program I was on - it seemed like most of the people there spent their time poring over missing person reports and newspaper articles.
The child-porn division was apparently a small group of people with a very high turnover rate because of the stress involved.
It's an extremely awful job; as Toranth notes, the people that do it turn over extremely quickly and suffer from PTSD from the stress and horror:
https://www.wired.com/2014/10/content-moderation/
According to the announcement, the scanning will be of images uploaded to iCloud. Although the article seems to be implying that it will also scan images only stored locally on the device (and that was how I read it at first), I don't think it is actually claiming that.
>And although theoretically possible, it’s *very* difficult to alter an innocent image to match that of a known bad one.
OTOH, this is a many-to-many game, not a one-to-one game, so it's more doable.
>And even then, the story says Apple will then take any known images and subject them to human screening to make sure that they aren’t “innocent” images.
Is my data (including my photos) not encrypted on iCloud? Or is Apple hacking into my phone and downloading them (in violation of the CFAA)?
A lot depends on the details of the algorithm for generating the hash code. Which they are NOT going to share, because sharing it would be a blueprint for defeating it.
Content stored in iCloud is encrypted, but Apple has the keys. Apple will, e.g., provide various iCloud content to law enforcement pursuant to a search warrant.
Just one more reason why I do not use iCloud. And i only use subscription services to transmit very large, non-private files.
Amen to that. But, as I understand it, as long as you have any images on your physical iPhone, those are still going to be caught up in the scanning that occurs on the local device. And you have no idea what the system is potentially “flagging” as naughty content. If something is flagged, additional identifiers will be generated and stored on your device, with no ability for you to stop that. So if someone else got their hands on your phone, they could have a good time rifling through any such stored identifiers.
Apple is apparently the first one to do this. But what about other device/OS/cloud providers following suit? For example, will MS do a similar thing on Windows and OneDrive?
So now you know why I use a flip phone.
What is your basis for claiming that there is any "scanning that occurs on the local device"?
"Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child safety organizations. Apple further transforms this database into an unreadable set of hashes that is securely stored on users' devices."
If you think that interpretation is wrong, let me know.
It depends almost entirely on the hash algorithm used.
Images can actually have their bit values modified significantly without changing the visuals in a way a human can detect - this is the concept behind image steganography. For simple hash algorithms - such as MD5 - modifying an image to match a hash is possible with current computing power. For small images, it isn't even difficult: almost 10 years ago, a researcher did it for a 64x64 image for about $0.65 in AWS expenses. If you aren't trying to create a collision with a specific child porn image, but merely any one of the hundreds of millions of known images, it becomes massively easier as well.
This is why there are hash algorithms specific to images that are either resistant to minor changes or ignore the exact bit values in favor of the human-perceived colors. See: Wavelets, block averaging, differencing, etc.
Without knowing exactly how Apple is doing it, it is difficult to know how easy it would be to spoof (or evade) detection.
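For a sense of what a change-resistant image hash can look like, here is a minimal "average hash" sketch in Python using the Pillow library. It is a generic perceptual-hashing technique in the block-averaging family mentioned above, not Apple's NeuralHash or PhotoDNA.

    from PIL import Image  # pip install Pillow

    def average_hash(path, size=8):
        # Shrink to size x size grayscale, then set each bit to 1 if the pixel
        # is brighter than the image's average. Small edits, recompression, and
        # resizing mostly leave this 64-bit value unchanged.
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        avg = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > avg else 0)
        return bits

    # Example (hypothetical file): print(hex(average_hash("photo.jpg")))

Resaving, resizing, or mild color correction usually leaves most of those 64 bits unchanged, which is exactly the property a cryptographic hash is designed not to have.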
Of course a 64x64 image is tiny by modern standards. But your point stands that one would not use a cryptographic hash that is designed to yield values very far away if even a single input bit is changed.
The scheme here wants an image hash that is close to a specified hash if the image is close to the original specified image.
And there are image recognition hashes that do that, that are even largely indifferent to alterations of size and orientation.
But minimize one type of error and you maximize the other: the better the hash is at seeing through minor alterations to the image, the more likely false positives become.
My daughter sent me a photo of my baby grandchild in the bathtub. I'm toast.
You are very over-toasted toast.
You better alter a few pixels at least.
Two words for you:
Buy Android.
That isn't how this works at all.
There is a database of images of child abuse that are scanned by an algorithm that produces a hash based on the exact arrangement of pixels in the image.
If a picture that you upload to iCloud matches something in the known child abuse image database, you get flagged. So if you aren't sharing an image from that database, and an image of your grandkid won't be in the database unless it's an image of child abuse because the databases are manually curated, then you are fine.
So don't share CP and no issues.
"a hashtag based on the exact arrangement of pixels in the image."
If it were that, there'd be no point in the hashtag, because the hashtag would be as big as the image. A hash tag is actually a lot smaller than the domain of possible images, so it's more than possible for different images to generate the same hash tag. Like, you generate a 32 bit number where the first digit is the parity of the file, the 2nd is the parity of every other bit, the third is the parity of every third bit... (Just an example, that algorithm would be lousy.) The file can be arbitrarily large, and will always generate a 32 bit hash tag. Many different files will generate the same hash tag.
Then you get what's known as a 'collision'.
Collisions are basically inevitable in large data sets. In order to resolve them, you need access to the original image, not just the hash that was generated. That, I assume, is where the manual curation comes in, and your privacy gets actually, not theoretically, violated.
In this application Big Brother wants collisions, just not too many of them.
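For what it's worth, here is that toy parity hash written out in Python, just to make the collision point concrete. As the comment says, it is a deliberately lousy hash; the only point is that arbitrarily large files collapse to a 32-bit value, so by the pigeonhole principle many different files share the same hash.

    def parity_hash(data: bytes, bits: int = 32) -> int:
        # Toy hash along the lines described above: output bit k is the parity
        # of every (k+1)-th bit of the input. Any file, however large, collapses
        # to a 32-bit value, so collisions are unavoidable once you have more
        # than 2**32 distinct files.
        all_bits = [(byte >> i) & 1 for byte in data for i in range(8)]
        out = 0
        for k in range(bits):
            out |= (sum(all_bits[:: k + 1]) % 2) << k
        return out

    # Example: parity_hash(open("some_file.jpg", "rb").read())  # hypothetical file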
Sounds like a proof of concept for the Chinese Government.
More like from, not for - - - - - -
Apple has announced it:
https://www.apple.com/child-safety/
They have the tech sheets there as well.
....I am shocked, and not in a good way, that this is real.
I am unshocked, and not in a good way, that this is real. If you haven't figured out yet that tech firms generally have no backbone when it comes to pressure from the left, you haven't been paying attention.
And it's not like the left are going to desert Apple over this, while Apple probably doesn't mind losing any right-wing customers.
The left?
No. The government. This kind of stuff takes a while to develop so who knows which administration this came from. Quit letting your side off the hook.
Scenarios
It could have started with Dick Cheney but he had other fish to fry. Chances are that it started a couple of years into the BHO administration. The Orange Clown gang would have increased it for other purposes. And now it's a Biden special.
The other scenario, It was a Janet Reno special as were many other domestic surveillance programs
Yeah, yeah, it's just a coincidence that it happened 9 months after an administration Apple would be sympathetic to won the election.
Believe it or not this kind of thing has bipartisan support, Brett.
I also think it's unwise, but your tribalism is pretty silly when it comes to overreaching to stop child abuse.
Define over-reaching, Sarcastr0.
But first, let me ask this: Is Apple doing the right thing here? Why or why not?
Pressure means they are an agent of the government.
For computer repairs, if the tech stumbles across something and reports it, that's ok. If the tech does it because the government pays him, or other arrangement, no.
Threats would seem to be of the same ilk. Like threatening billions in stock losses unless they censor harassment, oh, and start with our political opponents.
Pressure means they are an agent of the government.
Not the case factually or legally.
Pressure is not great, to be sure, but it does not render one an agent of the government. Think what that logic would mean for criminal confessions or states raising the drinking age.
How did you get the tech sheets. Can you share them?
My bad. They were at the bottom of the page
If possessing and distributing child pornography is a crime that inherently abuses a child, then what the heck are LEOs and tech companies doing maintaining gigantic databases (not to mention their unwitting hosting of other material) that probably dwarf any two-bit private pedo ring's wildest dream of a collection and are probably by far the biggest in existence? It makes no difference that this is supposedly "for a good cause". The working theory is that possession is inherently harmful. It's like raping a person to try to prevent other rapes. LEOs are even worse in that they sometimes outright distribute this stuff.
Yes...how dare these institutions that prosecute people for committing atrocities against children *checks notes* retain databases of the images to compare images on the internet to and catch people sharing the files.
Tech companies don't have the database, they have algorithmic hashes that represent the images. The pictures aren't shared with anyone, and the government isn't generating the images. Why do you think the government does these things? Why do you think the government is out there distributing this? Do you have any evidence of that at all?
In this specific case they use hashes, which really doesn't matter since they're still derived from the exploitation of abuse imagery and would absolutely be considered criminal if the shoe were on the other foot. The standard, after all, is that even cartoon images are sometimes considered CP. But don't fool yourself into thinking they don't have actual databases, which they use among other things to train AIs. LEOs also take over sites and distribute CP, which technically makes them worse than a simple consumer.
But the FBI does actually distribute it. They operate the CSAM websites on Tor. Not for a few days, that was just the first time. Now they take over multiple sites and run them indefinitely.
They ran one with 400k members for 11 months, until a newspaper outed them for doing it.
There's no doubt they're de facto increasing the overall amount of distribution just to make more possession cases, a lesser crime. (See my other comment above for a longer explanation of why it's more)
Apple announces that they are inserting software in their phones sold in China to detect evidence/pictures of people used in human slavery operations. Oh, wait, they aren't doing that.
I like to joke that baby-killing and buggery are the only two "rights" that the modern left care about. Sadly, apparently I was correct.
Progressives really are in a hurry ...
Yes, they are in a hurry, they're afraid they're going to lose in the midterms, and are rushing to get their police state finished before then.
Both of you knee-jerk insisting this is a liberal initiative to bring in the police state are really telling a lot about your weird and paranoid worldview.
We do not live in a political thriller.
And exactly which side has been pushing for forced lockdowns, mandatory mask and vaccine orders? Which side has been pushing the most for restrictions on speech? Closing down churches?
Dude: if you cannot be honest about this ...
If your argument is 'the guys I think are bad do all the bad things. This is a bad thing, therefore the bad guys did it' you live in a storybook.
If this became a public thing, politicians from both parties would be pushing each other aside to rush in front of the microphones to support it. Nobody is as against child pornography as your favorite congressperson.
You think that, say, Matt Gaetz is going to let some puff progressive hate child porn more than him? LOL.
All of these chuckleheads observe the oath they took to defend the constitution right up to the point that it causes them the slightest inconvenience.
The innocent people that get caught up in this will just be fodder for the politician's virtue signaling.
For Matt Gaetz, he was the only no vote on a human trafficking bill. So yeah, he'd probably be the only no vote on this too.
And which side stormed the capitol in an attempt to capture and kill legislators and overturn an election to install a dictator?
I look forward to the "jUsT a ToUrIsT lOvEfEsT!" malicious lies by traitors in response to this. Unamerican assholes.
Sadly, we do live in a political thriller, it's just that the writer is a hack.
No we don't. It's not thrilling, except for your partisan apophenia.
S_0,
Forget this thread.
The assault on privacy gets more fast and furious every day.
Big Tech is a Big Target for Big Government.
It is a thriller whether you live in China or the US.
You don't need the Cloud. Storage is now very cheap.
Applications which could run locally are being deliberately designed to run on "the cloud", because privacy provides few opportunities to monetize user data.
Exactly why I use a legacy version of Photoshop which locks me into an old operating system.
For me it was being cheap: the "Creative Cloud" versions have no features that I want as a photographer. (I'd have a different opinion were I a graphic artist.) So why should I buy what I don't need?
So you think the republican party is going to oppose this? Based on what, exactly?
The report talks about child abuse, not child porn, but that is a minor issue. I see a problem with false positives. Apple finds a false positive and reports it. The police do a detailed scan of the defendant's phone; they don't find evidence of child abuse or child porn, but they find evidence suggesting (but not proving) other crimes. They keep digging and digging...until they find evidence of a real crime.
A reminder for the younger: under Clinton, the government passed a law expanding surveillance because, dammit, terrorism was so bad. They promised it would only be used for terrorism, since it was so extraordinary.
The government immediately began using it against drugs. When asked why, when they said it was only for terrorism, they replied, "Ha ha! Fooled you! The law doesn't say 'only terrorism'! "
The ha has were not spoken but only thought.
Anyway, these are the power hungry liars you are dealing with.
There have been people back in the get your photos printed at the wal-mart days that got in deep shit because their photo deck included a pic of their baby in the tub, and some prude saw it and called in the cops. Got their lives pulled apart over nothing.
This is going to be that on steroids.
I hope that the first person that is seriously harmed by this sues Apple for $100b and I hope I'm on their jury.
Can you elaborate on how you see that abusive conduct playing out?
This is so flagrantly in contradiction to things like the 4th amendment that it’s just stunning.
How can rational people not understand that things like this (and for that matter Biden’s permanent anti eviction thing) are against the law?
Obvious answer is that they do and they give zero shits about what’s right. And Apple had the gall to run ads recently regarding how committed they are to our privacy.
There's nothing that makes otherwise normal, reasonable people lose their minds and logical compass more than crossing crime with sex. In a rational world we'd end these destructive policies that treat unwell people as criminals and focus on prosecution of producers.
You might hope that the Left/Right divide on virtually any matter would carry over here but it seems the authoritarians in charge in both sides happen to be in agreement on this issue.
Exactly. And when you cross crime with sex. And CHILDREN.
Smug flip-phone user here. But I have a question.
Color correction—even slight, subjectively unnoticeable-except-by-experts color correction—can notably alter the histogram of a photographic image. For one thing, pixels near contrast margins can be altered so that previously-differentiated pixels become identical, at the cost of an almost-imperceptible loss of detail. If you do that, which anyone could do automatically to every image, what does that do to hash values? Do they become useless?
Yes, even a change undetectable to a normal human would prevent that automated hash detection from working.
In spite of that fact, there are lots of pedophiles in prison right now who got caught in this fashion.
If one ran a standard cryptographic hash such as MD5 or SHA-1 on an image file, changing a single bit would create a hash value that is far from the hash of the unaltered image.
In this child-porn detection task, that is a highly undesirable feature.
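A quick illustration of that avalanche behavior, using SHA-1 from Python's standard library (this is a generic property of cryptographic hashes, nothing Apple-specific; the bytes below are just a stand-in for an image file):

    import hashlib

    original = bytearray(b"stand-in for the bytes of an image file")
    tweaked = bytearray(original)
    tweaked[0] ^= 0x01  # flip a single bit

    print(hashlib.sha1(original).hexdigest())
    print(hashlib.sha1(tweaked).hexdigest())
    # The two digests share no useful structure: flipping one input bit changes
    # roughly half of the output bits, which is exactly wrong for detecting
    # near-duplicate images.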
That's true.
Nevertheless, lots and lots of people get caught in precisely this fashion.
If Apple does implement this system. I would conservatively estimate that thousands of users will get caught.
It's a general rule that you catch the stupid criminals, not the smart ones. But sometimes you can then get the stupid ones to rat out the smart ones.
Look up the specifications for PhotoDNA. That's what's (generally) used for these hashes, and it resists most manipulations.
What I'm more worried about is that Apple is in some places saying they're using a hash like that, and in others saying they're using their "neuralMatch AI, an ML system trained on 200k images of CSAM"... that's obviously an entirely different ballgame, in that false positives from petite 18-year-olds could potentially screw a lot of people who can't affirmatively prove they're 18 when both the "AI" and the human verifier think it's a minor.
SL,
Finally something I agree with you on. I also am a flip phone user.
As for your question: one should look up openly available image hashers and play with them. (Even a chapter of text is small in comparison, but you might want to play with text hashes for a start.)
The hash should be very different even if you change a small number of pixels. Is there a way around it?
I suggest study the NIST documentation if you are super serious
There are hashes that are specialized for image recognition, that are much more resistant to small changes to the image. They're also more computationally expensive...
If they're going to run the hash computation locally, as they say, they're either going to use a less resilient hash, or really load down your phone.
Indeed. The whole point with the hashes used is that images that are close should produce hashes that are close. The goal is to induce hash collisions.
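In practice, matching with such perceptual hashes is usually a "close enough" comparison rather than exact equality. A hedged sketch, assuming 64-bit hashes like the average-hash example above and a made-up distance threshold:

    def hamming_distance(h1: int, h2: int) -> int:
        # Number of bit positions where the two hashes differ.
        return bin(h1 ^ h2).count("1")

    def is_match(h1: int, h2: int, threshold: int = 5) -> bool:
        # Treat near-identical images as a match; the threshold trades off
        # missed detections against false positives, as discussed above.
        return hamming_distance(h1, h2) <= threshold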
Professor Volokh…Doesn't this entire question turn on the federal government's role here? If the federal government "instigates" or "encourages" Apple, and Apple "engages in the search with the intent of assisting [law enforcement]", that crosses the line, right? Isn't that the crux of the matter?
What happens when the rest of the OEMs jump on board: Samsung, Lenovo, LG, oppo, etc.? Or Google?
It's an is/ought question.
Is may depend on the government.
Ought, this is a bad idea period. Though not too hard to understand.
Ok, you answered my question. We agree - this is a bad idea by Apple.
Right.
Child abuse and child porn tend to make for some pretty bad broad policies in criminal justice.
Easy for me to say - I don't have kids yet.
I have two adult children and I absolutely think it is a spectacularly bad idea, because it will never, ever be limited to solely child abuse and child porn.
I am perfectly fine with the death penalty for child pornographers.
Why is it that the government prosecutes those who view kiddie porn but not those who use illegal drugs?
It would seem to me that the same distinction between producer/dealer and user would apply in both cases and (arguably) one can become addicted in either situation.
We don't prosecute drug addicts -- not even to force them into treatment. And????
Even you cannot possibly be stupid enough to think that the government does not prosecute illegal drug users.
If you were innocent but had to be classed as a drug addict or a CP consumer and have the public know about it, which one would you pick?
You are radically rewriting Ed's ridiculous comment: Why is it that the government prosecutes those who view kiddie porn but not those who use illegal drugs?
Other providers have been using similar systems for years, with no particular impetus I am aware of to expand it to other criminal activity.
Wait...what? Really? Can you give me an example?
Same here, C_XY
I don't see a basis to conclude that this is Fourth Amendment state action, FWIW.
Professor Kerr....Thx for the response.
I am still marveling at the monograph you wrote regarding 5A (Decryption Originalism). What a find.
I'd like a chance to study the work leading up to that conclusion. It seems like it might not be that open and shut an issue.
On the one hand, I glanced at that U Chi. prof's article about "jawboning" in the context of Facebook et al. and didn't find her theory convincing at all. I wouldn't find it any more convincing applied to this situation either. So in that sense, I'd agree with you.
On the other hand, if Apple turns out to have some kind of formal cooperation agreement with law enforcement—obviously it'd have to go somewhat beyond a generic policy statement about complying with valid law enforcement requests—then it doesn't seem like you can dismiss the issue so easily.
Given the amount of well documented public pressure applied (labeling Apple's policies in the public media as protecting terrorists and pedophiles, and constantly saying they're going to pass laws to force compliance), isn't it reasonable grounds to investigate whether they stepped over the line to impermissible coercion in private, or are you saying there is no such thing as impermissible coercion?
I don't know if that was aimed at me or Prof. Kerr.
Speaking for myself only, if you look at the "jawboning" cases—and it seems like they are few and far between with the doctrine hardly ever being invoked—they involve really specific facts that are lightyears away from what's going on with Facebook etc. Hopefully it's self-evident that politicians, even the Prez, can publicly criticize the business practices of private companies without magically creating state action every time they do.
I think investigation can be warranted in some instances, but there has to be a colorable basis for it. That basis doesn't seem to be present for Facebook et al. I don't know as much about this Apple situation to say though.
From the linked Apple announcement: "new technology in iOS and iPadOS will allow Apple to detect known CSAM images stored in iCloud Photos"
and "Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child safety organizations."
Assuming Apple is being honest in the announcement, there is a simple way for an iPhone or iPad user to avoid this invasion of your privacy: don't use iCloud.
Here's a wild and crazy thought, don't buy products from a company that assists Communist China in violating human rights.
Boycott Apple.
I don't think that works--as the second snippet you included states, they're also doing on-device matching.
Aside from the concept that child abuse and child pornography is obviously bad, there is nothing good about this decision whatsoever.
I'd be willing to argue that the creation of real child pornography is child abuse. The creation of simulated child pornography isn't, and might even enable pedophiles to get their fix without harming anyone.
Fake images could probably severely weaken or completely destroy the market for actual CP far more effectively than our current approach. But it's unspeakably controversial and a potentially career-ending move to be caught within a mile of countenancing such a thing, since destroying addicts is more important than anything else, including actually protecting children. This is real persecution, leagues beyond not having a cake baked for you or being called an incorrect pronoun.
I'm curious about the logistics of your suggestion.
What business would be willing to declare itself a creator of simulated child porn, and who will actually create it? I can't envision many people of normal persuasion who would be willing to sit at a desk and create such content, so is that business going to primarily hire sex offenders to do it?
Since there are laws against it, and people have been prosecuted for it, I'd assume it's out there. Otherwise there wouldn't be any need for the former.
Well, there are businesses engaged in providing hosting services for individuals who create simulated CSAM.
And there are businesses (in China) who create child-like sex dolls, which also seems useful in providing an outlet. Though some customers have been arrested. I think the weight of the evidence comes down on the side of preventing more hands-on abuse by providing an outlet rather than creating more by encouraging it, since a lot of pedophiles know how harmful it is and want to resist offending.
This is probably an unpopular take even on this blog, but, while people who only view the images certainly have deep-seated problems and need appropriate treatment etc., qualitatively they just aren’t as bad or culpable as those who actually produce the images in the first instance. Nonetheless it seems like there’s a rabid fixation and excessive devotion of resources to pursuing those in the former category. I guess I understand it from one perspective because those folks are the low hanging fruit in terms of identifying and convicting. (You just need to track the images, and then it doesn’t take much evidence to show they were accessible to the defendant.) But it still feels a little disproportionate. I’m not saying it’s a victimless crime either; I understand re-victimization can occur each time an image is shared. And of course I believe the hammer should definitely be brought down on those in the latter category.
"I guess I understand it from one perspective because those folks are the low hanging fruit in terms of identifying and convicting."
They're also the low hanging fruit in terms of planting evidence to frame people, not at all incidentally.
You act like this is not some common practice by the Deep State.
We do not live in a political thriller.
I'm not aware of full-on framing people, that's not to say it doesn't happen, but you raise a good point regardless. Another seeming waste of resources is the time and effort spent on what basically amounts to entrapment of people who aren't otherwise causing any trouble.
People seem to think this is just Apple. Many cloud providers who store photos for you scan them against the NCMEC hash DB. For example, Google:
https://support.google.com/transparencyreport/answer/10330933
I can't find an easy reference for what Microsoft does, but it was heavily involved in developing the system in the first place, so it would be surprising if they don't.
IIUC, this has been the practice for some time on most of the larger image-hosting sites. If your photos are on someone else's computer, they generally have the right to look at them.
These images are known and the hashes and algorithms are very good at matching just those photos. Imagine identifying the Mona Lisa - not a derivative work or a parody - but any very close reproduction except for changes to color, size, rotation, etc. Computers (and humans) can make that match with nearly perfect accuracy.
It sounds like this feature will use the same algorithms, only it's now done by your phone and the phone tells Apple if it finds a match. Of course the algorithms could be extended to recognize other images, text, sounds, conversations, and anything else. If it recognizes and reports on certain programs, we might call it virus detection software. If it reports who you call or your credit card numbers, we'd call it spyware or malware.
If they're only looking for CSAM, that's one thing. If they're looking for this week's flavor of BadThink, that's something else.
What's good to cook your goose is good to cook your gander.
I suggest that folks read Apple's CSAM Detection: Technical Summary
https://www.apple.com/child-safety/pdf/CSAM_Detection_Technical_Summary.pdf
Their scheme is no simple scanning of content and will require the "cooperation" of the user's device, including the storage of an encrypted hash table on the user's device.
Could such tricks have already been built into Microsoft's devices? I don't know
Hey, I just want to point out that I expressly flagged MS and other providers above! Where do I go to get my apology now? 🙂
Here is a question for IP Lawyer.
Is the hash of an image a "derivative work" in the context of copyright protection? If not, why not?
Nope.
There are so many reasons why not. I can offer a couple.
Even if you accept that an image hash is a "derivative work" as that term is defined by statute—which it's not—a hash in essence is just a really long number. You can't copyright numbers, or at least one single number. Cf. Feist v. Rural.
Like I alluded to above, while a hash is literally a "derivative" of an image, it doesn't meet the legal definition set out in 17 U.S.C. § 101. Again, a hash literally "transforms" an image, but it doesn't "recast, transform[], or adapt[]" the image in a similar fashion (no pun!) as, say, an "art reproduction", which is what the definition requires. You can't somehow try to force the hash value to be viewable as image data; that'd be nonsensical.
I'd even argue that a hash doesn't literally transform or derive from an image's visual representation. What it derives from is the numerical values of the underlying pixels, not how they're represented visually.
Moreover, § 103 provides that "[t]he copyright in a ... derivative work extends only to the material contributed by the author of such work." But nothing of the original image is "contributed" to the hash because the process's output doesn't incorporate any of the input; it's just a completely separate numerical identifier. So again, even if the hash could plausibly be considered a derivative, there wouldn't be any copyright to claim in the hash on behalf of the original image author.
I'm sure creative (no pun!) folks out there can think of numerous (no pun!) other reasons, or maybe even disagree, but this is a good stopping point. Also, I hope you weren't asking because you have plans to enforce copyrights on any CSAM material 🙂
Thank you for the reply. Very understandable if Apple only creates a hash.
However, in reading further of Apple's Technical summary and the reviewers reports, Apple says that for each image, they create and encrypt the relevant image information (the NeuralHash and a visual derivative of the original).
So in fact a derivative image is created from the original image by Apple's process
No problem! These are great questions. I just hope my responses also rise to the occasion.
Let me briefly revisit what I said above in response to this new info you provided.
First, it seems fair to assume without researching further that the "visual derivative" you mention would legally qualify as a derivative work. But even in that case, I still see at least a few practical hurdles to any enforcement against infringement. For one, how would you—"you" in this case meaning the owner of a phone that's been through this scanning process—be able to find out which images had improper derivative works created? To my admittedly meager understanding, the process is completely opaque to the user, so there's no way to tell which images were in fact flagged, thus resulting in the creation of visual derivatives. So there's a problem with discovering the basic facts needed to make out an infringement complaint. I don't think it would meet the pleading standards to just say "well, on information and belief, some infringing derivative works must have been created, but I couldn't really tell you which ones." You have to be at least a little more specific than that.
Another practical hurdle is, if we take Apple's word for it that, barring the exceedingly small chance of a false positive, only true CSAM material will ever be flagged, then who in their right mind would ever want to bring an action alleging infringement of such images? Not only that, but you also can't even file an action without registering the copyright first. So that means someone would have to be courageous/foolhardy enough to inquire with the Copyright Office about registering CSAM material. Of course, if it's not CSAM, then no problem. But you still have the other practical hurdle I mentioned above.
Second, this being Apple, I'd expect they crossed and dotted their legal t's and i's already. While I haven't looked into it, it seems fair to assume that whatever terms of service, licenses, purchase agreements, etc. you enter into when buying the phone give Apple a license to do exactly this kind of stuff with your device content. So even if you manage to surmount all the practical obstacles, they'll just put up a legal defense that you already licensed them to make derivative works for purposes like this.
It'd be interesting to hear if anyone else has a different take, but that's my general impression at least.
Note that the technical assessments provided by Apple (https://www.apple.com/child-safety/) are by two (very well known) cryptographers, and not by cybersecurity experts.
The protections Apple offers are likely cryptographically sound, but can easily be circumvented by malware, ransomware or Apple itself. In my view, they are meant to placate the public without providing significant technical protections.
Maybe the law will come to the rescue.
"Maybe the law will come to the rescue."
Thanks, a bit of light humor about now was appreciated.
"The protections Apple offers are likely cryptographically sound, but can easily be circumvented by malware, ransomware ..."
The statement "can easily be circumvented by malware, ransomware ..." is with no justification just pulled out of a dark nether region.
None of the expert reviewers said anything of the sort.
This venue does not lend itself toward providing citations, as it locks out comments after one link. There are many companies that know how to break out of sandbox, even on Apple devices ... just look at the FBI's previous attempts to get Apple to unlock a terrorist's phone.
The expert reviewers presented by Apple were two cryptographers and an AI researcher. None of them claimed to have evaluated the system - just the cryptographic protocols.
If you are sincerely interested in understanding the attack surface of a modern smartphone, may I suggest you start with this SoK - dated, but still relevant:
https://petsymposium.org/2016/files/papers/SoK__Privacy_on_Mobile_Devices_%E2%80%93_It%E2%80%99s_Complicated.pdf
Thanks for the citation. It is an interesting piece of information.
There seems to be a lot of conflicting information from Apple themselves about exactly what they are doing. In some statements, they've claimed they'd be matching hashes, which indeed would only reveal the presence of contraband (or content deliberately designed to have a hash collision with it). But in other comments, they talk about using their "neuralMatch AI", a machine learning algorithm trained on a set of 200k CSAM images from the NCMEC database. This, obviously, is extremely problematic, because it will *not* reveal only contraband; even with human confirmation, it will also flag adults whose bodies are substantially similar to a minor's. See the "Little Lupe" case, where a pediatrician expert witness swore on the stand, against a CSAM defendant, that the 19-year-old was not even a teenager; a small-chested, short, petite adult of 18 or 19 cannot be distinguished from a minor by humans, and I would have extreme doubts about an ML algorithm doing better.
So what happens when this neuralMatch system identifies what it thinks is a kid, and a human reviewer can't tell any better because humans are just as bad at it unless it's a baby or toddler? First, the irreversible stain of an arrest on a possession charge, and then a hope and a prayer that all they had was a commercial production with a 2257 record, or an actress willing to come to court to defend her fans, as Fuentes did.
One thing's for sure: your co-blogger Stewart Baker is probably whooping and hollering in celebration, and will probably pass out in orgasmic bliss if it turns out the government was successfully able to coerce Apple into doing this, then proceed to write all about how they absolutely can do that, and fuck your "4th Amendment", we've got pedophiles to hunt!
Although it's worth noting, drug dogs don't just alert on contraband either. They produce plenty of false positives, and alert when they sense their handler wants them to. Despite overwhelming evidence to the contrary, the Court embraced the fiction that drug dogs "only reveal contraband". So even if Apple was coerced, if courts would accept such a fiction for drugs, they very likely would for CSAM too, even in the face of numerous proven false positives, because obviously CSAM is much worse than drugs, so courts will be even more willing to accept the fiction to protect the constitutionality of methods targeting it.
All good points.