The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
OK to Use Evidence Indirectly Obtained Using Facial Recognition Software
The results of facial recognition software might not be admissible evidence—but the police are allowed to use them to generate admissible evidence.
From People v. Reyes, decided last week by Justice Mark Dwyer (N.Y. trial ct.); seems quite right to me:
Defendant Luis Reyes is charged with Burglary in the Second Degree and related offenses. Defendant moves to preclude trial testimony stemming from the use of facial identification software to identify the perpetrator of the crime in question. Not in issue here is that the burglar could be seen in crime scene videos. The case detective was able to recognize defendant as that individual after he examined a police file containing photographs of defendant.
Defendant's complaint is that the detective retrieved that file because of an earlier "hit" on defendant obtained by analyzing the crime scene videos with facial recognition software. Defendant asks the court to "preclude the People's use of the results of any use of NYPD's facial recognition software." …
For current purposes the court assumes these facts. On September 29, 2019, defendant committed a burglary at 507 West 113th Street in Manhattan, entering a mail room there to steal packages. Defendant's actions were recorded by security cameras. The detective assigned to lead the investigation obtained the videos. He made stills from them and sent those stills to the NYPD Facial Identification Section ("FIS") for analysis.
Using facial recognition software, the FIS located a single "possible match": one of the burglar's pictures possibly matched defendant's mug shot. The FIS returned a report to the detective bearing a prominent statement that the "match" was only a lead. The report specified that the match did not create probable cause for an arrest, which could be produced only by further investigation.
The case detective therefore obtained defendant's police file. In it he found a copy of the same mug shot along with photos depicting defendant's distinctive forearm tattoos. After studying those photos, the detective again viewed the crime scene videos and recognized that the burglar indeed was defendant.
He next prepared a probable cause I-card bearing three photos of the burglar taken from the crime scene videos—photos which displayed, among other features, his tattoos. The I-card did not include any other picture or defendant's name. On October 14, 2019, officers recognized defendant from that I-card and arrested him. {[T]he three photos of the burglar that were the key component of the I-card were in no way products of the software analysis of the crime scene videos.} …
"[F]acial recognition" involves the use of software to analyze the front and perhaps side portions of the head of an unknown person, usually as depicted in a photo or a video still. The software measures the location and contours of facial features, including hair. It next compares the results with those for photos of known individuals—photos that are digitally maintained for these comparison purposes—to select any possible "matches."
The authorities can then investigate whether the individual or individuals in the selected photos could be the unknown person. The results can show, for example, that an applicant for a driver's license has had licenses under different names.
To the best of this judge's knowledge, a facial recognition "match" has never been admitted at a New York criminal trial as evidence that an unknown person in one photo is the known person in another. There is no agreement in a relevant community of technological experts that matches are sufficiently reliable to be used in court as identification evidence. Facial recognition analysis thus joins a growing number of scientific and near-scientific techniques that may be used as tools for identifying or eliminating suspects, but that do not produce results admissible at a trial….
Some uses of facial recognition software are controversial. For example, many people fear that employment of such software will erode First Amendment rights by permitting unfriendly officials to identify and take action against those who demonstrate against government policies. That concern is especially pronounced given the ubiquity of security cameras in many big-city areas.
It may well be that legislative action should be taken to curtail the use of facial recognition techniques for such purposes. And this court understands what a "slippery slope" argument is.
The court notes, however, that these and other common concerns about facial recognition techniques seem dramatically divorced from the use of those techniques to develop leads from photographs of people taken as they commit crimes. And such photos are often obtained from private entities, not governmental ones, who employ cameras precisely to secure their safety from criminals.
That is in fact what occurred in this burglary case. No reason appears for the judicial invention of a suppression doctrine in these circumstances. Nor is there any reason for discovery about facial recognition software that was used as a simple trigger for investigation and will presumably not be the basis for testimony at a trial ….
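To make the court's description of the technology a bit more concrete: modern facial recognition systems typically reduce a face image to a numeric feature vector (an "embedding") and rank known photos by their similarity to it. Below is a minimal, purely illustrative sketch in Python with NumPy. The random vectors, the gallery, and the 0.6 threshold are hypothetical stand-ins; real systems derive embeddings from trained models, and nothing here describes the NYPD Facial Identification Section's actual software.

```python
# Illustrative sketch of the "possible match" step the court describes.
# Embeddings here are random stand-ins; real systems compute them with
# trained face-recognition models.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def possible_matches(probe: np.ndarray,
                     gallery: dict[str, np.ndarray],
                     threshold: float = 0.6) -> list[tuple[str, float]]:
    """Rank gallery photos by similarity to the probe image's embedding.

    Anything returned is an investigative lead only, not an identification.
    """
    scored = [(name, cosine_similarity(probe, emb))
              for name, emb in gallery.items()]
    return sorted((s for s in scored if s[1] >= threshold),
                  key=lambda s: s[1], reverse=True)

# Hypothetical usage: a probe from a crime-scene still against a mug-shot gallery.
rng = np.random.default_rng(0)
gallery = {"mugshot_a": rng.random(128), "mugshot_b": rng.random(128)}
probe = rng.random(128)
print(possible_matches(probe, gallery))
```

The point of the threshold is exactly the caveat in the FIS report: a score above it is a lead to be investigated by a human, not probable cause by itself.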
It seems no different than asking a witness to look through a book of mug shots.
Right. Except you now have the idiot savant aka the personal computer to do the looking for you, at least the first cut. Makes it much faster. But I agree, don't see why this evidence should be suppressed.
I think we agree that the idiot savant's testimony is problematic, but also seem to agree that non-idiot testimony based on evidence found as a result of the idiot's activities is admissible.
IOW, a bot-match is inadmissible, but rational intelligent human investigation based on bot preliminary hints should be.
FTR, I'm not entirely comfortable with the implications of facial recognition technology, but as others have suggested this is perhaps the worst case to present against it.
Of all the possible cases to take against law enforcement use of facial recognition software, this had to be about the worst.
Unfortunately, it's now a precedent that will be (mis)cited as support for use of facial recognition in ever more questionable circumstances.
Why shouldn’t they use it?
If you're worried that they'll catch more criminals, then argue to change the laws so fewer things are illegal.
The answer is what Prof. Kerr, who writes here sometimes, calls equilibrium adjustment.
The basic issue is you don't want a surveillance society. But technologies can get you there and obliterate the Fourth Amendment, by making things that used to be hard to do really easy to do.
A great example of this is GPS tracking of cars. For the most part, with very minor exceptions, GPS tracking does the same thing a human cop can do by tailing you, i.e., show everywhere you go. So of course prosecutors argued from the start that GPS tracking was just like tailing a suspect; the suspect, after all, had no interest in privacy in what they were doing on public streets in front of everyone. But the courts eventually imposed stricter rules on GPS tracking than they impose on tailing people, because if you just apply the same rule, the effect is to place more and more people under surveillance because it isn't subject to the limitations (chiefly manpower in the case of tailing people) of earlier technologies.
And so it is with facial recognition. The police could, in theory, put up cameras everywhere and constantly recognize everyone's faces (at least when there isn't a pandemic and people aren't wearing masks). But that would be a surveillance state in the way that police simply relying on witnesses and doing their own undercover work in public places would not be. So it has to be limited in some way.
Repeal laws, reform police behavior.
(Also stop using buzzwords.)
I'm sympathetic to a rethinking of privacy in the modern world. But you can't fight that war on a micro-battle basis.
For 30 years, I have been trying to compose a right of privacy amendment as simple and concise as our 1st or 2nd amendments. I failed. I can't define privacy in less than 3000 pages. A fundamental part of the problem is that it requires treating information as property and applying common law property rights to that. It doesn't work.
Privacy is dead. The best we can do is to try to make lack of privacy apply to governments also. No confidential conversations. No caucuses. No use of a computer except on record.
Privacy is a value independent of law enforcement.
For example, my personal life is above board and boring. I still don't want you spying in my bedroom.
Facial recognition is not related to privacy in any way.
it’s now a precedent that will be (mis)cited as support for use of facial recognition in ever more questionable circumstances.
The Court specifically said it was not admissible evidence, merely a tool to lead to (possibly) admissible evidence.
Can you tell us what "ever more questionable circumstances" you are referring to?
Playing Devil's advocate here...
I play backgammon as a hobby and have watched over the years as AI programs have increasingly dominated the discussion boards. In the early '90s they were a curiosity; a decade later they were accepted as mostly correct but flawed in certain positions; today they are basically accepted as correct (with a few known exceptions and a few skeptical human holdouts).
If AI face recognition software becomes admissible, many jurors will accept it as "truth" because it comes from "the computer". We're not there yet with regard to admissibility. I wouldn't be so sure about a decade from now.
But computers are better at backgammon and chess than humans, so it is likely they will be better at facial recognition too, at some point. If, in fact, they do become demonstrably better, would you rather have a jury look at photos and use their own judgment (with notoriously unreliable results, particularly in cross-racial identification when the subject is accused of a crime) or a computer using an algorithm that has been shown to be accurate to whatever degree?
There are dangers of allowing the evidence to be admissible before it really is better than human comparison of photo to photo or photo to face and without proper study to ensure there isn't racial, ethnic, or other bias (i.e., allowing it to be admitted when it is as unreliable and as unscientific as bite mark evidence), and, as with DNA and fingerprint evidence, there is always the danger of misusing the technology, tampering with samples to get the "right" result, etc. Frankly, I think we need better procedures for all of that sort of evidence to ensure the analysis is truly blind to what some human wants it to be for good, misguided, or actively ill motives.
So, I think this was an easy case: there is no impropriety in using the technology to get a list of suspects or identify a particular suspect, and I think it would currently be improper to allow the evidence to be admitted at trial. But the results in chess, backgammon, go, and driving, among many other applications, show that computers are better at some things than humans are, and facial recognition might be one of them. And just as computers plus humans are (last I checked) better than either humans alone or computers alone, this isn't about giving facial recognition software the final say in who is investigated, arrested, charged, and convicted. And it seems the facial recognition software is going to be most useful for investigations.
In sum, I think your prediction is right, and I think there are serious civil rights concerns that we should begin discussing in earnest, to try to stop abuses before they are common or, worse, it's too late.
I'm not sure that algorithms are better than people at the actual recognition; their primary value (at least for now) is in the ability to rapidly compare and "recognize" faces in databases. It's a great tool and I hope we use it more often. As for sanitizing the source of the photo to put in the lineup, that sort of thing happens all the time. I've had more than one case where the source/method that led to a suspect was privileged. The suspect's photo was put into a lineup with an appropriate number of fillers, the victims identified suspect, and voila, we had our guy. The original source/method never needed to be disclosed.
From my perspective, the libertarian complaining should focus on the prevalence of cameras, not the use of the data by criminal investigators.
I also play backgammon, and noticed the same thing (AI).
Although, I'd say the old men in Bryant Park could take AI-driven backgammon players. 🙂
Do we really want constant AI scanning of streets, or football stadia, to find potential crooks, even if it is just flagged for human review?
The problem with a panopticon is its potential for misuse. Do you want a database deep in an NSA building, fed with continuous facial recognition of all Americans so they know where everyone is at all times, but they promise to write down all accesses, complete with judicial approvals?
Yeah, I struggle to see an issue here. One could certainly argue that it would be overly prejudicial for a court to allow testimony that a computer matched the person in the video to this person, but that's not what happened here.
This is no different than police issuing a "Have you seen this person?" alert and then someone calling up and saying, "That picture looks like John Smith."
So... what was the point of the facial recognition if they just issued photos from crime scene to the patrol cops?
1. The detective gave a still photo from the crime scene to the NYPD Facial Identification Section (FIS).
2. FIS returned a match, i.e., a name.
3. The detective then obtained defendant's police file, which had a mug shot and photos of the defendant's tattoos. (Key note: if the defendant had not already had a police file, the process would have stopped right here and the detective would have had to revert to other traditional investigative techniques.)
4. The detective then prepared an I-card with three photos of the defendant from the crime scene videos, which displayed his tattoos. The I-card did not include any other picture or defendant's name.
5. Other officers recognized defendant from that I-card and arrested him.
So overall, this court decision was appropriate.
The surveillance camera was not a govt camera; the computer facial recognition part was a facilitator and did not produce actual evidence; and the detective used established protocols and procedures.
Doesn't really seem to be responsive to Bubba Jones's question, which I share: it sounds like the "I-card" was prepared exclusively with information from the surveillance video, and would have looked exactly the same without the facial recognition match. That makes the defendant's suppression argument even weaker, of course, but it does raise the question of why the detective bothered using the system in the first place.
Well...if you want to defeat facial recognition software, wear a mask. 🙂
Certain government registration requirements are accepted, e.g., for motor vehicles and firearms. Can the government require tattoo parlors to register recipients and maintain photos of tattoos? Or would such a registry fail for lack of a compelling state interest or an acceptable health, safety, and identification rationale?
This sounds like a case for Bob Loblaw: Why should you go to jail for a crime that someone else ... noticed?