The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Clearview AI, the First Amendment, and Facial Recognition
An interesting commentary by Clayton Kozinski (Lehotsky Keller LLP) on the lawsuit in which the Duke First Amendment Clinic, Jane Bambauer, and I filed an amicus brief (which unfortunately didn't persuade the judge); here's the opening:
The conversation about facial recognition technology typically centers around privacy. But an ongoing lawsuit in Illinois shows that it has just as much to do with free expression.
Clearview AI is the defendant in ACLU v. Clearview AI. It produces powerful facial recognition technology used by law enforcement across the country. Like all facial recognition software, Clearview's is powered by faceprints.
The Illinois Superior Court recently rejected Clearview's motion to dismiss argument that the Illinois Biometric Information Privacy Act (BIPA) impermissibly infringes its First Amendment rights. BIPA prohibits companies from collecting "faceprints" — geometric measurements of facial dimensions — without first obtaining individual consent….
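[To make the "faceprint" idea concrete: below is a minimal sketch of how faceprint generation and matching generally work, using the open-source face_recognition library as a stand-in. Clearview's actual pipeline is proprietary; the library choice, file names, and match threshold here are illustrative assumptions, not anything the company has disclosed.]

```python
# Minimal faceprint sketch (illustrative only; not Clearview's method).
# The face_recognition library reduces each detected face to a
# 128-dimensional vector -- the "faceprint" -- and compares vectors
# by Euclidean distance.
import face_recognition

# Hypothetical image files, standing in for a reference photo and a
# photo scraped from the public web.
known_image = face_recognition.load_image_file("known_person.jpg")
scraped_image = face_recognition.load_image_file("scraped_photo.jpg")

known_prints = face_recognition.face_encodings(known_image)
scraped_prints = face_recognition.face_encodings(scraped_image)

if known_prints and scraped_prints:
    # Smaller distance means more similar faces; 0.6 is the library's
    # conventional cutoff, not a legal or industry standard.
    distance = face_recognition.face_distance([known_prints[0]], scraped_prints[0])[0]
    print("match" if distance < 0.6 else "no match", f"(distance={distance:.3f})")
```

Note that the vector summarizes facial geometry; it is not the photograph itself, and it generally cannot be turned back into a recognizable likeness.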
Facial recognition is a product. It should be subject to product liability. To deter.
There is a Free Speech right to receive information, especially in public, of public interest, such as crime, and not requiring a court order. Liability should come from misidentifying an innocent person as a wanted criminal. But eyewitness testimony is totally invalid, nearly worthless, and facial recognition is superior.
The judge is a corrupt, worthless, pro-criminal extremist leftist. She is in the mold of the Soros prosecutors winning elections. Chicago deserves its war-zone-level crime rate for tolerating such criminal-loving judges.
Can a mask induce the recognition of another person, and implicate them in a deepfake way? For example, Trump seen shoplifting underwear at Walmart? Deepfakes causing damage should be criminalized. Can a porn movie with the face of Elizabeth Taylor be an infringement? How would her estate recover payment for damages? The answer: there should be a law-enforcing app. The lawyer profession is ridiculously behind the technology.
What I would like to know is - before the pandemic you could get pulled over and possibly arrested for disguising your face by wearing a mask. Now, of course, you get hassled for not wearing one. Facial recognition depends on actually being able to see a face.
Can we wear masks in public all the time after the pandemic?
Facial recognition can identify you from the upper half of the face alone. It can also identify you by watching your legs and the way you walk; Watrix claimed that two years ago.
Weird how someone who claims to hate lawyers is always looking for more litigation and more prosecutions.
I am your best friend in life, your only friend in life. Everyone wants you dead because you lawyers stink so badly. I want you better.
Dude's done some serious clerkage. ASG in the DeSantis administration?
Yes. Perhaps unsurprisingly, one of Clayton’s impressive clerkships was for a judge (Kavanaugh) who, like Eugene, clerked for Clayton’s father, Alex Kozinski, on the 9th Circuit.
If I were to manually go through thousands of photos, and use that information to develop an algorithm, or even write a book, on how to detect faces (which is stupid because such an algorithm is preprogrammed in people's brains, but whatever), that would be protected, right? I'm just reading faces. And commenting on them.
So I suppose the theory here for the judge is using a scraper to get those faces is bad? How does one make that distinction?
I'm reminded of the mosaic theory in the Fourth Amendment.
It's the generation of the faceprint, without permission, that is illegal. It's that it automatically analyzes photographs to produce a unique faceprint that identifies a specific individual well beyond anyone's expectations in a public forum. BIPA equates that information to private information, much like a medical record. EV's brief fails to engage with these sorts of facts, AT ALL. Which is probably why the judge didn't find it convincing. Putting your head in the sand with regard to the fundamental basis of BIPA and its distinctions from other speech regulations isn't a legal argument; it's a LACK of a legal argument.
"... identifies a specific individual well beyond anyone’s expectations in a public forum."
People think it's unlikely that their face can be recognized from photos they post on social media? What rock have they been living under?
A facial print is not that kind of recognition. I do not expect someone to be able to determine my fingerprint just because they saw my hand.
A "faceprint" from a digital image is a mathematically summary of certain highlighted pixels. It is not a fingerprint.
Incidentally: Seeing your hand can reveal your fingerprints.
It's a comparison, I didn't say they were the same thing.
Further to that, your article makes my point for me. No one has the expectation that their hand being viewed in public will result in the police creating a fingerprint and entering it into a database.
But they should.
If it's common knowledge that "putting my face, which the police are looking for, in a public photo" is a bad idea ...
What's the difference from "putting my fingers, which the police are looking for, in a public photo"?
I mean, yes there are differences. But are there *Constitutional* differences? Where in the Constitution are those differences?
But it's not common knowledge that a picture of your hand can yield your fingerprints, so I think you are confused about its relevance to a circumstance where your zone of privacy is a subset of the basic expectations people have. People do not have the basic expectation that their general appearance in public would result in any of this. Your saying that they should isn't a good argument for your side, as it is tacitly based on the fact that they DO NOT. In the law, the issue is what people's expectations ARE, not what they SHOULD BE. Caps for emphasis because there are no italics or bolding here, not to be rude.
MichaelP below made my point better than I did, so take a look there.
I think that if you are in public, anything seen there is fair game. If I see your face and generate a description of it, you don't get a privacy right to restrict my use of that description - even if the description of your face I generate is done using an electronic tool.
I know that some courts disagree; that does not mean they aren't wrong.
If I gather your hair in public, submit it to a 23andMe-like company, and then publish the results, is that okay because it happened in public? Because I just used an electronic system to perceive what was otherwise entirely in public? This is a genuine question. If not, why is it different here?
Or maybe it makes sense that if you want to take the hair I left in public and use it to sequence my DNA (or whatever that process is, I do not know), you first need my permission. That's essentially what BIPA says about computer-generated information-maps derived from that which is otherwise generally perceptible in public (voice maps as well, for example).
I don't see why not?
I seem to remember that police are allowed to do warrantless gathering of fingerprints or DNA samples by following people around, and collecting drinking glasses or trash (as long as it doesn't involve trespassing). I don't see how this would be particularly different.
I might be embarrassed by it, if my publicly presented genetic identity was somehow important to me and contradicted by the 23-and-me report (calling Sen. Warren to the white phone). But people can be embarrassed by many things they do or say in public, too. If you are truly concerned about it, don't go around in public without taking steps to prevent it.
Especially since this sort of thing is nearly impossible to prevent, and it is almost as hard to punish everyone who does (or can do) it.
I don't think most people would agree with you that a private entity should be allowed to publish your genome merely because they picked your Starbucks to-go cup out of the garbage, and I believe it'd probably even be a HIPAA violation. I wasn't asking whether the police are allowed to gather evidence at the scene of a crime.
I'm not sure what steps you think anyone could take to prevent their DNA from escaping in public (travel in a hypoallergenic tube or something? it's a farcical suggestion), but I think it speaks to your understanding of the actual underlying circumstances, your ignorance of that reality and its implications, and how that leads you to the outcome you provided.
Also, I think such a scenario raises a 'public disclosure of private facts'-type issue, which is one of the underlying reasons why publishing such information would be a HIPAA violation despite its being obtained in public (it's your private information, regardless of the fact that we leave trails of our DNA everywhere, and the law recognizes that the information DNA imparts is of the most personal kind).
Now, I'm not familiar with all of EV's work but, failing to see him make this same point against HIPAA or the privacy torts, I don't really see the consistent intellectual basis underlying the argument.
Many people have strange ideas about privacy in public - such as getting angry at people overhearing them when they yell at their phones in a restaurant. This doesn't mean they are correct.
Also, it is actually very difficult to get a full DNA sample from a hair or two, much less from the few skin cells left on a drink cup pulled out of a trash can. The fact that you seem to think it is easy says more about your understanding of what is possible than mine.
As for preventing it: If you are worried about leaving hair behind, wear a hair-net. If you are worried about fingerprints or skin cells left behind from your hands, wear gloves. If you are worried about the dead skin cells that invisibly flake off into the air, I'm going to laugh at your paranoid delusions - public places are so full of that sort of thing that you would never be able to get a sample.
If you decide to go full ignorant paranoid, you can always ruin the day of any super-tech bio-stalkers following you by scattering false data around. Simply collect the dust, hair, and leavings from public places and put it into a little baggie you use to disguise your own.
Finally, HIPAA has nothing to do with 23-and-me. Even I know that much.
A fingerprint is not like a "faceprint" that way. A faceprint is essentially one way for a computer to recognize a picture of your face; it typically does not have enough detail to reconstruct what people would think is a good likeness. People don't typically look at hands or fingerprints to identify each other, though, so I am struggling to understand your point.
There is also a distinction between what private companies do -- which is the question here -- and what government, and especially law enforcement, does. The latter is in many ways a more serious problem, but also easier to address because we have a lot of existing rules about what makes a sufficient predicate for opening an investigation, or arresting or indicting a suspect.
For private companies, I see this as essentially the usual US vs EU approach, with Illinois adopting a more EU-like approach. That's obviously not a constitutional argument that invalidates BIPA, and I don't find the First Amendment argument very strong either. I think a better argument is that a haphazard collection of differing state laws makes it unreasonably hard for anyone to understand what rights they do have, either in terms of privacy or data processing.
The point of a fingerprint isn't to reconstruct a "good likeness" of you, so I'm not sure what point you are trying to make with the comparison; if anything, it seems like you are making MY point.
I'm saying that a faceprint is like a fingerprint in that it's effectively a unique identification of an individual that is derived from their general appearance but in a fashion that humans cannot do. So, while we can see faceprints and fingerprints, they are of no use to us. They are useful to computer systems to facilitate searching and in that sense I fail to see a meaningful difference between fingerprints and faceprints.
As to the rest of your post, I generally agree. I don't see the issue with a more EU-like approach, personally. To your point, I think BIPA can serve as a basis for establishing those rights, versus the absolute void that otherwise exists.
Also, fingerprints are exactly used to identify people. What other use do they have to law enforcement? What other use do they have in private business, where their only use is to identify you as a person entitled to access whatever system you had to scan your finger or hand in order to access?
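[For concreteness on the point that faceprints are "useful to computer systems to facilitate searching": a hedged sketch of the nearest-neighbor lookup such a search involves, assuming 128-dimensional encodings like those produced by common open-source libraries. All names and data here are invented for illustration; systems at Clearview's scale would use approximate-nearest-neighbor indexes rather than this brute-force scan.]

```python
# Sketch of faceprint search: given a database of faceprints keyed by
# identity, return the identity whose print is closest to a query print.
# The database here is random stand-in data, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
database = {f"person_{i}": rng.normal(size=128) for i in range(10_000)}

names = list(database)
prints = np.stack([database[n] for n in names])  # shape: (10000, 128)

def search(query: np.ndarray, threshold: float = 0.6):
    """Return the best-matching identity, or None if nothing is close enough."""
    distances = np.linalg.norm(prints - query, axis=1)
    best = int(np.argmin(distances))
    return names[best] if distances[best] < threshold else None

# A faceprint computed from a new photo would slot in as the query;
# here we query with a print already in the database.
print(search(prints[42]))  # -> "person_42" (distance 0.0)
```

This lookup is what turns an isolated photograph into an entry in a searchable index, which is what the commenters above identify as the step BIPA targets.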
I think you can make a case for Clearview being able to gather face-scan data in public, but I think that right crumbles when they try to sell that data without obtaining permission. I'm sure they are going to argue that the faceprint database is transformative, but I don't think copyright law applies here.
Clearview's product poses serious challenges to witness protection programs, does it not? Suppose a witness has a child who has been photographed before the need for witness protection arises. After the family has been relocated, a search can seek the child's face in post-location photographs, e.g., school class photos, or photos taken by other children, in which people are tagged. The same idea might be applied to the witness, or her/his spouse.
I've actually been wondering about the other side of that particular coin... the bulk of the FBI and DEA come from a few select schools. It seems that if you were running any sort of criminal enterprise where infiltration is a concern, investing in facial recognition for faces which have appeared in specific towns and universities over the past several years would be worth looking into. A couple of agents go missing in a way that is tied back to this type of tech, and you will see a huge change in tone.
Concur with the concern, it's a legit thought experiment.
As a practical matter, I am less convinced that the Ivy-and-other-select-school types are the folks actually doing the infiltration. They're the puppet masters, not the puppets ...
There's an interesting sci-fi book, I forget the name, where the government's panopticon is coming to fruition: many cameras, facial recognition, license plate readers to track vehicles, even tracking to estimate where you are in a large building (Bluetooth and WiFi, both what the phone broadcasts and use of the signals as a sort of clumsy radar locator of objects; this kind of complex signal analysis is a thing now).
Anyway, if you wanted to know where anyone was, plug in their name, and the database tells you, live, where they are. Building, car, restaurant, out walking.
This also provides a time record of the past, which makes Eye in the Sky look like some kid's summer project.
And now a private company does it, and is making non-trivial headway? And wants to sell it to that very same government we don't want having that power?
These freedoms were understood in a context that such tracking is hard and labor intensive, be it government or a private eye.
Not type in a name, push a button, and there so-and-so is, live.
This is an early warning of danger. Misuse of public data to hurt political opponents via doxxing and so on is already rampant and growing (e.g., using public records of who signs a petition to get an issue on a ballot to harass and intimidate people into not doing it).
I said it six months ago. People like the Borg in Star Trek, and imagine themselves mentally tough enough to resist it. But what if this is how it starts, with a panopticon and a social credit score to kick you out if you think wrong things*.
* As defined by pols on the take happy to promise anything in the direction the wind is blowing.
I don't know the solution. But I know the endgame that cannot come to pass.
Or, if it does, politicians should be the first to be tracked 24/7, and publicly. Their position, who they call, where they go, where every penny transferred electronically goes.
The Tyrant King George III would have loved it all. Phone metadata itself would have allowed the easy crushing of the revolution, to say nothing of forced government encryption backdoors, to say nothing of phone companies building scan-and-report frameworks for things government decided shall be illegal.
If you think that Google (with access to almost everyone's photos, search histories, and a strong AI) and Facebook (which is Facebook) aren't already most of the way there, then you're probably kidding yourself.
Advertisers are yet another puzzle piece. They've done the legwork three-letter agencies dream of -- scanning everything you do online and building a little AI profile pseudo-person so they can predict things you might be interested in buying, or (poorly) guess at political proclivities and start shoving "are you a patriot?" ads with pictures of Trump in front of your face.
How does this case show it's a First Amendment issue when the court rejected that argument? Seems incredibly specious to me, anyway.
"without first obtaining individual consent…."
Can't you read the sign?
I would expect this in every business that requires proof of vaccination:
"entry to these premises constitutes consent to be photographed for any purpose acceptable to the owner"
It seems to me that as technology improves, our Justice System is going to need to go through some major changes.
Identity, position, activity, financial details, and in many cases future intent will be assessable at a great distance with precision and positional accuracy beyond any reasonable degree of privacy. In order to be "secure in their persons, houses, papers, and effects, against unreasonable searches and seizures", we will need to create a legal fiction of "knowledge" for state actors such that they will require a warrant to "know" things that are plainly visible to their detection devices. There is precedent for this in Kyllo v. United States, which involved warrantless thermal imaging of a house for drug-search purposes.
This case of faceprinting falls in that category. While I might agree with Eugene that the First Amendment as it stands should preserve the right of individuals to use that information as they see fit, the fact that it was being used for police purposes changes things. The police should be bound by law to be "blind" to things that are not available to them without a warrant, and so that would justify restrictions on the state's ability to purchase Clearview's product.
These technological advancements create issues beyond state authority, and impose a burden on the First Amendment even for private individuals. I've always been a First Amendment maximalist, but I fear that the Fourth (and the other implied rights of privacy) are quickly coming into conflict with the First, and the First might have to give way and have limits imposed with regard to private information.
"The police should be bound by law to be “blind” to things that are not available to them without a warrant,"
So a cop on the beat who sees a robbery should go back to the station and get a warrant before taking any action?
Really?
If every reasonably foreseeable cop were blind such that they could not see the robbery, maybe? I'm not sure I follow your hypo. The hypothetical cop saw the robbery. What is he supposedly blind to?
I obviously was not clear enough. The issue is that detection devices have gone far beyond magnifying glasses and binoculars, and beyond the need for beat cops to actually witness the alleged crime.
There's a line to be drawn, and where it would be drawn is a legitimate question, but I think that we should simply set a presumption of a warrant requirement if you use any new detection device/technology, rather than the presumption falling on the warrantless side.
I agree, but resistance is futile. All these technologies will continue to get better and cheaper according to Moore's Law. Eventually, they will sell for 100 million devices per cent. Eventually, they will be so small that they can be distributed by the trillion by opening the window and letting them scatter like dust to every corner of the world.
Only complete Luddism could stop it. Make research, manufacture, sale, or possession of anything electronic a felony. Short of that, resistance is futile.
I've included this unfortunate probable outcome. If some will have it, then all should have it, especially The People when watching the politicians.
The Tyrant Kings should not have this power they can trivially abuse against political enemies, but if it is inevitable, everyone else should be able to use it against them.
The process of analyzing and creating a faceprint is equivalent to the creative processes that unavoidably precede expressive activity.
That, at least, strikes me as far from self-evident. Does anyone even begin to understand the "creative processes that unavoidably precede expressive activity"? Are those processes even alike for various kinds of expressive activity? Are they alike for all creators, including human creators with atypical cognition? How about all automated creators? All the same?
What is hard for me to understand is why EV confidently insists on staking out for 1A protection every kind of novel, non-human creation, before anyone even knows how they all work, or what the implications will be. The effort seems poorly thought out, and consequently unwise. Why rush to do it?
Gotta agree here. This is like saying "recording a license plate is equivalent to the creative processes that unavoidably precede expressive activity".
I mean, I can see that there's a mosaic theory-type problem. But make that argument. This one is ... not persuasive.
In more concrete terms: you can scan a crowd. You might capture both Proud Boys and Black Bloc ... from the license plates of the cars they show up in, or their faceprints in a mixed crowd.
I'm not convinced that the "identity" aspect is the same as the "expressive" aspect in this hypo.
Why? Because EV doesn't have much to say, otherwise, and he's trying to stake a reputation as a 1A-maximalist scholar.
Looking to copyright, creativity is viewed as basically anything except rote copying. I think EV is trying to say that, undoubtedly what is occurring is within the vein of what the law considers to be the "creative process". Does the computer compare some elements to other elements of the photo, and then to other photos? EV is saying, great, that's essentially what the mind does as well.
I agree with you that to rest an argument so flatly upon what is a baseless equivalency doesn't serve the purpose EV intends it to and really only serves to open this up as a free-for-all.
While some folks are concerned about social media as a source—and they should be—probably the first places police will look for data sources will be driver's license photos, passport photos, and any photo databases already in the hands of law enforcement.
Also? Is it clear that the focus of concern should be on the creation side of the faceprint question? How about the use of them? How can faceprints become most useful, if not by systematic law enforcement dragnets of all images published online? Are there legal implications for that?
I think BIPA posits that the issue with the faceprint isn't that they are created, it's that they are created to be used, and that their creation is itself a use of the underlying data, and BIPA also does regulate the use of information-maps derived from other data. If anything, this is what BIPA is great at.
Are there legal implications for warrantlessly exposing people to automated photo dragnets which are bound to deliver rogues-gallery-type photo lineups of multiple suspects whose published images look nearly alike to an algorithm?
More generally, is there reason to be cautious before awarding constitutional rights to automated processes? Seems like situations are commonplace where one individual's claim of rights competes with another's. Sometimes one claim has to be subordinated to the other. What purpose could be served by subordinating a marginal claim of rights from a natural person to a seemingly stronger claim of rights from an automaton? Adjust relative strengths of such claims to deliver an array of dilemmas.
Suppose x-ray or similar technology were perfected to the point where hand-held cameras could show pictures of people under their clothes, what's going on behind walls, etc.
Protected by the First Amendment?
It is, after all, “collecting and gathering information.”
If not, why do businesses get to protect trade secrets, but ordinary people don’t get to protect personal secrets? Why should the First Amendment so favor businesses?
I would say there is a slight difference, because currently things like x-rays require that the camera first send out a signal to penetrate the wall/clothing/whatever.
Cameras, on the other hand, are usually a passive instrument that merely records signal originating from the target (or reflected by the target from some third-party source the target knows about).
In my mind, that matters.
So, flash photography is not okay?
If there were no other light sources at all in the public place where this photographed person supposedly is, then flash photography would get into an interesting edge case.
But since I assumed we were talking about the real world, where there would already be light shining on the person (and they would know about it), I don't see how flash photography would somehow change the situation.
Toranth, most of the money made doing professional photography is earned by folks who are experts at manipulating the light, not by folks who are expert at manipulating the camera in existing light—although that latter group does include some of the most celebrated names in photography.
If your professionals are expert enough in the usage of visible light to penetrate privacy measures - clothing, doors and walls, etc - then I'm very impressed.
And no, I would not oppose bans on upskirt shots for the same sort of reasons.
X-rays are actually not the best example of imaging with non-visible parts of the EM spectrum. Basic IR cameras can already see much that is hidden from the human eye. I would be surprised if specialized cameras with band-pass filters could not be constructed that reveal much, relying only on sunlight as the incident radiation.
Interesting problem - logged out and back in at Reason/Volokh on this thread, because a comment failed with a 303 error, and Apple was not recognizing my face.