The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
The false narrative about bias in face recognition
The biggest beneficiaries of the narrative? Chinese and Russian technology firms
If you've been paying attention to press and academic studies in recent years, you know one thing about face recognition algorithms. They're biased against women and racial minorities. Actually, you've probably heard they're racist. So says everyone from the MIT Technology Review and Motherboard to the ACLU and congressional Democrats.
There's just one problem with this consensus. It's wrong. And wrong in a way that has dangerous consequences. It's distorting laws all around the country and handing the global lead in an important new technology to Chinese and Russian competitors.
That's not to say that face recognition never had a problem dealing with the faces of women and minorities. A decade ago, when the technology was younger, it was often less accurate in identifying minorities and women.
…
Two agencies that I know well—the Transportation Security Administration and Customs and Border Protection (CBP)—depend heavily on identity-based screening of travelers. As they rolled out algorithmic face recognition, they reported on the results. And, like NIST, they found "significant improvements" in face recognition tools in just the two years between a 2017 pilot and the start of operations in 2019. Those improvements seriously undercut the narrative of race and gender bias in face recognition. While CBP doesn't collect data on travelers' race, it does know a lot about travelers' country of citizenship, which in turn is often highly correlated to race; using this proxy, CBP found that race had a "negligible" effect on the accuracy of its face matches. It did find some continuing performance differences based on age and gender, but those had declined a lot thanks to improvements in operational factors like illumination. These changes, the study found, "led to a substantial reduction in the initial gaps in matching for ages and genders": In fact, by 2019 the error rate for women was 0.2 percent, better than the rate for men and much better than the 1.7 percent error rate for women found in 2017.
…
In short, the evidence about bias in facial recognition evokes Peggy Lee's refrain: "Is that all there is?" Sadly, the answer is yes; that's all there is. For all the intense press and academic focus on the risk of bias in algorithmic face recognition, it turns out to be a tool that is very good and getting better, with errors attributable to race and gender that are small and getting smaller—and that can be rendered insignificant by the simple expedient of having people double check the machine's results by using their own eyes and asking a few questions.
One can hope that this means that the furor over face recognition bias will eventually fade. Unfortunately, the cost of that panic is already high. The efficiencies that face recognition algorithms make possible are being spurned by governments caught up in what amounts to a moral panic. A host of cities and at least five states (Maine, Vermont, Virginia, Massachusetts and New York) have adopted laws banning or restricting state agencies' use of face recognition.
Perhaps worse, tying the technology to accusations of racism has made the technology toxic for large, responsible technology companies, driving them out of the market. IBM has dropped its research entirely. Facebook has eliminated its most prominent use of face recognition. And Microsoft and Amazon have both suspended face recognition sales to law enforcement.
These departures have left the market mainly to Chinese and Russian companies. In fact, on a 2019 NIST test for one-to-one searches, Chinese and Russian companies scored higher than any Western competitors, occupying the top six positions. In December 2021, NIST again reported that Russian and Chinese companies dominated its rankings. The top-ranked U.S. company is Clearview AI, whose business practices have been widely sanctioned in Western countries.
Given the network effects in this business, the United States may have permanently ceded the face recognition market to companies it can't really trust. That's a heavy price to pay for indulging journalists and academics eager to prematurely impose a moral framework on a developing technology.
OK, it works. But that is not my issue with it. I don’t like facial recognition because of privacy and government (and big business) overreach.
I see a pretty girl across a crowded stadium. Now, I have her phone number, her email, her health record, her school records, her address, her contact list, her shopping habits and preferences. I have her sexual activity from the smart mattress or from her smart watch tracking her bedtime activity, or from her smart TV recording her intimate conversations and movements in bed. I know where she is at all times.
Is that OK with you, lawyer punk ass bitches?
Imbecilic Incel.
The fake bias allegation comes from the high rate of crime committed by diverses. They commit 4 times the violent crimes, as measured by crime victimization surveys, the Gold Standard of crime measurement. They cannot use genetics as an excuse. Their criminality is explained by their bastardy rate. Diverses from intact families have low crime rates.
Stop being a denier, Queenie. You are a serial denier.
But that is not my issue with it.
Perhaps, but that is the subject of the piece you’re commenting on.
Molly,
If you really think that the US intelligence machine will ignore this technology you are terribly naive.
Don, she’s clearly agreeing with you.
We know the US oligarchs enriching themselves by making us miserable. I would like to know the names of the Chinese oligarchs. Kill them all and their families, down to the last kitten. To deter.
Nuttier than a squirrel’s wet dream.
Should the persuasiveness of this argument concerning bias and minorities be influenced by the point that it is being advanced at a remarkably White, archaically male blog?
Judging by a quick skim of the front page, the Lawfare contributors seem to be reasonably diverse.
It is being advanced at the Volokh Conspiracy, too. That is the White, male blog publishing these comments.
This also seems a reasonable opportunity to ask anew: What in the hell are the authoritarian right-wing stylings of Stewart Baker doing at a self-described “often libertarian” blog at a self-described libertarian website?
Sheepish conservatives masquerading in garish, unconvincing drag are among my favorite culture war casualties.
Can you imagine Rev. Kirkland getting a DNA test and finding out he was part Caucasian? I’d pay to see that.
Just like you’d pay to see a ‘rasslin match, a NASCAR race, a faith healing convention, a rattlesnake-juggling exhibition, a gun bash, a Morgan Wallen show, a Confederate memorabilia festival, etc.
Burn.
The punk ass bitch Dean was intimidated by thugs.
https://legalinsurrection.com/2022/01/profile-in-cowardice-georgetown-law-dean-bill-treanor-suspends-conservative-legal-scholar-ilya-shapiro/
Almost all talk about face recognition is false narrative. It’s just a technology to find and/or confirm identity of people. That’s all.
Every other concern about it is really a concern about the people operating it. Maybe you don’t trust them. Solve that by dealing with those people somehow. It’s a human problem, not a technical one. If you can’t trust people with face recognition, you can’t trust them without it.
Technology doesn’t get un-invented. And automatically identifying people is useful so face recognition is not going away.
The consequence this article chiefly complains of (US companies ceding the market to Chinese and Russian ones) feels misplaced.
Imagine an argument ‘Our companies are ceding the torture market to [companies in foreign adversary]. We need to get back in the business of making torture implements to compete’. That’s about how this article hits me.
It may hit you that way, but you are burying your head in the sand about an inevitability.
Imagine an argument ‘Our companies are ceding the torture market to [companies in foreign adversary]. We need to get back in the business of making torture implements to compete’. That’s about how this article hits me.
While there are some valid concerns about the potential (perhaps even inevitable) misuses of facial recognition technology, analogizing it to torture is about as dumb as arguments get.
I think you’re missing the point. The blog post insists we have to do it to stay competitive, as if that was a sufficient justification for doing something. But it’s not a justification for doing something, as the torture analogy makes clear – some things simply aren’t worth being competitive at. The analogy is about the nature of the argument made – ‘should do x to be competitive’.
I feel no need for the US to be competitive in something that shouldn’t be done. So first, prove pursuing facial recognition is a worthwhile endeavor if you get it right. And not just by cherry-picking some potential goods, but also frankly confronting all the potential bads.
I find it difficult to assess who to believe here.
We have a lot of technical people who say the bias thing is real, and here we have a law professor who says it’s all hokum.
If it was real in 2017, improved significantly since, but the reputation gained in 2017 has persisted and drowned out corrective voices, I could understand that. But that would be a more measured way of looking at things that attempts to explain why the contrary view at least has some basis, different from saying it’s all hokum.
It’s the view that it’s all hokum that makes me suspicious. The idea that technology that’s potentially troublesome in its own right introduces racial bias is a suspiciously believable meme. But the idea that it’s all hokum made up by the left and libertarians is also a suspiciously believable meme.
Is he a law professor? I thought he was a right-wing authoritarian in private practice after working in Republican administrations.
I could be wrong.
ReaderY, note this from Baker:
It did find some continuing performance differences based on age and gender, but those had declined a lot thanks to improvements in operational factors like illumination.
I read that to mean that to be accurate AI facial recognition requires controlled illumination. Law enforcement may indeed try to use it that way sometimes. But it will also be trying—probably most of the time—to deal with pictures collected in the wild, from security cameras in mixed lighting, etc. It is unclear to me if Baker’s comment should be taken as anything more than trivially helpful.
Ah, didn’t catch that. You’re suggesting this might be somewhat of a Volkswagen situation: you can improve the test results by making the test conditions more ideal.
Reader,
SL is making an excuse for the apparent disagreements out there. Baker is saying that the technology is rapidly improving and that the Russians and especially the Chinese are deeply invested in making the technology highly accurate as a means of controlling their respective populations. Don’t count on error rates staying high, even with poor lighting, for more than a few years.
When you’re taking Lathrop’s interpretation of something seriously you’ve screwed up.
I read that to mean that to be accurate AI facial recognition requires controlled illumination.
Not surprisingly, you read it wrong. The improvements in question relate to taking illumination into account and compensating for it in the processing algorithms, not changing/controlling the lighting conditions.
People love the word bias, without thinking about what it means.
The algorithms are not biased ‘against’ minorities, as they don’t pick the black guy out of the lineup when it is looking for a white guy or an asian guy.
The algorithms might have less precision within populations in which the training dataset has fewer examples, or poorer image quality. Lack of precision would manifest as finding a black guy when it goes looking for a black guy, but at lower confidence that it is the correct black guy being sought. The precision is lower, but there is no bias.
It should be simple enough for an AI to ascribe a confidence probability based on how many or few images it perceives as similar to the one being sought.
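The commenter’s suggestion above—that a system can report a confidence score and decline to match below a threshold—can be sketched in a few lines. This is a hypothetical illustration, not any vendor’s actual pipeline; the embedding vectors, gallery structure, and threshold value are all illustrative assumptions:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match(probe, gallery, threshold=0.8):
    """Find the best-scoring gallery identity for a probe embedding.

    Returns (identity, confidence); identity is None when the best
    score falls below the threshold, i.e. 'no confident match'.
    """
    best_id, best_score = None, -1.0
    for face_id, emb in gallery.items():
        score = cosine_similarity(probe, emb)
        if score > best_score:
            best_id, best_score = face_id, score
    if best_score < threshold:
        return None, best_score
    return best_id, best_score
```

The point of the threshold is exactly the one made above: lower precision within an under-represented population shows up as lower confidence scores, and a well-designed system surfaces that uncertainty rather than asserting a match.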
Bias, when it comes to that which we think of as racial (ethnic, gender etc.) bias is exactly as the engineer or scientist defines bias. The measure being employed is systematically off the mark.
If there’s an accuracy issue with specific races, then that can be solved by adjusting the training data.
It may be “biased” in the same way a stopped clock is “biased” for/against a specific time.
Ben_, how can training-data adjustments compensate for no-data failures in key areas of images analyzed? Ambient lighting delivers conditions which exceed the dynamic range of digital media. That happens commonly. No adjustments are possible which will not deliver either data loss somewhere in the image, or false data in happenstance density sub-ranges which the particular image might not have fully utilized (the latter is better as visual simulation, but potentially troublesome if treated as accurate data). Even creating multiple images at successive EV levels, and stacking them digitally, can only be a means to optimize densities within selected density sub-ranges—typically for visual-simulation effect—at the cost of obscuring data and thus losing detail elsewhere in the image.
Digital images have density gamuts which bound the range of data which can fit in the image. Ambient lighting and varied subject matter present notably larger gamuts than can be recorded in full detail. Something has to give, and what gives is typically presented as loss of contrast, and thus loss of detail, somewhere.
Time-of-flight sensors are one way. There are other ways. It’s a straightforward problem.
If the data doesn’t have the resolution on some scale to produce good answers for some situations, get better data. Lighting can be adjusted and sensors chosen to provide the data needed.
Gasman, you are close to accurate. As professional photographers have known forever, darker skinned people are inherently harder to photograph than lighter skinned people. For many darker skinned people, the range of contrast between lighter and darker tones can be greater. A dark skinned Black person dressed in pure white clothes will test the skills of any lighting expert, and defeat many photographic attempts which rely only on available light.
The greater the tonal range which must be captured in a photograph, the more likely it is that key details will be missing—typically in either the deeper shadows or in the brighter highlights, sometimes in both. All photographic media, whether film or digital, have finite ability to capture contrast accurately. The greater subject matter contrast becomes, the more critical the ability to control illumination becomes. Images taken in haphazard illumination require expert real-time attention to camera settings to get the most out of the process, and may still fall far short of optimal representation of subjects.
Thus, if any traits linked to, for instance, sexual identity, or racial identity, happened to fall systematically in areas subject to worse resolution during sub-standard illumination, it is not unreasonable to suppose judgments with regard to those traits could be not only inaccurate, but systematically biased.
SL,
No argument that present photosensors have a smaller dynamic range than the human eye. That range can be extended greatly by using multiple sensors tuned to different light levels. I don’t think CMOS with embedded quantum dots is available commercially (but then I have not looked in a few years), but it is certainly possible to design HDR surveillance systems for ambient lighting. The constituent technologies are developing so quickly that you should not (and cannot) rule out high-accuracy systems in the near future that fly in the face of memes trotted out by facial recognition opponents.
Nico, surveillance cameras have not previously been treated as opportunities to deploy cutting edge imaging technology. More the opposite.
No doubt advances elsewhere will eventually work their way into lower-end imaging technology. It is worth noting that fairly high end professional cameras, in the $4,000 – $7000 price range, have improved dynamic range performance only slightly during the last 10 years or so. The best of those are still challenged to optimize dynamic range for subjects showing simultaneously both the darkest shadow details, and the lightest highlight details. Methods to compensate typically involve workarounds like multiple digital exposures at different EV levels, and image stacking using density masking.
What can be done at far higher price points I doubt you can find out without a security clearance.
SL,
The matter is really not so much the price of the high-end cameras but the development of the CMOS sensors. Presently, the effort has been to make the pixels smaller, so that a 35 mm full-frame sensor can carry 40 to 50 MP. The low-light performance has improved dramatically: one can have effective ISO values 10x greater than 15 years ago (up to 100K ISO equivalent). Nonetheless, these CMOS designs effectively use only a third of their photosites per color channel for black-and-white imaging. One could add a second sensor, without a color filter, to get good imaging in the near IR.
There is still lots of room for improvement and the giants of the camera industry in Japan are pushing forward.
Marked improvements in the software and its associated processing technology in China will open up a new, large opportunity for marketing to the law enforcement and intelligence markets.
The low light performance has improved dramatically as one can have effective ISO values 10 x greater than 15 years ago (up to 100K ISO equivalent).
Nico, that is not practical reality. It is only partly real, with admittedly useful behind-the-scenes trickery added to deliver both image-quality improvement and marketing advantages.
First, 15 years ago, 1600 was a useful and practical maximum ISO setting for many applications, including low-light applications handled carefully. Photographers measure exposure multiples in f-stops. Compared to ISO 1600, a 100,000 ISO setting is not a 10x increase but closer to a six-stop increase, since each stop represents a doubling of sensitivity.
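The stop arithmetic in that comment can be checked directly: each doubling of ISO is one stop, so the gap between two settings is the base-2 logarithm of their ratio. A quick illustrative calculation (not from the original thread):

```python
import math

def stops(iso_from: float, iso_to: float) -> float:
    """Exposure difference in f-stops between two ISO settings.

    Each doubling of ISO is one stop, so the difference is
    log2 of the ratio of the two settings.
    """
    return math.log2(iso_to / iso_from)

# ISO 1600 -> ISO 100,000 is a 62.5x ratio, but only about 6 stops.
print(round(stops(1600, 100_000), 2))  # ~5.97
```

This is why a headline "10x ISO" number translates into a much more modest gain on the scale photographers actually work in.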
But no commercially available camera available today is regarded by professional photographers as a practical success for almost any kind of work above maybe 24000 ISO. And at that elevated ISO setting, there are few practical applications indeed—too much digital noise, and too little resolved detail preclude use of most such images, except maybe at the smallest sizes online.
The apparent improvements in dynamic range you suppose you see are partly real, but to a larger degree owe their appearance to the introduction of sophisticated electronic noise-masking algorithms operating behind the scenes. Earlier digital cameras did not feature such successful algorithms; later ones do. Those noise-masking algorithms greatly reduce the most obtrusive symptom of low-light dynamic failure—electronic noise in the image—but do so by digitally smearing the noise the sensor still delivers into less obtrusive, smoother contrast transitions. That smoother treatment obscures lost detail.
Thus, today’s high-ISO images made in low light deliver notably less apparent noise than previously seen at the same ISO settings, but do not deliver a lot more resolved detail. Maybe a little more. The overall appearance is thus an improvement, but not an improvement on anything like the scale which even a quadrupling of real ISO sensitivity would imply.
Most professional camera consumers today are not in fact professionals. They rarely or never need the full resolution their camera sensors deliver, because they rarely or never enlarge their images to maximal dimensions. So a smoother-looking image looks to them like better ISO sensitivity, and that they can celebrate right along with the marketing departments which sold them the cameras.
Plus which, the higher resolution now available when there is light enough to support it is a major real advance. Photographers who need to enlarge full-frame images to 40- or 50-inches wide, and who have skills, tripods, and lenses necessary to make images sharp enough to withstand that degree of enlargement, get those benefits. The high-resolution benefits are also helpful for shooters who often need to crop subject matter, such as wildlife photographers.
Speaking for myself, and my newest camera, which is a performance market leader, I am notably happier at 1600 ISO than I was 15 years ago. I am still trying to prove to myself whether 3200 ISO is useful for my typically large prints. I am happier using that setting for smaller work than I would have been with an earlier camera. If I ever have to go back into a high school gymnasium, to take low-light shots of basketball games, with my present advanced camera, I would consider trying 6400 ISO, to see if it worked. That is as high as it will ever go. By my estimate, not quite twice the real low light sensitivity I had 15 years ago.
By the way, systematic comparative image tests—made under closely managed conditions of subject matter, distance, and illumination—can be found online. Those tend to confirm what I tell you. But many may have trouble interpreting what they see. Opposing tradeoffs between digital noise levels, and the amount of visible micro-contrast delivered, among various sample images, made at various settings, and sometimes by differing equipment, can make purely pictorial results hard for less-experienced observers to compare accurately.
The price points will end up being lower than today’s prices, and one does not need a security clearance to know the basis of what is being done. The critical question is whether there is a substantial market segment to be opened.
Lawfare is filled with traitors and enemies of the Republic.
The fact that they are getting away with their coup reveals how dangerous these times have become. The federal class is unaccountable to anyone and they have now gotten away with the ultimate betrayal.
Hopefully they will find justice in this world.
This message brought to you by your local John Birch Society!
So if facial recognition declares that Sri Srinivasan is a Black woman, can Biden nominate Sri for the Supreme Court or not?
Why do clingers have such profound problems with humor?
What’s real is the need to update bathroom sink sensors to see black folks’ hands.
What’s real is the need to update bathroom sink sensors to see black folks’ hands.
You’re claiming that the infrared radiation given off by “black folks’ hands” is somehow different from that given off by other people’s hands?
Baker entirely misses the point. The question is not how well the tool works but whether it’s something they should be doing in the first place. I oppose the creation of a surveillance state. This is not something I want the US government getting better at. Let China and Russia be the ones to develop this skill set.
Don’t worry.
They will, and they’ll sell the technology to the unscrupulous here.
It’s just like Pegasus.
All claims of bias based on disparate impact are false, and we have known this for a long time, thanks to Thomas Sowell (see The ‘Disparate Impact’ Racket among other brilliant writings on the topic).
Late comment, but I came looking for the data you claimed supported this post, since I had, apparently mistakenly, assumed that while I might disagree with you ideologically, you weren’t being deliberately misleading about factual data. I went to correct someone about facial recognition race effects, followed your links down to the data, only to find your article was extremely misleading: you had conflated improvements in the sex differential with the race differential.
NIST IR 8280 (2019) makes it clear racial bias has not been resolved.
https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf
What a disgrace.