Facial Recognition Programs Are Getting Better at Recognizing Masked Faces
Time to add a hat and sunglasses!

Pandemic-wear face masks may not be quite as effective a barrier to surveillance as we hoped. While fabric face coverings certainly pose challenges to facial-recognition technology, U.S. government researchers say that improving software specifically intended to account for obscured features has reduced the error rate and made it easier to identify people whose faces are partially concealed.
So, you'll need to add sunglasses and a hat to your privacy-enhancing ensemble to confidently thwart the best efforts of the snoops.
"A new study of face recognition technology created after the onset of the COVID-19 pandemic shows that some software developers have made demonstrable progress at recognizing masked faces," the U.S. National Institute of Standards and Technology (NIST) announced this week.
The new report follows up on earlier research into pre-pandemic technology that found face masks to be effective at thwarting facial recognition efforts.
"How well do face recognition algorithms identify people wearing masks?" NIST asked back in July. "The answer, according to a preliminary study by the National Institute of Standards and Technology (NIST), is with great difficulty."
Since the earlier investigation, though, a new generation of facial recognition software has become available. The new technology is designed to be implemented in a world in which people's lower faces are routinely covered by fabric masks, obscuring many of the features that software would traditionally compare against stored images in order to identify subjects. The new systems "are able to recognize masked people by getting enough key points from just the eyes and nose," the South China Morning Post reported earlier this year.
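To make that idea concrete, here is a minimal sketch, not NIST's or any vendor's actual code, of what "matching on just the eyes and nose" can mean in practice: restrict the comparison to landmarks a mask leaves exposed. The landmark names and the use of raw coordinates are illustrative assumptions; real systems compare learned embeddings.

```python
# Illustrative sketch only -- not NIST's or any vendor's implementation.
# Assumes facial landmarks were already detected upstream (e.g., by dlib
# or MediaPipe); the landmark names here are made up for the example.
import numpy as np

UPPER_FACE = ["left_eye", "right_eye", "nose_bridge"]  # visible above a mask
LOWER_FACE = ["mouth", "chin", "jawline"]              # usually covered

def upper_face_vector(landmarks):
    """Flatten only the key points a fabric mask leaves exposed."""
    points = [landmarks[name] for name in UPPER_FACE if name in landmarks]
    return np.asarray(points, dtype=float).ravel()

def masked_face_distance(probe, enrolled):
    """Smaller is more similar. Production systems compare learned embeddings,
    not raw landmark coordinates; this only shows the masked-face idea."""
    return float(np.linalg.norm(upper_face_vector(probe) - upper_face_vector(enrolled)))
```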
They appear to do so pretty effectively, too.
"Some newer algorithms from developers performed significantly better than their predecessors. In some cases, error rates decreased by as much as a factor of 10 between their pre- and post-COVID algorithms," according to Mei Ngan, one of the authors of the latest NIST study. "In the best cases, software algorithms are making errors between 2.4 and 5% of the time on masked faces, comparable to where the technology was in 2017 on nonmasked photos."
As with the earlier study, NIST researchers applied digitally generated face masks over photographs in order to assess the new technology. They performed one-to-one matching, in which images are compared to stored photos of the same people—as is done at checkpoints or to unlock smartphones. A test of far more challenging one-to-many matching of the sort conducted when surveillance images of crowds are compared against large databases is planned for the future.
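The difference between the two matching modes is easy to sketch. The embeddings, the 0.6 threshold, and the gallery below are stand-ins invented for illustration, not NIST's test protocol.

```python
# Toy illustration of 1:1 verification vs. 1:N identification.
# Embeddings, the threshold, and the gallery are invented for the example.
import numpy as np

THRESHOLD = 0.6  # hypothetical decision threshold

def cosine(a, b):
    """Similarity between two face embeddings (higher = more alike)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe, enrolled):
    """One-to-one: does this face match the stored photo of the claimed person?
    (the checkpoint / phone-unlock case NIST tested)"""
    return cosine(probe, enrolled) >= THRESHOLD

def identify(probe, gallery):
    """One-to-many: search a whole database for the best match above threshold
    (the harder surveillance case NIST plans to test later)."""
    best_id, best_score = None, THRESHOLD
    for person_id, enrolled in gallery.items():
        score = cosine(probe, enrolled)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id
```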
While researchers found that facial-recognition technology is improving in its ability to identify people through face masks, it still has limitations.
"When both the new image and the stored image are of masked faces, error rates run higher," they noted. "With a couple of notable exceptions, when the face was occluded in both photos, false match rates ran 10 to 100 times higher than if the original saved image showed an uncovered face." That's probably because fewer clear facial features yield fewer points of comparison.
They also discovered that, for uncertain reasons, the color of masks matters; "red and black masks tended to yield higher error rates than the other colors did."
Unsurprisingly, larger masks that obscure more facial features result in higher error rates than smaller masks that cover only the nose and mouth. "Continuing a trend from the July 2020 report, round mask shapes—which cover only the mouth and nose—generated fewer errors than wide ones that stretch across the cheeks, and those covering the nose generated more errors than those that did not." That makes sense for technology that is designed to accommodate masks by comparing details of the upper face.
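NIST's approach of digitally overlaying masks of different shapes, coverages, and colors is straightforward to picture. The sketch below, which assumes the landmark coordinates are already known and uses made-up landmark names, paints a crude mask polygon onto a photo; it illustrates the general technique, not the institute's actual tooling.

```python
# Crude sketch of overlaying a synthetic mask on a photo. Landmark names
# ("nose_bridge", "chin", ...) and the polygon shape are assumptions; this
# is not NIST's mask-generation tool.
from PIL import Image, ImageDraw

def add_synthetic_mask(img, landmarks, color=(40, 40, 40), cover_nose=True):
    """Paint a filled polygon over the lower face, varying color and coverage."""
    out = img.copy()
    draw = ImageDraw.Draw(out)
    # Start the mask higher on the face when it should cover the nose.
    top_y = landmarks["nose_bridge"][1] if cover_nose else landmarks["nose_tip"][1]
    left, right, chin = landmarks["left_jaw"], landmarks["right_jaw"], landmarks["chin"]
    polygon = [(left[0], top_y), (right[0], top_y), right, chin, left]
    draw.polygon(polygon, fill=color)
    return out
```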
"Prescription-type clear-lensed glasses will generally not be a problem for facial detection and recognition software. The key eye details are still visible," Cole Calistra, CTO of U.S. facial-recognition company Kairos, pointed out in 2015. "The best algorithms in the world still have difficulties, however, when people are wearing dark or shiny sunglasses, that effectively hides the pupils of the eyes. This problem is particularly compounded when the glasses obscure and hide the distance between the eyes, which is a key part of many algorithms."
Technology has advanced since then, but cameras still have to be able to see at least part of a face in order to match it against a stored image.
In China, where surveillance technology is eagerly deployed by the government, Hanwang Technology Ltd. moved early to tweak its software to accommodate faces masked against COVID-19. "But the system struggles to identify people with both a mask and sunglasses," Reuters reported in March of this year.
Throw in a hat with your sunglasses and mask and you have a combination that leaves even the best facial-recognition algorithms relatively little data with which to work.
Even more effective means are available for frustrating the snoops. But projectors that superimpose images of other people's faces over your own and makeup that distorts identifiable features are for the very privacy-minded and unlikely to see widespread adoption.
More practical are Reflectacles' glasses that enhance the already privacy-protecting qualities of shades with features that dazzle surveillance cameras. The latest versions are rather stylish and don't make wearers look like Elton John.
Despite advancing technology, face masks continue to have some anti-surveillance properties beyond their health applications. But for best results they need to be complemented by hats and sunglasses that give facial-recognition software little with which to work.
U.S. government researchers say that improving software specifically intended to account for obscured features has reduced the error rate and made it easier to identify people whose faces are partially concealed.
Any explanation for why exactly government researchers are looking into this matter? Are they also looking into the matter of how effectively implantable tracking devices work and whether or not people will accept them as long as they're told they're part of a vaccine against a pandemic? Are they also looking into the question of how many Jews you can fit into a Volkswagen?
Additionally:
Exactly what was the error rate prior to the claimed improvements, and by how much was it reduced?
As is, the statement is entirely devoid of any real meaning.
If the error rate in identifying partially concealed faces was 98% and they reduced the error rate to 96% that's not exactly cause to be concerned.
We will know her by the phone she carries!
"They also discovered that, for uncertain reasons, the color of masks matters; "red and black masks tended to yield higher error rates than the other colors did."
----Tuccille
Google just fired one of their most renowned AI experts, who famously criticized their AI for being so inaccurate with darker skin tones. It could be for the same reason AI has trouble with darker masks.
"Timnit Gebru, a well-known critic of AI, says she was told not to publish research; Google says the research didn’t meet standards
----WSJ
https://www.wsj.com/articles/prominent-critic-of-bias-in-ai-says-google-fired-her-after-research-dispute-impolitic-email-11607037474?
The progressive hypocrisy here is mind-blowing.
From AGW to vaccines and disputed election results, the left is always criticizing people on the right for refusing to believe the experts, but when Google is confronted by their own staff for the apparent racism in their AI, what's the first thing they do?
Suddenly, they're all about denialism. They question the validity of the science.
The progressive hypocrisy
here in general is mind-blowing. "My body, my choice" applying only to abortion springs to mind, as does the recent spate of politicians flouting the pandemic "safety rules".
The data set is wrong because of racism! Too many wipipo are reading and writing on the Internet compared to BIPOCs.
You can't have it both ways, Ken. Either Google is too woke and discriminates against white people, or Google is ignoring apparent racist bias in their algorithms. Which is it?
You can’t have it both ways, Ken. Either Google is too woke and discriminates against white people, or Google is ignoring apparent racist bias in their algorithms.
We aren't talking fundamental physics here. Why exactly can't both be true?
It doesn't make any sense.
He's just throwing shit on the wall to see if something sticks.
No, he's right. No one in history has ever demanded a behavior from someone else and then not met those standards themselves. He must pick only one to criticize and thus address only the purported philosophy, because it's impossible that the behavior is not a direct result of that philosophy.
/s <- for the dense people who might need it
Actually, it's entirely possible for Google to be both too woke and ignoring bias in their own algorithms. I'd say it could be due to incompetence. It doesn't need to be on purpose. For both things to be true, it certainly doesn't require them to intentionally write biased algorithms--It may just be that darker features are simply harder for the algorithm to read from the media they're using. Like the report said, they have a harder time reading faces wearing black masks, too, rather than lighter colors.
The biased outcome of those AI algorithms is what it is regardless of whether their intentions were otherwise. It is entirely possible to both be a social justice warrior and write an algorithm that proves to be racially biased in testing. That's part of what quality control is about. You don't have to intentionally write code that gives you bad results. It could be that your software doesn't perform within acceptable parameters regardless of whether you were really trying.
And in the meantime, when Google is being charged with bias, they start attacking the science--on what appears to be an eminently falsifiable test. Does the algorithm demonstrate a higher rate of false positives with darker skin tones or doesn't it? Google sounds like climate change denialists, anti-vaxers, and people questioning the results of the election here. And this lady appears to have been fired for talking out of school rather than for being wrong.
"It may just be that darker features are simply harder for the algorithm to read from the media they’re using."
The problem may not even be in the AI itself.
As I understand it, the way facial recognition works is that it builds a 3D wireframe map of the face from the 2D image data, and the facial recognition algorithms actually run on the wireframe data.
Dark skin tones will have less contrast with shadows, making the step of building a 3D wireframe map of the face from a 2D image significantly harder.
For the same reason that Volvo produces an ad excoriating the wage gap, and then when a wage gap is pointed out within their own corporation, they become vociferous libertarians, attributing the gap to choices and a multi-variance of datapoints such as hours worked, time off to have children etc, which perfectly explains the statistical anomaly.
And maybe they're right about that!
Most of the time, people don't care whether they're right or wrong.
They pick a side, and mostly it's just because they're trying to be fashionable.
They probably are, which means they should shut the fuck up about their wage-gap virtue signaling.
FFS “We all know why Joe Biden is rushing to falsely pose as the winner, and why his media allies are trying so hard to help him: they don’t want the truth to be exposed” ~ toddler trump BYE DON
Throw in a hat with your sunglasses and mask and you have a combination that leaves even the best facial-recognition algorithms relatively little data with which to work.
Or just convert to Islam, if you catch my drift.
Or Sikhism. Then you can carry a knife everywhere.
How would carrying a knife disrupt facial recognition ?
You wear it on your face. Do I have to think of everything?
Great, and since you promoted the far-left Biteme and his band of merry sociopaths to lead us, I am sure there will be no civil rights violations reported by the media.
Good to know that, in this time of massive deficits and costs relating to the pandemic, the government can still find more ways to waste money. Have they done research yet on how ventilators affect facial recognition software?
i'd rather be recognized.
I think your daisy dukes will ensure that is the case.
today I wore pants to the office and changed into shorts when I got here.
As long as your user can recreate you on a new system after being derezzed, it would seem that it could pay off pretty well.
request access to clu program.
How does it do with an entire crowd wearing full face masks of Biden?
None of these countermeasures matter. If authority needs you to have been somewhere at a certain time, keystrokes are made and it’s a done deal. Same with DNA. What are you going to do, bring a forensic/computer expert with you to court?
It’s coming sooner than you think.
"In the best cases, software algorithms are making errors between 2.4 and 5% of the time on masked faces, comparable to where the technology was in 2017 on nonmasked photos."
I don't believe a word of this. Not one.
Operative phrase: "In the best cases." In other words, in the cases least likely to occur in the real world.
'Better' than what baseline?
Even assuming a computer can tell with 100% accuracy who you are by your face, which it absolutely cannot do, that wouldn't transfer over to a formless mask.
It's carte blanche to pull anyone out of line and harass them 'because they got a mask match', devoid of any whiff of science.
Hey, photo ID does the same thing. Get one and identity becomes 99% pre-approved, no matter who the holder may actually be. And once it's returned to its holder, there is no paper trail much of the time.
How about people who want to be identified by surveillance as ... the aforementioned Elton John? How well can anyone exploit the system by developing groupie body parts and features? Can you use designer blurs with genius makeup to distort your face in the infrared range?
Again, it's Sunday morning and I have to wonder why the trolls are here rather than in church?
Are they afraid of the COVID? Or will they finally admit they've replaced Christ with trump?