Federal Agencies Can't Resist Using and Abusing Facial Recognition
Some agencies don't even know how their employees are using facial recognition.

Already used to identify international travelers as they enter and exit the United States, facial recognition technology is becoming increasingly reliable and may soon become a regular feature of the travel process, say government researchers. But almost simultaneously come reports of haphazard implementation by federal agencies and real-life horror stories based on flaws in the technology. It all adds up to a cautionary tale about the uses and abuses of surveillance systems that can identify and misidentify members of the public as they go about their lives.
"The most accurate face recognition algorithms have demonstrated the capability to confirm airline passenger identities while making very few errors," the National Institute of Standards and Technology (NIST) announced this week. "The results indicate that several of the FR algorithms NIST tested could perform the task using a single scan of a passenger's face with 99.5% accuracy or better — especially if the database contains several images of the passenger."
One-to-one facial recognition systems, which match an image captured while you stand still against a database record, have been in place for international travelers for several years, administered by Customs and Border Protection. NIST's study went beyond such checkpoint applications to investigate limited one-to-many uses, such as scanning crowds waiting to board a flight against the faces of those expected to be there, to identify anybody who doesn't belong.
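The distinction matters because error rates scale differently in the two modes. Here is a minimal sketch of the difference, assuming a generic embedding-distance matcher; the function names and threshold are illustrative, not drawn from CBP's or NIST's actual systems:

```python
import numpy as np

THRESHOLD = 0.6  # illustrative distance cutoff, not any real system's setting

def verify_one_to_one(probe, enrolled):
    """1:1 verification: is this traveler who they claim to be?
    One comparison, one chance for a false match."""
    return np.linalg.norm(probe - enrolled) < THRESHOLD

def identify_one_to_many(probe, gallery):
    """1:N identification: which enrolled person, if any, is this?
    `gallery` maps person IDs to enrolled embeddings; every entry
    is an additional chance for a false match."""
    best_id, best_dist = None, float("inf")
    for person_id, embedding in gallery.items():
        dist = np.linalg.norm(probe - embedding)
        if dist < best_dist:
            best_id, best_dist = person_id, dist
    return best_id if best_dist < THRESHOLD else None
```

That is why NIST treated the boarding gate, where the gallery is only the few hundred passengers expected on a flight, as a separate and easier problem from larger-scale searches.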
"NIST also looked at whether the technology could be viable elsewhere in the airport, specifically in the security line where perhaps 100 times more people might be expected during a certain time window," the report added.
While the NIST study suggests a remarkable degree of reliability, congressional testimony delivered the same day the study was released illustrates the dangers when the technology goes wrong.
"[A]s I pulled up to my house, a Detroit police squad car was waiting for me," Robert Williams this week told the House Judiciary Committee's Subcommittee on Crime, Terrorism, and Homeland Security about his January 2020 arrest. "When I pulled into the driveway, the squad car swooped in from behind to block my SUV — as if I would make a run for it."
"I eventually got more information after the American Civil Liberties Union of Michigan connected me with a defense attorney," he added. "Someone had stolen watches, and the store owner provided surveillance footage to the Detroit Police Department. They sent a blurry, shadowy image from that footage to the Michigan State Police, who then ran it through their facial-recognition system. That system incorrectly matched a photograph of me pulled from an old driver's license picture with the surveillance image."
Which is to say, near-perfect reliability in laboratory tests sounds great, unless you spend 30 hours in jail because you're falsely tagged as a criminal by facial recognition technology. Real-world applications are messier than tests, and carry consequences in terms of liberty and, potentially, life when authorities respond to bogus alerts.
That's of concern given a recent Government Accountability Office (GAO) report that, of 42 surveyed federal agencies, 20 use facial recognition technology, including a hodgepodge of government-owned and privately contracted systems. Those 20 agencies have few safeguards in place, and most don't even know which facial recognition systems their employees access, or for what purpose.
"We found that 13 of 14 agencies that reported using non-federal systems do not have complete, up-to-date information on what non-federal systems are used by employees," the report notes. Officials from one of the agencies told the GAO "that its employees did not use non-federal systems; however, after conducting a poll, the agency learned that its employees had used a non-federal system to conduct more than 1,000 facial recognition searches."
The implications and potential abuses are enormous since facial recognition technology is often used in a law-enforcement capacity that can result in the deployment of government force against the public—sometimes involving politically fraught concerns.
"Six agencies reported using the technology on images of the unrest, riots, or protests following the death of George Floyd in May 2020," the GAO reported. "Three agencies reported using it on images of the events at the U.S. Capitol on January 6, 2021. Agencies said the searches used images of suspected criminal activity."
It's frightening enough to have bad identity matches based on blurred photos in the hands of cops looking for shoplifting suspects and busting Robert Williams instead. But imagine them in the hands of federal agents who think they're defending the republic from one flavor or another of insurrectionists and under enormous pressure to apprehend enemies of the state.
The GAO recommends that the 13 agencies it found to have inadequate safeguards for facial recognition "implement a mechanism to track what non-federal systems are used by employees, and assess the risks of using these systems." Under the circumstances, that sounds like a barely adequate starting point for getting a handle on how the federal government is already using this privacy-threatening technology. Reining in that use to minimize abuses would be the next step.
Going further, the Facial Recognition and Biometric Technology Moratorium Act of 2021, sponsored by Sen. Ed Markey (D-Mass.) and four co-sponsors, would "prohibit biometric surveillance by the Federal Government without explicit statutory authorization." It would also withhold some federal money from states and localities that use the technology. Such legislation has languished in the past, but this week's congressional hearing, with concerns raised by members of both parties, demonstrates wider support for efforts to control government implementation of facial recognition and related biometric technologies.
But governments really like surveillance capabilities, and facial recognition is becoming more reliable and easier to deploy. It's difficult to imagine government agencies long denying themselves the ability to scan crowds and identify participants, with whatever degree of reliability. If you're looking for protection against the spying eyes of the state, you're best advised to rely less on law than on sunglasses and a hat.
“George Orwell, your time has come and gone.” - Studs Terkel
Just keep hammering on the inequity of false alarms against BIPOC.
Oh, and wear a burqa.
I'm sorry, but how is it a privacy issue when you're out in public, and how is a bad photo worse than a sketch artist's rendering from a description? If you don't want LE to be able to do their job, then state that (and accept the resulting increase in crime); don't make up vague, unspecified abuses surrounded by hyperbolic language.
So you have no problem with someone stalking you as long as they do it from the sidewalk? You're okay with the idea that everything you do outside your own home (or even just visible from the outside) should be a matter of permanent and continuous public record? If you really believe that ... Well, maybe you do. But that would put you way outside the mainstream opinion of what "privacy" is.
By the way, the big difference between a bad photo match and a sketch artist's rendering is that we (and the cops) have an intuitive sense of the fallibility of the sketch. We (and especially cops and prosecutors) suffer from a cognitive bias that leads to unjustified confidence in computer-generated results. This cognitive bias has been documented in many different settings. Consider the situation above. Do you really believe the cops would have reacted the same way to an artist's rendering? Or would they have taken some extra steps to confirm the match before going in with guns drawn?
I do want law enforcement to find criminals. But I also want them to respect the civil rights of law-abiding citizens while doing it. Yes, that will make their jobs harder. But the increase in crime will be more than offset by the decrease in liberty infringements by the criminals-in-uniform.
+
So you're imagining personal surveillance drones for every individual? Otherwise "stalking me" and every moment in public being part of some "permanent record" is that same hyperbolic mischaracterization being deployed.
You do the same with the arrest when you declare the cops went in "guns drawn," which isn't in the article, apparently to make a point. Was it overaggressive? Perhaps, but overall you appear to be demanding perfection or nothing when there is plenty of middle ground that would be better all around.
You apparently haven't been paying attention. No need for personal surveillance drones when I can compile the same footage by jumping from camera to camera as you walk down the street. No, it's not hyperbolic - it's established technology that's been disclosed in open court. It's not even just government-run cameras, either. This scenario has been demonstrated in residential communities using police backdoors to Amazon Ring cameras.
Nice switch from facial recognition to surveillance in general. Again, there are issues, but your answer is to throw it all out. Want to protect your business from theft? Sorry, LE might use that footage to protect someone else's property, so you can't have it.
You want to argue LE has too much easy access to surveillance through backdoors or loose/non-existent warrants? Then make that case instead. But that's at odds with eliminating facial recognition, which can use less surveillance to narrow the suspect pool and make an apprehension.
There is a trade-off here that neither side wants to discuss or act on honestly.
I realize this is willful on your part, but you're the one who brought up personal drones, and that's exactly the point. A place like London, with cameras everywhere, can go from "cops can see people" to "every person can have their precise movements tracked all the time," just like having a personal surveillance drone, because of these advances in technology. It's WORSE than the drone, because it's pervasive and non-obvious.
No expectation of privacy in public was a good thought when crowds were anonymous unless someone was doing something obviously abnormal. Sure, someone could take a photo of the crowd, and if you're in it, tough titties. So you had to really think about whether you wanted to use it.
Being able to track anyone, anywhere, anytime, just on the off chance that they might one day commit a crime, almost invites abuse. You know the government isn't staffed only by good actors acting in good faith, and even people who THINK they're doing it for "good" will do bad things. And you are breaking laws all the time, so it isn't inconceivable that a bad actor could target individuals with this sort of data and the technically true fact of their criminality, for instance.
That's the entire American justice system, at least as it was originally founded: better to let a guilty man off than convict an innocent one. In this case, make the police do the work, or subject all of us, everywhere, to constant and inescapable surveillance to make their jobs slightly easier.
Don't even need a police backdoor. Just look at Next Door. Many people will set up cameras to scan public property within eyesight of their property. They will 'come to agreement' themselves on which person on those cameras is a problem. They will send the info to the police - and implicitly will also signal that their local jury pool will find the one guilty and will protect the police from overreach.
It's not about Next Door either. It's just made it really obvious that a major function of legal protections is to protect us from our own neighbors. Spontaneous order is NOT an automatic protection from mob rule. The mob can quite easily self-organize.
A certain level of privacy certainly does exist when one is in public. This is why we are not required to carry federally issued identification cards while walking on the sidewalk, why a search warrant is still required for anything more than a pat-down, and why, beyond identifying oneself, one cannot be required to speak to the police. It is also why San Francisco, Somerville, Boston, Portland, and other cities have outlawed the use of facial recognition, and New York has outlawed its use in schools.
Law Enforcement's job is to work within the rules set up by civil government.
So this is reducing crime, which had always been rising before this?
The question is whether law enforcement is acting responsibly. Saying “we arrest you because the facial recognition algorithm found a match” is unreasonable and hence needs to be stopped. To do so effectively and prevent abuse, it may be necessary to ban facial recognition altogether, even if that results in more criminals getting away.
"The most accurate face recognition algorithms have demonstrated the capability to confirm airline passenger identities while making very few errors,"
As the NIST article itself points out, it's the algorithm that has this capability, not the system itself. They didn't address the question of the cameras and the environment, although they did point out that this makes all the difference in the world. Using crappy cameras to scan crowds is not at all the same as using high-quality cameras and asking the subject to look directly into the camera. Having the "capability" to do something is not at all the same thing as doing that something.
And I can't help but notice that the accuracy they mention is based on the number of false negatives (people they fail to identify whom they should have been able to identify); they don't seem to mention false positives (the number of people they "identify" as somebody else). To judge the accuracy of something, you have to know both the false negatives and the false positives; otherwise you fall into the fallacy of the drug dog that never fails to alert at the presence of drugs, by the simple expedient of never failing to alert whether drugs are present or not.
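A toy calculation (all numbers hypothetical) shows why a false-negative rate alone is meaningless:

```python
# Hypothetical population: 10 drug carriers among 1,000 subjects.
carriers, clean = 10, 990

# The dog that always alerts has a perfect record on false negatives...
missed_carriers = 0        # it never fails to flag a carrier
false_alerts = clean       # ...because it flags all 990 clean subjects too

print(f"True-positive rate:  {(carriers - missed_carriers) / carriers:.0%}")  # 100%
print(f"False-positive rate: {false_alerts / clean:.0%}")                     # 100%
```

Judged on false negatives alone, the always-alerting dog looks perfect; judged on false positives, it's useless.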
But governments really like surveillance capabilities
Did you really need to write any more than that?
I didn't even hit reply...
Squirrels!
Looks like govguns got the tech.
Isn’t it about time we do too?
We need to demand the right to record everything we experience or witness. It has to be an inalienable right.
All corruption, regardless of the source, requires secrecy, censorship and lies. It also needs to interact with us to coerce us.
Our recordings will expose and vanquish corruption.
Imagine the peace of mind knowing the corrupt fear interacting with us.
The way they use the technology at the international border is pretty amazing. I have Global Entry to make international travel easier. I recently took my first international trip since the virus hit us. Upon my return to the border, I went to a Global Entry Kiosk and it took a photo of my face and printed out a slip that I gave to the immigration officer. I did not have to type in my name, or scan my passport, it just took my photo and knew who I was (I imagine comparing it to my passport photograph). Since I have had issues with the finicky scanners that the kiosks use to scan the passport, this was a major upgrade from my perspective. It could be that the universe of matches was small in that they knew I was coming as the airline would have given them a passenger list, or maybe they don’t need to limit it for the software to work.
You just hit on the big difference. Yes, it works because there is a very small population of expected hits that they are attempting to match against. When you try to apply the same technology to matches against hundreds of thousands of prospects, the math works the other way. It's called the Prosecutor's Fallacy.
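A back-of-the-envelope version of that math (all rates invented for illustration, not NIST's measurements) shows how quickly even an excellent matcher degrades as the gallery grows:

```python
# Assume a matcher with a 99.5% true-positive rate and a false-positive
# rate of just 0.1% per comparison (both figures are hypothetical).
TPR, FPR = 0.995, 0.001

for gallery_size in (100, 10_000, 1_000_000):
    # Expected number of innocent people flagged in one 1:N search:
    false_hits = FPR * (gallery_size - 1)
    # If the true suspect is enrolled, the chance a given hit is correct:
    precision = TPR / (TPR + false_hits)
    print(f"gallery={gallery_size:>9,}  "
          f"expected false hits={false_hits:7.1f}  "
          f"P(hit is right)={precision:6.1%}")
```

Against a few hundred expected passengers, a hit is probably right; against a million-face database, nearly every hit is an innocent stranger. That's the Prosecutor's Fallacy in numbers.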
99.5%, huh? Google "if 99% were good enough"....
"We found that 13 of 14 agencies that reported using non-federal systems do not have complete, up-to-date information on what non-federal systems are used by employees
What is a 'non-federal system'? State govt? Private sector?
I find facial recognition to be great fun if you want to mess with people knocking on your door to sell you something.
However I do not relish the idea of my right to travel being determined by a platform produced by the lowest bidder that's being operated by bureaucrats.
Facial recognition software says “this might be a match”. It’s up to humans to determine whether it is. “The software identified you” is never a justification for government or police action.