Federal Agencies Can't Resist Using and Abusing Facial Recognition
Some agencies don't even know how their employees are using facial recognition.
Already used to identify international travelers as they enter and exit the United States, facial recognition technology is becoming increasingly reliable and may soon become a regular feature of the travel process, say government researchers. But almost simultaneously come reports of haphazard implementation by federal agencies and real-life horror stories based on flaws in the technology. It all adds up to a cautionary tale about the uses and abuses of surveillance systems that can identify and misidentify members of the public as they go about their lives.
"The most accurate face recognition algorithms have demonstrated the capability to confirm airline passenger identities while making very few errors," the National Institute of Standards and Technology (NIST) announced this week. "The results indicate that several of the FR algorithms NIST tested could perform the task using a single scan of a passenger's face with 99.5% accuracy or better — especially if the database contains several images of the passenger."
One-to-one facial recognition systems, such as those that match your image against a database record while you stand still at a checkpoint, have been in place for international travelers for several years, administered by Customs and Border Protection. NIST's study went beyond such checkpoint applications to investigate limited one-to-many uses, such as scanning crowds waiting to board their flights for the faces of those expected to be there, in order to identify anybody who doesn't belong.
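The distinction between those two modes can be sketched as a toy threshold comparison on face "embeddings," the numeric vectors real systems derive from images. Everything here is illustrative: the names, vectors, and threshold are made up, and actual systems use deep neural networks rather than hand-built vectors, but the verify-versus-identify logic is the same shape.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(probe, enrolled, threshold=0.8):
    """One-to-one (checkpoint): does the probe match this one enrolled template?"""
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe, gallery, threshold=0.8):
    """One-to-many (crowd scan): return the best gallery match above
    the threshold, or None if nobody in the gallery is close enough."""
    best_id, best_score = None, threshold
    for person_id, template in gallery.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id

# Hypothetical enrolled passengers and a face captured from a camera.
gallery = {"alice": [1.0, 0.0, 0.2], "bob": [0.0, 1.0, 0.1]}
probe = [0.9, 0.1, 0.2]

print(verify(probe, gallery["alice"]))   # one-to-one check
print(identify(probe, gallery))          # one-to-many search
print(identify([0.0, 0.0, 1.0], gallery))  # a face not in the gallery
```

The threshold is where the policy questions live: set it too loose and a one-to-many search over a large crowd will "find" people who aren't there, which is exactly the failure mode behind wrongful arrests.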
"NIST also looked at whether the technology could be viable elsewhere in the airport, specifically in the security line where perhaps 100 times more people might be expected during a certain time window," the report added.
While the NIST study suggests a remarkable degree of reliability, congressional testimony delivered the same day the study was released illustrates the dangers when the technology goes wrong.
"[A]s I pulled up to my house, a Detroit police squad car was waiting for me," Robert Williams this week told the House Judiciary Committee's Subcommittee on Crime, Terrorism, and Homeland Security about his January 2020 arrest. "When I pulled into the driveway, the squad car swooped in from behind to block my SUV — as if I would make a run for it."
"I eventually got more information after the American Civil Liberties Union of Michigan connected me with a defense attorney," he added. "Someone had stolen watches, and the store owner provided surveillance footage to the Detroit Police Department. They sent a blurry, shadowy image from that footage to the Michigan State Police, who then ran it through their facial-recognition system. That system incorrectly matched a photograph of me pulled from an old driver's license picture with the surveillance image."
Which is to say, near-perfect reliability in laboratory tests sounds great, unless you spend 30 hours in jail because you're falsely tagged as a criminal by facial recognition technology. Real-world applications are messier than tests, and carry consequences in terms of liberty and, potentially, life when authorities respond to bogus alerts.
That's of concern given a recent Government Accountability Office (GAO) report that, of 42 surveyed federal agencies, 20 use facial recognition technology, including a hodgepodge of government-owned and privately contracted systems. Those 20 agencies have few safeguards in place, and most don't even know which facial recognition systems are being accessed, and for what purpose.
"We found that 13 of 14 agencies that reported using non-federal systems do not have complete, up-to-date information on what non-federal systems are used by employees," the report notes. Officials from one of the agencies told the GAO "that its employees did not use non-federal systems; however, after conducting a poll, the agency learned that its employees had used a non-federal system to conduct more than 1,000 facial recognition searches."
The implications and potential abuses are enormous since facial recognition technology is often used in a law-enforcement capacity that can result in the deployment of government force against the public—sometimes involving politically fraught concerns.
"Six agencies reported using the technology on images of the unrest, riots, or protests following the death of George Floyd in May 2020," the GAO reported. "Three agencies reported using it on images of the events at the U.S. Capitol on January 6, 2021. Agencies said the searches used images of suspected criminal activity."
It's frightening enough to have bad identity matches based on blurred photos in the hands of cops looking for shoplifting suspects and busting Robert Williams instead. But imagine them in the hands of federal agents who think they're defending the republic from one flavor or another of insurrectionists and under enormous pressure to apprehend enemies of the state.
The GAO recommends that the 13 agencies it found to have inadequate safeguards for facial recognition "implement a mechanism to track what non-federal systems are used by employees, and assess the risks of using these systems." Under the circumstances, that sounds like a barely adequate starting point for getting a handle on what the federal government is already doing with the privacy-threatening technology. Reining in that use to minimize abuses would be the next step.
Going further, the Facial Recognition and Biometric Technology Moratorium Act of 2021, sponsored by Sen. Ed Markey (D-Mass.) and four co-sponsors, would "prohibit biometric surveillance by the Federal Government without explicit statutory authorization." It would also withhold some federal money from states and localities that use the technology. Such legislation has languished in the past, but this week's congressional hearing, with concerns raised by members of both parties, demonstrates wider support for efforts to control government implementation of facial recognition and related biometric technologies.
But governments really like surveillance capabilities, and facial recognition is becoming more reliable and easier to deploy. It's difficult to imagine government agencies long denying themselves access to the ability to scan crowds and identify, however reliably, participants. If you're looking for protection against the spying eyes of the state, you're best advised to rely less on law than on sunglasses and a hat.