No Facial Recognition Tech for Cops
A sloppy panopticon is almost as dangerous as an effective one.
The Los Angeles Police Department (LAPD) banned the use of commercial facial recognition apps in November after BuzzFeed News reported that more than 25 LAPD employees had performed nearly 475 searches using controversial technology developed by the company Clearview AI. It's one of several recent developments reflecting growing public concern about police surveillance via facial recognition.
Clearview AI's app relies on billions of photos scraped from Facebook and other social media platforms. The app, like other facial recognition technologies, pairs that database with machine learning software to teach an algorithm how to match a face to the photos the company has collected.
Clearview is just one player in an expanding market. The Minneapolis Star Tribune reported in December that the Hennepin County Sheriff's Office had conducted 1,000 searches through its Cognitec facial recognition software since 2018.
Concerns about such technologies have led several legislative bodies to delay, restrict, or halt their use by law enforcement agencies. In December, the Massachusetts legislature approved the first state ban on police use of facial recognition tech. During nationwide protests over police abuse last summer, the New York City Council passed the Public Oversight of Surveillance Technology Act, which requires the New York Police Department to disclose all of the surveillance technology it uses on the public.
This technology is often deployed without public knowledge or debate, sometimes before the kinks have been worked out. An independent audit found that London's A.I. technology for scanning surveillance footage labeled suspects accurately only 19 percent of the time.
Studies by researchers at the Massachusetts Institute of Technology and the National Institute of Standards and Technology have found that many of these algorithms have especially high error rates when trying to match nonwhite faces. NIST's December 2019 study of 189 software algorithms found that many falsely identified African-American and Asian faces 10–100 times more often than white faces.
Michigan resident Robert Julian-Borchak Williams, who is black, is the first American known to have been wrongly arrested and charged with a crime because of a faulty face match. In January, Williams spent 30 hours in police custody and had to post a $1,000 bond after an algorithm incorrectly matched him to a shoplifting suspect.
While the potential benefits of reliable facial recognition technology shouldn't be dismissed out of hand, a sloppy panopticon is almost as dangerous as an effective one. Privacy and accuracy concerns demand intense scrutiny from the public and transparency from the government regarding how this emerging technology is used.