Facial Recognition

Wrongful Arrest in Detroit Demonstrates Why Police Use of Facial Surveillance Technology Must Be Banned

The good news is that Boston has just barred law enforcement from using facial recognition technology.


Detroit police wrongfully arrested Robert Williams and jailed him for 30 hours, largely on the basis of a facial recognition match to a grainy video of a jewelry store robber. When the police got around to questioning Williams, 18 hours after his arrest, they showed him two photos of the robber and asked if they were pictures of him. At that point, The New York Times reports, Williams held one of the images beside his face and said, "You think all black men look alike?" The officers recognized immediately that there was no resemblance to the suspect in the video. All the same, Williams was not released until hours later and only after paying a $1,000 personal bond.

After Williams held up the photo, one of the officers made a crucial mistake: He said "the computer must have gotten it wrong." Police and prosecutors generally do not want to reveal that they have been using facial recognition technology to identify and arrest suspects. They especially don't want defendants and their lawyers to question the accuracy of the technology in court later. They know that many studies have shown serious problems with the technology's accuracy, particularly with respect to identifying individuals who belong to racial or ethnic minorities.

In Williams' case, the Detroit police sent an image from the jewelry shop's surveillance camera to the Michigan State Police. That agency, using DataWorks Plus facial recognition technology, ran the image against the state's driver's license database, which returned Williams' photo as a possible match. Williams' photo was sent to the Detroit police in an "investigative lead report," which stated that it was "not a positive identification" and "is not probable cause for arrest." Then, amazingly, the Detroit police showed a six-pack photo lineup containing Williams' driver's license photo to the shop's security consultant. That consultant wasn't there for the robbery; she'd only seen the same blurry surveillance video as the police. She nevertheless identified Williams as the culprit, and Williams was arrested.
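
To make concrete why such a search can only ever generate a lead, here is a minimal sketch of a one-to-many face search. Everything in it is a stand-in: it assumes a hypothetical embedding model has already reduced each license photo to a numeric vector, and it has nothing to do with DataWorks Plus' proprietary system. The point is structural: the search ranks whatever happens to be in the database and returns the closest entries, whether or not the actual culprit is enrolled at all.

```python
import numpy as np

# Minimal sketch of a one-to-many face search. All names and numbers
# are hypothetical stand-ins, not DataWorks Plus' actual system.
rng = np.random.default_rng(seed=42)

NUM_LICENSES = 100_000  # stand-in for a statewide driver's license database
EMBED_DIM = 128         # typical length for a face-embedding vector

# Pretend a face-embedding model already converted every license photo
# into a unit-length vector.
database = rng.normal(size=(NUM_LICENSES, EMBED_DIM))
database /= np.linalg.norm(database, axis=1, keepdims=True)

# Embedding of the grainy surveillance frame (the "probe" image).
probe = rng.normal(size=EMBED_DIM)
probe /= np.linalg.norm(probe)

# Cosine similarity of the probe against every enrolled photo.
scores = database @ probe

# The search unconditionally returns its top-ranked candidates,
# even if the real robber holds no license in this state.
top5 = np.argsort(scores)[::-1][:5]
for rank, idx in enumerate(top5, start=1):
    print(f"rank {rank}: license #{idx}, similarity {scores[idx]:.3f}")
```

Note that the top-ranked face is simply the nearest neighbor in the database; its similarity score says nothing about whether the match is right. That is why the Michigan State Police report itself labeled the result "not a positive identification" and not "probable cause for arrest."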

Two weeks after his arrest, the prosecutor moved to dismiss the case—but "without prejudice," which means that Williams could later be charged with the crime.

Williams eventually got in contact with the Michigan branch of the American Civil Liberties Union (ACLU), which is now lodging a complaint on his behalf against the police department. The ACLU obtained a court order requiring the prosecutor's office to turn over Williams' complete file, including the warrant request and any body camera or dashboard camera footage of his arrest. As of this week, the prosecutors are still defying that order, either refusing to provide the required documents or claiming that they cannot be located. The ACLU is asking for a dismissal of the case with prejudice, an apology, and the removal of Williams' information from Detroit's criminal databases.

After The New York Times' story about Williams appeared, the prosecutor's office issued a statement noting that "this case should not have been issued based on the [Detroit Police Department] investigation, and for that we apologize. Thankfully, it was dismissed on our office's own motion. This does not in any way make up for the hours that Mr. Williams spent in jail." While Williams' case has been expunged from the record, the prosecutor says that a legal technicality prevents it from being dismissed with prejudice.

The Williams case is unlikely to be the only false arrest based on faulty facial recognition technology. "The sheer scope of police face recognition use in this country means that others have almost certainly been—and will continue to be—misidentified, if not arrested and charged for crimes they didn't commit," writes Clare Garvie, a senior associate with Georgetown Law's Center on Privacy & Technology.
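
Garvie's warning is, at bottom, base-rate arithmetic. The numbers in the sketch below are purely illustrative assumptions (none come from the article), though the disparity factor reflects a real finding: the National Institute of Standards and Technology's 2019 demographic testing found false-positive rates 10 to 100 times higher for some demographic groups across many algorithms.

```python
# Back-of-the-envelope illustration of Garvie's scale argument.
# Every figure is an assumption chosen for illustration, not a
# measured value from any agency or vendor.
searches_per_year = 50_000    # hypothetical statewide search volume
false_match_rate = 1 / 1_000  # hypothetical: one wrong "hit" per 1,000 searches
disparity_factor = 10         # NIST's 2019 testing found false-positive
                              # rates 10x-100x higher for some groups

overall = searches_per_year * false_match_rate
worst_group = overall * disparity_factor

print(f"Expected wrong-person leads per year (baseline rate): {overall:.0f}")
print(f"Same volume at a 10x higher error rate: {worst_group:.0f}")
```

Even with generously low assumptions, it takes only a flawed corroboration step, like the photo lineup shown to the security consultant in Williams' case, for some of those leads to become arrests.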

The good news is that more and more cities around the country are stepping in to stop police use of facial recognition technology. Yesterday, Boston's city council adopted an ordinance making it unlawful for any Boston official to "obtain, retain, possess, access, or use any face surveillance system or information derived from a face surveillance system." The ordinance also bars Boston from entering into contracts with third parties for facial recognition services. And while Boston police may not request facial surveillance information from outside agencies like the FBI, they may still use that information if an outside agency provides it.

Earlier this month, tech giants Amazon and Microsoft declared temporary moratoria on selling their facial recognition services to law enforcement. That's not nearly enough.

Given the growing prevalence of surveillance, Harvard cybersecurity expert Bruce Schneier argued in a recent New York Times column that we "need to have a serious conversation about…how much we as a society want to be spied on by governments and corporations—and what sorts of influence we want them to have over our lives." If we want to prevent a dystopian future of automated authoritarianism, strict limits on the government's use of facial recognition technology are a good place to start.