The Volokh Conspiracy



Due Process and AI

How does AI challenge basic procedural due process protections and what should be done?


As we all now know, AI plays a pervasive role in our lives, often without our knowledge. When an AI system links a person's face to a still from surveillance video, recommends whether to detain a person in jail, or responds with "situational awareness" to a national security threat, what assurance is there that the system can be trusted to perform safely as promised? AI is being used throughout government in hundreds of settings, including ones that affect people's core constitutional rights. In response, however, many judges, officials, and scholarly commentators have uncritically credited developers' claims that these systems are reliable and have been subjected to rigorous testing. All too often, those assurances have not been borne out when independent researchers test the AI systems.

And AI has created due process challenges across the world. Just ask it. ChatGPT just told me this: "AI has created significant challenges to due process worldwide in various ways, particularly in criminal justice, government decision-making, and surveillance." And I agree.

AI is now relied on throughout government, even in high-impact settings, such as decisions to identify suspects using facial recognition, to detain individuals, or to terminate public benefits. Many more uses are being developed, ranging widely from predicting hospital bed usage, to counting endangered species like sea lions, to securing the border. While some of these AI applications may be helpful and mundane, others may seriously harm people and affect their rights. Consider an example from one person's case.

In November 2019, a man entered a shop in West New York, a small New Jersey town near the Hudson. The shop offered international wire transfers, repaired cell phones, and sold accessories. He asked an employee who was counting money at the counter about wiring funds to South America, and when she turned to look at her computer, he slipped through an open door behind her. She assumed that he was going to speak to a cell phone repair tech in the back room, but instead the man surprised her from behind, seized the nearly $9,000 she had been counting, pistol-whipped her with a black handgun, and left. The employee described him to the police who arrived shortly afterwards as a "Hispanic male wearing a black skully hat" and recalled that he had briefly entered the store earlier that same day.

The store's surveillance camera had captured footage of both the robbery and the earlier visit. Local detectives pulled a still image from the footage, a "probe image," as it is called in biometrics, and uploaded it for analysis: they found no match in their New Jersey system. Next, they sent it to the Facial Identification Section of the New York City Police Department's Real Time Crime Center, where a detective using the Center's AI system identified a man named Arteaga as a "possible match." The local detectives then showed the store employee a photo array containing Arteaga's photo and five innocent filler photos, and she identified him.

That AI system was a black box. The detectives did not know how it worked, and neither did the court. It ran its analytics and ranked and selected candidate images. We now know quite a bit more about how such systems perform and where they fail. The defense lawyer in the case, completely in the dark except for knowing that facial recognition technology (FRT) had been used, argued that this violated due process.

In Spring 2024, a landmark National Academy of Sciences report called for a national program of "testing and evaluation" before such systems are deployed, given evidence that "accuracy varies widely across the industry." So far, no such program exists.

In their 2023 ruling in State v. Arteaga, appellate judges in New Jersey agreed with the trial judge that if the prosecutor planned to use facial recognition technology, or any testimony from the eyewitness who selected the defendant in a photo array, then they would have to provide the defense with information concerning the AI program used. Specifically, the prosecutor had to share: "the identity, design, specifications, and operation of the program or programs used for analysis, and the database or databases used for comparison," as all "are relevant to FRT's reliability." The New Jersey court emphasized, quoting the U.S. Supreme Court's ruling in Ake v. Oklahoma, that the "defendant will be deprived of due process" if he was denied "access to the raw materials integral to the building of an effective defense."

And yet, by the time the appeal was decided, Arteaga had spent four years in pre-trial detention. Rather than remain in jail and pursue a trial, he pleaded guilty in exchange for time served. He explained to a journalist: "I'm like, do I want to roll the dice knowing that I have children out there? As a father, I see my children hurting."

Like most states, New Jersey does not regulate government use of FRT or other types of AI, although the state Attorney General has been soliciting input and assessing law enforcement use of FRT. And defense lawyers have raised concerns about compliance with the Arteaga decision, as they still are not routinely receiving discovery regarding the use of FRT.

It is not just facial recognition; a wide range of government agencies deploy AI systems, including in courts, law enforcement, public benefits administration, and national security. If the government refused to disclose how or why it linked a person's face to a crime scene image, placed a person in jail pending bail, cut off public benefits, or denied immigration status, there should be substantial procedural due process concerns, as I detail in my book and in a forthcoming article. If the government delegates such tasks to an AI system, the due process analysis should not change.