The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Due Process and AI
How does AI challenge basic procedural due process protections and what should be done?
As we all now know, AI plays a pervasive role in our lives, often without our knowledge. When an AI system links a person's face to a still from surveillance video, recommends whether to detain a person in jail, or responds with "situational awareness" to a national security threat, what assurance is there that the system can be trusted to safely perform as promised? AI is being used throughout government in hundreds of settings, including those that affect people's core constitutional rights. In response, however, many judges, officials, and scholarly commentators have uncritically credited developers' claims that these systems are reliable and have been rigorously tested. All too often, those assurances have not been borne out when independent researchers test the AI systems.
And AI has created due process challenges across the world. Just ask it. ChatGPT just told me this: "AI has created significant challenges to due process worldwide in various ways, particularly in criminal justice, government decision-making, and surveillance." And I agree.
AI is now relied on throughout government, even in high-impact settings, such as decisions to identify suspects using facial recognition, detain individuals, or terminate public benefits. Many more uses are being developed, ranging widely from predicting hospital bed usage, to counting endangered species like sea lions, to securing the border. While some of these AI applications may be helpful and mundane, others may seriously harm people and impact their rights. Consider an example from one person's case.
In November 2019, a man entered a shop in West New York, a small New Jersey town near the Hudson, that offered international wire transfers, repaired cell phones, and sold accessories. He asked an employee who was counting money at the counter about wiring funds to South America, and when she turned to look at her computer, he went through an open door behind her. She assumed that he was going to speak to a cell phone repair tech in the back room, but instead the man surprised her from behind, seized the nearly $9,000 she was counting, pistol-whipped her head with a black handgun, and left. The employee described him to the police who arrived shortly afterwards as a "Hispanic male wearing a black skully hat" and recalled that he had briefly entered the store earlier that same day.
The store's surveillance camera had captured footage of both the robbery and the earlier visit. Local detectives pulled a still image from the footage, a "probe image," as they call it in biometrics, and uploaded it for analysis; they found no match in their New Jersey system. Next, they sent it to the Facial Identification Section of the New York City Police Department's Real Time Crime Center, where a detective using that office's AI system identified a man named Arteaga as a "possible match." The local detectives then showed the store employee a photo array containing Arteaga's photo and five innocent filler photos, and she identified him.
That AI system was a black box. The detectives did not know how it worked, and neither did the court. It ran its analytics and ranked and selected candidate images. We know quite a bit more now about how such systems perform and where they fail. The defense lawyer in the case, completely in the dark except for knowing that FRT had been used, argued that this violated due process.
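To make concrete what "ranked and selected candidate images" typically involves, here is a minimal, purely illustrative Python sketch. It is not the system used in Arteaga's case, whose internals were never disclosed; the function names, the 128-dimensional "templates," and the random data below are all invented. The generic approach described in the biometrics literature is to convert the probe image into a numeric template (an "embedding"), score it against templates for every photo in a gallery, and hand the highest-scoring candidates to a human analyst for review.

```python
# Purely illustrative sketch of generic face-recognition candidate ranking.
# NOT the system described above; embeddings are random stand-ins for the
# numeric templates a real system would extract from face images.
import numpy as np

rng = np.random.default_rng(seed=0)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two template vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_candidates(probe: np.ndarray, gallery: dict, top_k: int = 5):
    """Return the top_k gallery entries most similar to the probe template."""
    scored = [(name, cosine_similarity(probe, emb)) for name, emb in gallery.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

# Stand-in data: a 128-dimensional template for the probe image and a small gallery.
probe_template = rng.normal(size=128)
gallery_templates = {f"photo_{i:04d}": rng.normal(size=128) for i in range(1000)}

# A human analyst would review these ranked "possible matches."
for name, score in rank_candidates(probe_template, gallery_templates):
    print(f"{name}: similarity {score:.3f}")
```

The reliability questions the court confronted live in exactly the parts this sketch waves away: how the templates are computed, how the comparison gallery was assembled, and what similarity score counts as a "possible match."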
In Spring 2024, a landmark National Academy of Sciences report called for a national program of "testing and evaluation" before such systems are deployed, given evidence that "accuracy varies widely across the industry." So far, no such program exists.
In their 2023 ruling in State v. Arteaga, appellate judges in New Jersey agreed with the trial judge that if the prosecutor planned to use facial recognition technology, or any testimony from the eyewitness who selected the defendant in a photo array, then they would have to provide the defense with information concerning the AI program used. Specifically, the prosecutor had to share: "the identity, design, specifications, and operation of the program or programs used for analysis, and the database or databases used for comparison," as all "are relevant to FRT's reliability." The New Jersey court emphasized, quoting the U.S. Supreme Court's ruling in Ake v. Oklahoma, that the "defendant will be deprived of due process" if he was denied "access to the raw materials integral to the building of an effective defense."
And yet, by the time the appeal was decided, Arteaga had remained in pre-trial detention for four years. Rather than remain in jail and pursue a trial, he pleaded guilty for time served. He explained to a journalist: "I'm like, do I want to roll the dice knowing that I have children out there? As a father, I see my children hurting."
Like most states, New Jersey does not regulate government use of FRT or other types of AI, although the state Attorney General has been soliciting input and assessing law enforcement use of FRT. And defense lawyers have raised concerns about compliance with the Arteaga decision, as they still are not routinely receiving discovery regarding use of FRT.
It is not just facial recognition; a wide range of government agencies deploy AI systems, including in courts, law enforcement, public benefits administration, and national security. If the government refused to disclose how or why it linked a person's face to a crime scene image, detained a person pretrial, cut off public benefits, or denied immigration status, there would be substantial procedural due process concerns, as I detail in my book and in a forthcoming article. If the government delegates such tasks to an AI system, the due process analysis should not change.
Interesting that the police at least tried to do the right thing -- use the AI as a tool, then do manual confirmation using a court-acceptable procedure, the lineup. Maybe this is inadequate and needs to be reviewed for court approval, the way fingerprints and DNA were laboriously done in a few cases before general acceptance.
Also, wondering what's under the hood in AI, so you can mount a proper defense, like with any other part of the investigation chain, is good, but nobody demands to know what's "under the hood" in the neurons of real brains making accusations. It's my understanding this even occupies a special place in law, as "direct evidence," as opposed to circumstantial evidence like DNA.
Well, they do wonder what's going on in brains all the time, of course, but there's no scientific method. (And probably never should be! Talk about abusive tools of tyrants that should never be built. I can even write rhetoric for tyrants of the future. "Brain scans to read minds aren't testifying against oneself. It's physical evidence!" the lawyer stated, his shifty eyes at full speed.)
"nobody demands to know what's "under the hood" in the neurons of real brains making accusations"
Are you kidding? Establishing / impeaching credibility is, like, the most important part of trial practice. What do you think lawyers are doing in front of juries?
"what assurance is there that this system can be trusted to safely perform as promised?"
Who says that we need a standard of assurance different than our standard for humans? Inherent in our speech is the dictum "people make mistakes."
Applying that to facial recognition, the proper policy is to verify the machine results before taking action. If verification does not occur, that is the real error, not the machine's result.
So, I view this whole post as a strawman.
Krayt and Tuttle base their comments on the same unreliable analogy. They suppose that the situation is identical if a presumptive probable-cause subject gets selected for a lineup by either method, traditional police work or AI facial recognition. That is not even close to being the case.
Traditional police work vastly narrows pre-lineup possibilities in ways which exclude nearly the entire population. Absent police corruption, only a few individuals with history featuring some evidence of connection to the time, place and circumstances of the crime get through that initial sieve. Whether corruption or some other untoward activity figured in the initial selection is a matter accessible to defense investigation, and courtroom cross-examination.
Nothing of that defendant-protective predicate is present with AI facial recognition. It makes the entire population subject to initial dragnet suspicion, with multi-axis physical happenstance the presumptive basis to narrow the sample. Cheek bones, eye placement, ear shapes, etc., provide one class of variable. Light characteristics of the images under study always vary. The variances deliver artifacts and interpretive stumbling blocks without limits. Those constraints apply alike to both crime scene imaging, and to the images sampled in the checked database.
The question whether digital photographic enhancements play any role in the process looms large. The question also features a consideration whether one unique suspect becomes the product of the analysis, or whether some unknown collection of others were also delivered, and then reduced by unspecified methods to the one unfortunate exposed to criminal jeopardy by placement in a lineup.
With so many axes of uncertainty inherent in the method, it is little wonder that advocates for reliance on AI facial recognition prefer to keep the specifics of each analysis out of sight.
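A back-of-the-envelope illustration of the dragnet point above, using invented numbers rather than figures for any real system: even a matcher that rarely errs on a single comparison will, when run against millions of gallery photos, be expected to produce many false "possible matches," while the same error rate applied to a short, investigation-narrowed suspect list yields almost none.

```python
# Hypothetical numbers, for illustration only; not measurements of any real system.
gallery_size = 10_000_000        # photos searched in a dragnet-style gallery
false_match_rate = 1 / 10_000    # assumed per-comparison false-match rate

# Expected false "possible matches" when the entire gallery is searched.
print(gallery_size * false_match_rate)        # 1000.0

# Expected false matches for a short, investigation-narrowed suspect list.
suspect_list_size = 20
print(suspect_list_size * false_match_rate)   # 0.002
```

Where a particular deployment falls between those two regimes depends on details such as gallery size, match thresholds, and candidate-review practices, which is precisely the information the discovery disputes above concern.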
Take a less controversial piece of tech - a speed camera.
If the defence isn't allowed access to the technicals of the camera, calibration, etc....?
It’s not a new question.
See e.g. Frank Riley, The Cyber and Justice Holmes, Worlds of If Science Fiction, 1955.
https://www.gutenberg.org/cache/epub/59148/pg59148-images.html
Of course today, it’s no longer such an if.
I think you're all getting caught up in the hype about AI, and are thus losing sight of the fact that AI is simply (yes, simply) a different way of having a computer produce a desired output (in the above example, pictures and names of potential suspects). Nothing an AI model produces is something that an algorithm written by a human couldn't also have produced. After all, there were (human-generated) programs for facial recognition, fingerprint matching and so on long before AI became a thing.
So rather than focus on how a particular piece of output was produced, I suggest focusing on the output itself. Is it proper to access a database of photos to identify potential suspects? Is it proper to access a database to identify possible DNA matches? If yes, then how the picture or DNA match is tagged is irrelevant.
For reasons which escape me entirely, questions like these always call forth a cohort of folks with peculiar priorities. They seem to prioritize getting someone convicted (or in the case of capital punishment, killed), instead of prioritizing taking the care necessary to be sure it is the right person.
I think that played a big role in ending public executions. The crowds too often turned out to be cautionary examples to illustrate the contents of the jury pool.
In England one reason for ending public executions was the prevalence of pickpockets - notwithstanding that the pickpockets were themselves often guilty of a felony, which in times past was capital.
This is not an AI issue at all: the police treated a 'tip' (the fuzzy video), along with a software 'match,' as probable cause. An AI is not an eyewitness.
If we were talking about due process and AI, I think there are far more potential benefits than harms, certainly in criminal justice, government decision-making, and surveillance. LLMs will know and infer from every page of law written, a boon for lawyers and defendants alike. AI adaptability and scalability fundamentally change surveillance, of video feeds and government transactions.
But of course, back to the original post. It all depends on how our government actors in power use AI results.
LLMs will know and infer from every page of law written, a boon for lawyers and defendants alike.
LLMs using present methods will "know" next to nothing, only some statistical patterns in the way words have been used historically. LLMs will accurately infer nothing at all. Some of what they deliver will be readily mistakable for accurate inference. That is not a good thing. If all you have to look at in the way of inference are hallucinations, the hallucinations which seem plausibly accurate will prove more deceptive than the others.
What that might imply for differently engineered applications which also operate under the AI rubric—image analyzers, and image generators, for instance—ought to be considered separately.