System Errors and Due Process
How can government agencies better safeguard procedural due process rights?
As Supreme Court Justice Felix Frankfurter put it: "The history of American freedom is, in no small measure, the history of procedure." The scale of threats to due process, however, has greatly expanded. And so have the opportunities to correct the government's reliance on error-prone systems that harm our liberty and property rights.
In Minnesota, a recent federal lawsuit alleges that a private insurer illegally denied "elderly patients care owed to them under Medicare Advantage Plans" by deploying an AI model, known to have a 90% error rate, to override determinations by physicians. The tool is proprietary, and the company has denied requests from patients and physicians for more information; I am aware of no independent research evaluating its performance. If the allegations are correct, the model, used in many states, could have produced errors in millions of cases. (Federal regulations that took effect in 2024 now forbid exclusive reliance on AI to make such determinations.)
Another significant source of error involves not an automated system operating alone, but the way human decision-makers rely on one. As NIST has explained in the context of facial recognition technology, "what matters then is the human response," because a human police officer reviews the candidate photos that the system pulls to decide whether any of them depicts a suspect. The system may score the images, or call the level of match "strong," using unclear criteria, and, as the National Academy of Sciences concluded in a landmark 2024 report, such interpretations "can prejudice or bias human review of images." The evidence on the interaction between these systems and the people who use them is limited, but the research that has been done suggests that people can be very bad at recognizing a face from a series of images. Even in a setting where the task was to examine high-quality passport photos, participants made errors 50 percent of the time. Eyewitness misidentifications are a leading cause of wrongful convictions, and it is well understood that suggestive police lineups can alter an eyewitness's memory. And we know that false arrests can result, and have resulted, from the use of untested facial recognition systems. Except in a handful of states that have adopted some regulation, the roll-out of facial recognition has proceeded largely unchecked.
Another source of error may lie buried in the code of a software tool that the government relies upon. In one noteworthy example, People v. Collins, a New York trial judge found the secrecy behind a government program used to interpret complex DNA mixtures, called the Forensic Statistical Tool (FST), to be highly problematic: "The fact that FST software is not open to the public, or to defense counsel, is the basis of a more general objection… [T]he FST is, as a result, truly a 'black box'—a program that cannot be used by defense experts with theories of the case different from the prosecution's." The FST was a "black box" only because the government refused to disclose the underlying code. When a defendant objected, however, a federal judge ordered the code disclosed. Outside experts then reviewed it and found serious flaws, and the New York Medical Examiner's Office stopped using the error-ridden software.
Our courts are only slowly coming to grips with the ways that modern technology can increase the chances of an error. In Herring v. United States, decided in 2009, the Supreme Court examined this issue in the context of the basic databases that local governments maintain to track criminal cases. A defendant, Herring, was arrested based on a warrant. This was an error; the warrant had actually been recalled, or canceled, several months earlier, but the government database had not been updated. A majority of the Justices decided that, because police acted in "good faith" when they arrested Herring, and the mistake in the database was not "systematic error," Herring's conviction should not be reversed. But could the Justices be sure that these errors were not systematic? As Justice Ruth Bader Ginsburg put it in dissent, although databases "form the nervous system of contemporary criminal justice operations … [t]he risk of error stemming from these databases is not slim."
The best way to minimize serious errors is to use basic quality controls: look for errors, cure them, and improve the process. Yet even rudimentary error checks are often lacking in government processes that deeply matter to people's lives and their rights. Shortly after the Supreme Court's landmark procedural due process ruling in Goldberg v. Kelly, Yale scholar Jerry Mashaw described the "management side of due process" and called on government agencies to adopt a "quality assurance system," since fair hearings alone would not assure accuracy in decision-making. Due process requires good management, Mashaw argued, and not just individual hearing rights.
More recently, David Ames, Cassandra Handan-Nader, Daniel E. Ho, and David Marcus have conducted important work examining mass errors in mass adjudication by agencies, focusing on the Board of Veterans' Appeals. Due process has little meaning if errors are rampant and there is no sound system for detecting and correcting them. They argue that "Goldberg's original premise of decisional accuracy requires a hybrid of external intervention, stakeholder oversight, and internal agency management." Instead, as I discussed in a prior post, we have seen the deployment of new AI systems that raise the specter of still more rapid and large-scale errors by administrative agencies. This matters for government officials too: work that is unfair and error-prone is also inefficient, as David Super has explored; due process does not necessarily involve a trade-off between efficiency and individual rights.
Actual testing and error rates should be taken far more seriously as a central part of what it means for due process to protect people against erroneous deprivations of life, liberty, and property. In the courts, standards of review make it very difficult to correct errors on appeal; this is especially true in criminal cases, where I have studied the post-conviction litigation of people who were eventually exonerated by DNA testing. Judges do not normally act like systems administrators; apart from their role in aggregate litigation, they handle cases one at a time. And even large, well-resourced agencies that process vast numbers of cases lack incentives to audit themselves. Our custodians of due process should be required to test for and correct serious errors, and, absent that, legislatures should act proactively to impose new due process safeguards requiring government officials to use quality control systems to minimize errors.