The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
An Algorithm for Predicting Recidivism Isn't a Product for Products Liability Purposes
"'[I]nformation, guidance, ideas, and recommendations' are not 'product[s]' under the Third Restatement, both as a definitional matter and because extending strict liability to the distribution of ideas would raise serious First Amendment concerns."
From Friday's Third Circuit opinion by Judge Cheryl Krause, joined by Chief Judge D. Brooks Smith and Judge Thomas Hardiman, in Rodgers v. Christie:
June Rodgers's son was tragically murdered, allegedly by a man who days before had been granted pretrial release by a New Jersey state court. She brought products liability claims against the foundation responsible for the Public Safety Assessment (PSA), a multifactor risk estimation model that forms part of the state's pretrial release system….
The NJPLA [New Jersey Products Liability Act] imposes strict liability on manufacturers or sellers of certain defective "product[s]." But the Act does not define that term…. [B]oth parties agree the Third Restatement [of Torts] definition [of "product"] is the appropriate one. We therefore assume that to give rise to an NJPLA action, the "product" at issue must fall within section 19 of the Third Restatement.
The PSA does not fit within that definition for two reasons. First, as the District Court concluded, it is not distributed commercially. Rather, it was designed as an "objective, standardized, and … empirical" "risk assessment instrument" to be used by pretrial services programs like New Jersey's….
Second, the PSA is neither "tangible personal property" nor remotely "analogous to" it. As Rodgers' complaint recognizes, it is an "algorithm" or "formula" using various factors to estimate a defendant's risk of absconding or endangering the community…. "[I]nformation, guidance, ideas, and recommendations" are not "product[s]" under the Third Restatement, both as a definitional matter and because extending strict liability to the distribution of ideas would raise serious First Amendment concerns.
Rodgers's only response is that the PSA's defects "undermine[ ]" New Jersey's pretrial release system, making it "not reasonably fit, suitable or safe" for its intended use. But the NJPLA applies only to defective products, not to anything that causes harm or fails to achieve its purpose….
Calling the output of a computer program an "idea" seems at least as peculiar as calling the program itself a "product."
They were talking about the PSA, not its outputs. The PSA implements an algorithm, which is (an embodiment of) an idea.
Under patent law such a program would be an abstract idea, which can be executed without a computer, using pencil and paper. In fact, you may have seen the movie about NASA's human "computers" who calculated the original manned space missions. Simply doing the calculations with an electronic device does not make the process any less of an idea.
rsteinmetz — Do you suppose orbital calculations are patentable inventions? I am no patent lawyer, but that would surprise me.
Not without more.
At one point in my career, I worked for a large electronics company that had a couple Russian development centers (PhDs from the Soviet/Russian academy of science were cheap then). They would send me equations, and I would have to tell them that they needed to find a use for the equation before it was patentable.
In recent years, after the Supreme Court’s Alice decision, even that wasn’t enough for patentability.
Perhaps just as much to the point, a product that is only supposed to produce a statistical result can't be proven defective by one failure anyway.
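To put rough numbers on that, here is a minimal sketch in Python; the 10% score and the release count are made up for illustration, not taken from the PSA:

    # Hypothetical figures: suppose the tool rates 1,000 released
    # defendants at a 10% chance each of re-offending.
    n, p = 1000, 0.10

    expected = n * p  # ~100 re-offenses expected even if the model is correct
    prob_at_least_one = 1 - (1 - p) ** n  # essentially certain

    print(f"Expected re-offenses among {n} low-risk releases: {expected:.0f}")
    print(f"Probability of at least one re-offense: {prob_at_least_one:.6f}")

A single bad outcome is exactly what a correctly working statistical tool predicts will happen sometimes, so it proves nothing about a defect.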
I agree. This focus on "product" is likely to just get everything reclassified as "products". The court would have done better by resolving the case on the merits - where, as you say, the plaintiff still would have lost.
As with all new AI-type stuff, it is a tool, and its outputs should be reviewed by humans before you send in the SWAT team, guns blazing.
And any such review in this case is a statistical argument, so the only provably safe choice is no early release or bail.
But the opinion still includes "...because extending strict liability to the distribution of ideas would raise serious First Amendment concerns."
This sounds like deliberate use of language to signal that even if the court couldn't reject the claim on the basis of the definition of a "product," the plaintiff would still face a substantial burden overcoming any First Amendment objections.
True, though I do think it ought to be possible to challenge the algorithm on the grounds that it is badly designed.
That would require looking at the internal workings, and having the usual clown show of competing experts, real and imaginary.
Yeah - this seems a no-brainer to me. Algorithms are products like any tangible one.
I agree, a computer program is a product. People sell enough of them.
The thing is, the software here makes no claim to be perfectly predicting the future, nor is such a prediction possible. It's just making a formalized, numerical "best guess". Humans being what we are, even the best possible guess is going to be wrong in numerous individual instances.
It would take a LOT of work to establish this software was defective, because it's going to "make mistakes" in individual cases even if it were working perfectly.
We have well-developed product liability tort law that already addresses the exact issue of fitness for purpose.
We shouldn't shut the courthouse door on examining the issue.
The basic problem, though, is that being merely defective wouldn’t normally be actionable here, because there is no privity, and the actual party utilizing the software has sovereign immunity. Which essentially means that tort law is required, and several requirements for negligence would likely not be met. For example, the creators of the software may owe a duty to the sovereign that is their customer, but does that extend to a third party who may be affected? Is that liability reasonably foreseeable? My thought is that this injured party was not remotely reasonably foreseeable. That is likely why plaintiffs tried to avail themselves of products liability law.
If the algorithm's makers were liable, then so could be the trainers of the state employees, and so could be the people who taught those employees to read and write, because they used those skills to make a decision.
I agree with the court that strict liability would be wrong to apply here.
I think a computer program to do this is indeed a product, not an idea. Otherwise software, which has long been considered subject to liability, would be considered immune. This software isn’t inherently different from other software. All technology implements ideas.
A policy-based issue, one I think might have been the real unspoken basis for the decision but which should be discussed explicitly, is that predicting people’s future behavior has similarities to predicting the weather, or at least to predicting the weather in the field’s infancy in the 19th century. There is no known way to do it reliably. Error is inevitable. So imposing liability for error would simply stop people from attempting to develop the field.
"Error is inevitable."
Would it be better to say "Risk is inevitable?"
Maybe the software said this person had a low chance (say 10%) of re-offending.
There wouldn't be an "error" and really the only debatable aspect would be how much "risk" policy makers were willing to accept.
You are assuming the algorithm correctly estimates the risk. No such assumption is warranted.
Aren't both true? Error is inevitable in risk assessment, but even if the risk assessment is accurate, that (likely) does not mean there is a zero-chance of an event happening.
Your comment is consistent with my reply. In order for both to be true, there must be (at least) two debatable aspects, not “only” one.
Fair enough.
Error is inevitable. So imposing liability for error would simply stop people from attempting to develop the field.
Not really. It can be developed before being relied on.
It's foolish to put untested software into use for this, or any other, purpose. Shouldn't the courts continue to make their own decisions, while running the software for comparison purposes, for a good long time before switching over?
You assume people making their own decisions is reliable. People aren’t very good at predicting other people’s future dangerousness. So if what people think is different from what the software says, that proves nothing. There is no standard out there that the software could be compared to. All you can do is see if people actually do what is predicted. And of course, to see that, you have to give them bail when the software says to. That’s the only way it can be tested. How can you tell what they would have done if given bail when they’re still in jail?
I’ve said before, our situation is like Frodo’s. We are naked in the dark. There is nothing, no veil, between us and the Wheel of Fire. We are in fact much more ignorant and helpless than we think we are.
In the real world, if you sell it, it is a product.
In honor of Justice Holmes’ birthday. And topical.
http://www.gutenberg.org/files/59148/59148-h/59148-h.htm
This whole line of stories on efforts to fight bail reform is just wrongheaded.
The function of bail is to ensure that the defendant shows up for trial. Nothing more, nothing less. Success or failure of bail reform should be judged on that basis. Do defendants show up for trial, or did more defendants abscond?
The problem is that all of these articles seem to be completely accepting of the bail reform opponents' premise that bail reform should be judged on the basis of crime prevention.
Doesn't bail more generally serve to remind the bailee to do nothing which would jeopardize getting back the bail, including committing further crimes?
Ignorant question: is bail forfeited by committing further crimes? Suppose one is arrested and charges dropped; is bail forfeited? Suppose one is arrested and acquitted; is bail forfeited? Suppose one is arrested and convicted; is bail forfeited? (Assuming one shows up for the bailed offense and does not otherwise violate terms of bail.)
"Doesn’t b.Bail more generally serve to remind the bailee to do nothing which would jeopardize getting back the bail, including committing further crimes?"
I'm not a lawyer, especially not a criminal lawyer, but my understanding is that the only criterion for a bailee to get the money back is showing up for trial on the case for which he / she was granted bail.
Hypothetical: suppose the program is accidentally defective; the algorithm says to multiply risk factor A by 0.8 and the program mistakenly multiplies by 0.08, thus releasing too many high-risk people. Would that be complainable?
Or suppose it is intentionally defective and assigns abnormally high or low risk for specific names, or if specific parameters are entered (criminal code 3.1415926535 gets especially high risk; criminal code 2.71828 gets especially low risk). Would that be complainable?
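For concreteness, a minimal sketch of both hypotheticals in Python; the factor name, the weights, and the rigged codes are invented for illustration and are not the actual PSA formula:

    PI_CODE = "3.1415926535"  # hypothetical rigged offense codes
    E_CODE = "2.71828"

    def risk_score(factor_a: float, offense_code: str) -> float:
        """Return a risk estimate in [0, 1]; higher means riskier."""
        # Accidental defect: the spec says to weight factor A by 0.8,
        # but the code multiplies by 0.08, understating risk tenfold.
        score = 0.08 * factor_a  # BUG: should be 0.8 * factor_a

        # Intentional defect: specific input codes get rigged scores.
        if offense_code == PI_CODE:
            score = 0.99
        elif offense_code == E_CODE:
            score = 0.01
        return min(max(score, 0.0), 1.0)

    print(risk_score(1.0, "0000"))  # 0.08, should have been 0.8
    print(risk_score(1.0, E_CODE))  # 0.01 no matter how risky the defendant

Either defect would be straightforward to demonstrate once the source code or a set of input/output pairs was in hand, which is a different posture from arguing that one bad outcome proves a defect.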
"June Rodgers's son was tragically murdered, allegedly by a man who days before had been granted pretrial release by a New Jersey state court. "
Poor dear. She thought the system was supposed to protect people, not criminals and the government.
Now she knows better.
Ah Bob - never seems to know what allegedly means. Or due process.
And yet loves to post on legal blogs.
NJ made a purely political, reckless policy change and her son died.
What due process did he get?
Due process...for a policy change?
Quit it with your lame righteous rabble-rouser routine. Your emotional appeal is worse than useless.
Individual liberty, except for crime and terrorism. Then you don't much care for rights or procedures to make sure people are actually guilty.
It's been 3 years; there must be a verdict, but the court couldn't even be bothered to check. So "allegedly" the killer.
Felon let out to murder because of a politically motivated bail "reform" and you don't care. Your compassion is strangely misplaced.
Her son would be alive except for the reckless behavior of the government.
All sorts of people would be alive if the government locked up everybody.
All sorts of people would be alive if they didn't drive, shoot selfies too close to cliff edges, go outside their front door, or stay inside.
Eating has a risk. So does not eating, or eating poorly. Breathing can kill you. Water is a known hazard under the right conditions.
Life is full of tradeoffs. You can't protect against every known failure mode without creating new failure modes. How many innocent people do you want to lock up just in case the government got it right? You know, the government which has corrupt cops, prosecutors, and judges every once in a while? The government which panders to voters like you who are scared to death of life and all its risks?
"How many innocent people do you wnat to lock up just in case the government got it right."
Bob's answer likely would depend on what they look like, and perhaps on how they think.
Your emotional appeals against reform due to some crime anecdote or other are the misplaced compassion.
Actually, I'm pretty sure they're not compassion.
"Reform" implies improvement. This was just "change" driven by left wing political concerns and defense lawyers.
The "unreformed" system was better.
You are repeating that assertion and not answering questions.
Ok...serious question, Sarcastr0. Why can't Ms. Rodgers sue the person whose decision implemented the use of this PSA algorithm? Seems to me a very bad call was made.
Suppose that under the old system, judges using their best common sense had an error rate of 2.4%. We all agree (I think) that if the error rate under this new system is 2.3%, then it is objectively "better." Right? Or, at least, we'd all agree on this if the new error rate was lower on serious crimes.
If we know that 24 out of 1000 bailees committed serious offenses under the old system, but now only 10, or 15, or 23--under this new algorithm--commit serious crimes while out on bail . . . then this is a success. Obviously, OF COURSE, not to the victims of these 10 or 15 or 23 serious crimes and their families. But society writ large is benefiting.
It might be that the algorithm is off, and actually more bailees now are engaging in bad acts. That's why I'm sorry this case was cut off . . . I'd like to know the actual statistics and the justice system should be informed by those actual statistics.
(Of course, there are other confounding variables at play. Let's say that City X saves $5,000,000 a year with this new lower-bail policy, due to not paying for pretrial incarceration. And the city uses these savings to hire 250 more street cops. And, not surprisingly, many types of crimes are reduced. If so, then it might be that the low-bail system is a success, Even If, say, bailees commit 26 serious crimes per 1,000 released, as opposed to the 24 under the old system.)
That’s why I’m sorry this case was cut off . . . I’d like to know the actual statistics and the justice system should be informed by those actual statistics.
Concur. And thanks for the detailed response. 🙂
"Supposed that under the old system, judges using their best common-sense had an error rate of 2.4%. We all agree (I think) that if the error rate under this new system is 2.3%, then it is objectively “better.” Right? Or, at least, we’d all agree on this if the new error rate was lower on serious crimes."
No. It's more complicated than that.
There are two types of errors: (1) letting people out who are a threat to society and (2) locking people up who are not (and the error rate for this is very difficult to compute). If reducing the error rate for (1) involves increasing the error rate for (2), the algorithm might not be successful. Suppose the algorithm only gave bail to people charged with white collar crimes (or whatever category has been shown least likely to offend while awaiting trial). I suspect that would reduce the incidence of serious crimes being committed by someone out on bail, but that's simply because it is so overinclusive.
On the other hand, it's also possible that the number of additional people released could be enough to overwhelm the increase in crime, so that even though the total number of serious crimes committed by people out on bail increases, the rate decreases. It's not obvious that society should favor an increase in the total number of crimes committed by people out on bail just because the rate is reduced.
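A minimal worked example of that rate-versus-total point, with hypothetical figures loosely echoing the 24-per-1,000 number upthread:

    # Hypothetical figures only.
    old_released, old_rate = 1000, 0.024  # old system: 24 crimes by bailees
    new_released, new_rate = 2000, 0.015  # reform releases twice as many people

    old_total = old_released * old_rate   # 24 crimes
    new_total = new_released * new_rate   # 30 crimes

    print(f"Old: rate {old_rate:.1%}, total {old_total:.0f} crimes by bailees")
    print(f"New: rate {new_rate:.1%}, total {new_total:.0f} crimes by bailees")

The per-release rate falls from 2.4% to 1.5%, yet the total number of crimes committed by people out on bail rises from 24 to 30, which is exactly the trade-off described above.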
All policies have costs.
Allowing those upon whom costs fall to sue would render all policymakers legally exposed.
Plus it puts courts in the business of evaluating what’s good policy and what’s negligent policy.
Talk about judicial supremacy!!
Sarcastr0, I get what you are saying but I am troubled by the implications here. This has the same 'feel' as those awful qualified immunity cases we read about here on VC.
I most definitely do not want judicial supremacy issues. 😉
Criminals are people.
Some are.
I meant innocent people as you well know.
And just when do you decide if they are innocent? Before or after the trial?
There was an episode of Deep Space Nine called "Tribunal," where O'Brien was captured and placed on trial by the Cardassians. The Cardassian judicial system is premised on the idea that everyone brought before the court is guilty. Not presumed guilty with a chance, however slight, to prove their innocence, but just straight up guilty and already sentenced. The defendant had lawyers whose job it was to convince the defendant not only to acknowledge guilt, but to concede to the wisdom of the state's prosecution.
A friend of mine suggested that such a system was impossible. But given how our criminal justice system functions in practice, with routine guilty pleas and in court apologies, and how many people assume an arrest is guilt (unless it's someone they happen to like), we're closer to the Cardassian version of justice than we should be.
The model for Title IX, no doubt.
No. All are. Once you decide that some are not, that empowers you or society at large to inflict all manner of cruelty on them.
Seems to me that, apart from the other issues this case had, you have one of causation. The program didn't release the detainee, it just made a statistical prediction which the authorities then relied upon, in part, to decide he qualified for release. Is the program, even assuming it is defective, really the cause (both but-for cause and proximate cause) of the subsequent murder by the guy who was released?
I'd say it would depend on the amount of error (if indeed the software was defective).
If the program spit out that he's a 10% risk but it should have shown he actually was a 90% risk, then - well, how would the reviewers know the software was defective?
Programming Rule Number One:
"Code can't fix stupid."
The state is stupid for using an algorithm.
The Mother is stupid for thinking bail is related to pre-trial guilt.
We are all stupid for discussing the 'thought processes' of New Jersey.
How hard is it to just kill them?
I would think tracking down absconders is also terribly expensive.