No Product Liability for Risk Assessment Tool Used in Deciding Whether to Release Arrestees Before Trial

So a federal district court held Tuesday.

The Volokh Conspiracy

From Judge Joseph Rodriguez, writing in Rodgers v. Laura & John Arnold Found. (D.N.J. June 11, 2019):

New Jersey's Criminal Justice Reform Act … moved pretrial release decisions away from a resource-based model heavily reliant on monetary bail to a risk-based model. Consistent with [a] constitutional amendment [passed by the voters], the statute expressly requires courts, when making pretrial release decisions, to impose pretrial conditions that will reasonably assure: (1) the defendant's appearance in court when required, (2) the protection of the safety of any person or community, and (3) that the defendant will not obstruct or attempt to obstruct the criminal justice process. The CJRA provides a hierarchy of pretrial release conditions and requires courts to utilize the least restrictive options necessary to achieve the three goals noted above. The major difference between the new system and the old system is that judges must first consider the use of non-monetary pretrial release conditions, which has resulted in a significant reduction in the use of monetary bail.

In order to assess risk, the CJRA utilizes a Public Safety Assessment ("PSA"). In particular, the State adopted a PSA developed by Defendant the Laura and John Arnold Foundation. The PSA is a data-based method that helps courts assess the risk that the criminal defendant will fail to appear for future court appearances or commit additional crimes and/or violent crimes if released pending trial. After scores are assessed, a decision-making framework proposes pretrial conditions to manage the risk. Although the trial judge must consider the PSA scores and pretrial conditions recommendations, the court makes the ultimate decision on conditions of release or detention after considering a variety of factors besides the PSA.

The Complaint alleges that in the first six months of 2017, New Jersey courts granted 3,307 motions for pretrial detention and approximately 18,000 individuals were released on non-monetary conditions…. Plaintiff claims that on April 5, 2017, Jules Black was arrested by the New Jersey State Police and charged for being a felon in possession of a firearm. Plaintiff alleges that Black was released on non-monetary conditions the following day because he had a low PSA score. Three days later, Black allegedly murdered Christian Rodgers. At the time of his death, Rodgers was 26 years old and is survived by his mother, Plaintiff June Rodgers, who brings this lawsuit both individually and on behalf of her son….

The New Jersey Products Liability Act (PLA) requires plaintiffs suing under the PLA to prove "by a preponderance of the evidence that the product causing the harm was not reasonably fit, suitable or safe for its intended purpose because it[:]

"a. deviated from the design specifications, formulae, or performance standards of the manufacturer or from otherwise identical units manufactured to the same manufacturing specifications or formulae, or

"b. failed to contain adequate warnings or instructions, or

"c. was designed in a defective manner."

The Restatement (Third) of Torts extends the definition of "product" beyond tangible goods to certain non-tangible "other items":

"For purposes of this Restatement: (a) A product is tangible personal property distributed commercially for use or consumption. Other items, such as real property and electricity, are products when the context of their distribution and use is sufficiently analogous to the distribution and use of tangible personal property that it is appropriate to apply the rules stated in this Restatement. (b) Services, even when provided commercially, are not products. (c) Human blood and human tissue, even when provided commercially, are not subject to the rules of this Restatement."

The Court finds that the PSA is not a product as defined by the PLA. It is neither a tangible product nor a non-tangible "other item" as contemplated by section 19 of the Restatement of Torts, and it is not distributed commercially. The Court has considered Plaintiff's argument that the PSA, as a matter of policy, should be considered a product analogous to approaches of the First and Fifth United States Courts of Appeals, which are "moving toward liability of technological systems." Plaintiff's arguments are misplaced, however. Plaintiff cites Lone Star Nat. Bank, N.A. v. Heartland Payment Systems, Inc., 729 F.3d 421 (5th Cir. 2013) (whether the economic loss doctrine barred negligence claims against a bank that had its security software breached by computer hackers), and Patco Constr. Co. v. People's United Bank, 684 F.3d 197 (1st Cir. 2012) (whether a bank's security procedure was commercially reasonable under the UCC), neither of which is a products liability case.

Rather, the PSA constitutes information, guidance, ideas, and recommendations as to how to consider the risk a given criminal defendant presents. The PSA essentially is a nine-factor rubric that uses "information gathered from [an eligible defendant's] electronic court records" to "measure the risk [he or she] will fail to appear in court and the risk he or she will engage in new criminal activity while on release," in an effort to provide New Jersey judges with objective and relevant information that they can use as one factor—among several—in making decisions about pretrial-release conditions. As such, the PSA does not "thwart" the role of judges and prosecutors, as Plaintiff contends.

Under the First Amendment, information and guidance such as that reflected in the PSA are not subject to tort liability because they are properly treated as speech, rather than product. See Restatement (Third) of Torts § 19 cmt. d (noting that courts "express[ ] concern that imposing strict liability for the dissemination of … information would significantly impinge on free speech"). Accordingly, Plaintiff's claims of products liability fail at the outset.

While the Court need go no further, Plaintiff also has failed to plausibly allege proximate causation required for products liability claims. Importantly, the discretionary decision of a judge on whether or not to detain an accused individual, in every case, creates an obstacle for finding proximate cause. By New Jersey statute, the judge is required to consider many different pieces of information in addition to the PSA score; the judge then has complete discretion to reject the recommendation to which the PSA contributes. That is, the PSA does not supplant judicial decision making but merely informs a judge's decision of whether to release or detain a defendant pending trial. This obviates Plaintiff's argument that the PSA was defective in that it omitted risk indicators of firearm possession and sex-crimes….
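For readers wondering what a "nine-factor rubric" of this kind looks like mechanically, here is a toy sketch of a point-based risk score. The factor names, weights, and 1-6 scale mapping are invented for illustration; they are not the Arnold Foundation's actual PSA factors or formula.

```python
# Toy illustration of a point-based pretrial risk rubric.
# The factor names, weights, and the 1-6 scale mapping below are invented
# for this example; they are NOT the Arnold Foundation's actual PSA.
def failure_to_appear_score(record: dict) -> int:
    """Map features of a defendant's court record to a 1-6 risk scale."""
    points = 0
    points += 2 if record.get("pending_charge") else 0      # assumed weight
    points += min(record.get("prior_fta_count", 0), 2) * 2  # capped; assumed weight
    points += 1 if record.get("prior_conviction") else 0    # assumed weight
    return min(1 + points, 6)  # clamp raw points onto the 1-6 scale

print(failure_to_appear_score({"pending_charge": True, "prior_fta_count": 1}))  # 5
```

The output of a rubric like this is just one input to a release recommendation, which is the role the opinion describes the PSA playing for New Jersey judges.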


37 responses to “No Product Liability for Risk Assessment Tool Used in Deciding Whether to Release Arrestees Before Trial”

  1. I suspect as these types of algorithms proliferate in the criminal justice system, they will consistently peg black Americans as a higher risk if that input is allowed, which makes the outcome of this case interesting from the perspective of what would happen if someone sued about racial discrimination with regards to a PSA score.

    1. I wonder what these programs report. Is it just some numerical score? Does it explain its reasoning, what factors went into the recommendation, show how they added and multiplied and so on?

      1. The commenter above is asking the right question. Some of the commercial ones are, I gather, black boxes. I would not consider their use to be due process.

    2. That can sneak in the back door even with race out of the explicit calculations. If you live in a neighborhood that the police sweep regularly, the “objective” data on how often you’ve been arrested will make you look like a higher risk. Details matter.
      The results in real life are mixed. Some algorithms spit out higher risk scores for marginalized groups. On the other hand there’s been a jurisdiction where the result was fewer lockups for the marginalized.

      1. An algorithm is created based upon a data set in which:
        -People with X skip bail more often than people without X.
        -But, because X is forbidden knowledge, algorithm is not provided with X, and works with all other data provided.

        If this outcome were to occur:
        -People with X are denied bail more often.
        -Yet, of those offered bail, people with X have the same rate of skipping bail as those without X.

        Question:
        -Was the algorithm fair for the above result?
        Both the positive and negative predictive values were the same for the X and non-X group.
        Would it somehow be more fair if people without X were denied bail even with low risk in order to compensate for holding high risk people with X?
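The hypothetical above is easy to make concrete. In the sketch below, the group sizes, denial counts, and skip counts are invented numbers chosen to match the commenter's setup: those released from the X group skip at exactly the same rate as those released from the non-X group, even though the X group is denied bail twice as often.

```python
# Invented numbers matching the commenter's hypothetical: two groups of 100
# arrestees each, equal skip rates among the released, unequal denial rates.
groups = {
    "X":     {"denied": 40, "released": 60, "skipped": 12},
    "non-X": {"denied": 20, "released": 80, "skipped": 16},
}

for name, g in groups.items():
    denial_rate = g["denied"] / (g["denied"] + g["released"])
    skip_rate = g["skipped"] / g["released"]
    print(f"{name}: denied {denial_rate:.0%} of arrestees; "
          f"{skip_rate:.0%} of those released skipped")
```

Both groups show a 20% skip rate among the released, which is the sense in which the algorithm's predictions are equally well calibrated for both, even as it detains the X group at double the rate.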

        1. That was a good exercise in logic, and tough questions. To your final question, I would say no, at least from a moral perspective.

        2. I think most people who are unfamiliar with the criminal justice system will say that X should or shouldn’t be considered based on whether (a) they associate X with immoral behavior; and (b) whether they believe X is under the perpetrator’s control.

          Even if X were a perfect predictor of a person's likelihood of skipping bail, if it doesn't satisfy (a) and (b) above then it would be viewed as improper.

          1. It would be fun to apply that to male/female differences, i.e. sex discrimination.

            Males are more violent. Is that immoral? Can the male defendant claim that his violence was not under his control because of his sex?

            1. >Males are more violent. Is that immoral?

              No.

              >Can the male defendant claim that his violence was not under his control >because of his sex?

              No.

              But good questions, in that if a PSA score would ever allow a racial component of a score, it would also have to include one for sex.

            2. about like “black rage”

            3. Males are more violent. Is that immoral?

              I think people generally associate masculinity with immoral behavior.

              There’s an outcry against punishing blacks at a higher rate than whites, but no outcry against punishing men at a much higher rate than women.

        3. 1. We don’t know.

          Say we have 100 white defendants and 100 black ones. (Let’s not be coy). The algorithm, which doesn’t know the race of the defendants, releases 50 blacks and 60 whites. Twenty percent of each group of released individuals, 10 blacks, 12 whites, skip out. I don’t see that this tells us much about what would have happened had sixty blacks been released.

          Those who skip are false positives – they shouldn’t have been released. Those who are not released are negatives – no release – and some of them are probably false negatives – they could safely have been released. So the false positive rate is the same for whites and blacks. But that doesn’t mean the false negative rate is the same.

          2. No. It wouldn’t be fair at all.
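The "we don't know" point is what researchers call the selective-labels problem: outcomes are only observed for those who were actually released. The sketch below pairs the commenter's observed numbers with two invented counterfactual worlds to show that identical skip rates among the released are consistent with very different false-negative rates among the detained.

```python
# Observed data from the hypothetical above: 100 defendants per group,
# 50 Black and 60 white defendants released, 20% of each released group skips.
observed = {"black": {"released": 50, "skipped": 10, "detained": 50},
            "white": {"released": 60, "skipped": 12, "detained": 40}}

# What we never observe: how many of the detained would have appeared in
# court if released. Two invented counterfactual worlds, both fully
# consistent with the observed numbers above:
safely_releasable = {
    "world A": {"black": 35, "white": 28},
    "world B": {"black": 15, "white": 28},
}

for world, safe in safely_releasable.items():
    for grp in observed:
        fn_rate = safe[grp] / observed[grp]["detained"]
        print(f"{world}, {grp}: {fn_rate:.0%} of detainees were safely releasable")
```

Since the observed data cannot distinguish world A from world B, equal false-positive rates among the released tell us nothing about whether the detention decisions fell equally on both groups.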

    3. “they will consistently peg black Americans as a higher risk if that input is allowed”

      Well, if we want to use this type of empirical analysis to predict someone’s risk level, why shouldn’t race be allowed if it results in a more accurate prediction?

      1. Think of it like a credit score, where one race has a consistently lower credit rating, even though race isn’t used to determine credit ratings, and you’ll have your answer if you know how society reacts to that piece of information.

    4. they will consistently peg black Americans as a higher risk if that input is allowed

      Not sure I understand what you are saying. Is the fact that someone is black part of the input assessed? That, IMO, could be a serious Constitutional problem.

      Or are you saying that the output will be skewed disfavorably to blacks? In essence, a disparate impact argument. (Less problematic, IMO.)

  2. This risk assessment tool was from a charitable foundation. Others are sold. They are software packages that travel in commerce and by non-legal standards would be considered “products”.

    1. Yeah, that part of the opinion didn’t make sense.

      Just because the market is small (judiciaries), doesn’t make it not a product.

  3. Given the large number of cases it ought to be possible to determine how the algorithm’s results differ from those made by judges before it was used.

    For starters, compare crimes committed by those released under the two systems. Then consider detention costs, impact on those detained, and probably a few other things.

    I’m not sure the algorithm ought to be a black box. I guess there is a conflict between protecting IP and a desirable level of transparency, but given that there are competitive products how does a jurisdiction decide which to choose?

    1. From what I can tell from the article, before the algorithm was used judges relied primarily on monetary bail – the broken and unfair system that the constitutional amendment was designed to address. I suspect that there is no database of comparator cases under the “alternative” system – that is, risk-based/non-monetary but without this rating engine’s use.

      Well, other than the database of cases that were presumably used to train the rating engine, that is. But the training database can’t be used as a comparator. Absent major programming error, it will always show a match. It’s no good for validation.

  4. Why are we waiting until people are accused of crimes before we use these wonderful tools? Think of how much crime we could prevent if we ran everybody’s PSA score! People with scores above a certain level could be required to wear a badge or something, kept out of universities, and maybe restricted to certain neighborhoods. Maybe we could even send them to institutions to try to correct their criminal tendencies. Imagine how much safer the children would be!

    1. The Chinese are already ahead of you on that.

    2. Now there’s a satirical way of putting an objection I’ve been struggling to make. Denial of bail, and denial of release under this substitute for bail, are being used as a legal loophole to impose preventative detention when doing so would otherwise be plainly unconstitutional and unjust.

      It’s a very human temptation to say “Screw justice! I want my safety against those scary Bad ‘Uns!” Especially when people can deny to themselves that they are actually screwing justice. And in fact this lawsuit isn’t over the risk assessment system letting the accused guy flee from justice, but over the system not “properly” imposing preventative detention against a crime-yet-to-be-committed.

  5. The implications of this opinion are huge. It’s not that current product liability law doesn’t cover information systems because information systems don’t meet the legal definition of a product in New Jersey. It’s a much broader claim – that information systems are speech and hence constitutionally protected from liability.

    But if information systems are constitutionally inherently immune from liability because they are speech, how can drawings, and an architect or engineer who produces drawings, not be immune from liability because drawings are art? After all, architecture also communicates ideas.

    What makes an architect liable for ideas that result in buildings collapsing on people is that the architect’s ideas are not treated as pure ideas. They are translated into specific, concrete action. And instructions to commit a specific, concrete action, like solicitation to commit a specific, concrete action, are not protected by the First Amendment.

    Thus the developer of an autopilot system that results in a plane crashing should not be shielded from liability because an autopilot system constitutes speech.

    This is not, for constitutional purposes, any different. A system that determines a specific concrete action is action, not speech.

    There may well be good policy reasons why an information system of this nature should be shielded from liability, or why the threshold for liability shouldn’t permit finding liability in this case. Human behavior, even more so than the weather, is notoriously unpredictable, and character hard to judge. Judges have absolute immunity in no small part because they often make incorrect predictions in deciding who will and who won’t repeat offend.

    But these are policy considerations, not constitutional ones. If an information system directs a specific concrete action, what it directs is conduct, not speech. The proper analogy is solicitation, not abstract advocacy. It is not protected by the First Amendment.

    1. I’m not sure your analysis is entirely complete here, nor do I agree with the court’s analysis.

      An architect isn’t liable because he designs a bad building, he’s liable because the people who built the building relied on his design. Architects can design all sorts of awful buildings without liability; it is only once those bad designs actually get built that liability attaches.

      The builder is liable because of his conduct – he built a bad building. The architect is liable because of his conduct – he designed a bad building for the builder to build. The architect’s liability is contingent on the builder’s liability. If the building is sound then there’s no liability.

      The problem here is not that the product is itself not speech (as you argue). The problem is the state doesn’t bear any liability for making bad decisions on granting bail.

      1. The issue I was addressing wasn’t why he is or should be liable. I think there are good policy grounds to suggest he shouldn’t. My point was why what he did isn’t protected by the First Amendment.

        When an architect instructs people to carry out plans, as distinct from writing them in the abstract, what he does is more like solicitation than advocacy. At that point, it doesn’t matter to the First Amendment analysis if the building is actually built. If a flaw is spotted at an earlier stage, he might still be liable for something (or at least what he did isn’t protected by the First Amendment) even if the building isn’t built.

        1. Why isn’t the result reached by the software advocacy? The court has no obligation to rely on the result, it can ignore the result if it wants to.

          It certainly seems to toe the line between protected and unprotected speech.

          1. The Corleone family consigliere advises Michael Corleone to kill Moe Greene. Advocacy, right? Since Michael Corleone can disagree and decide not to, it’s simply information and advice, protected by the First Amendment. Right?

            Wrong. Once there is a nexus between speech and a specific, concrete course of action, if that course of action can be made illegal, so can speech advising or counseling to do it, not just speech ordering it.

            In the Paladin Press case over the book Hit Man, the book on how to perform a contract murder was entirely advice. Nobody was being required to follow it.

            Of course the underlying course of action here is different from a mafia consigliere or a contract murder manual. You could argue that in the specific context, advice to a judge whether to jail or release someone shouldn’t be illegal. But my point is that the judge made a very broad claim – information systems are inherently immune from product liability because what they do is speech by its nature. And that’s just not so.

      2. I’m not arguing the information system in the case is speech. The judge said that. I’m explaining why I think the judge was wrong. My argument is exactly that for First Amendment purposes it’s conduct, not speech.

    2. Just because your comment reminded me of a question I asked myself after watching the news not long ago. Why are the Boeing code writers not being being held responsible for the bad code that led to the two most recent crashes of their aircraft?

      1. Who knows? Maybe the aircraft was designed with inherently unstable aerodynamics, making aerodynamics designers the leaders in the liability race. Maybe the imperfect code makes the imperfect aerodynamics better, but not perfect. Then what?

        1. Well, the accidents happened to foreign airlines overseas, but I suppose had it happened in the U.S. we’d see a lawsuit or three, and maybe a criminal prosecution.

          1. As to it happening in the US: US pilots that fly this specific aircraft get training specific to that aircraft, including what to do if the stall-prevention system fails. From what I’ve seen elsewhere, a lot of the foreign pilots aren’t getting that extra training.

        2. Because it’s a hardware issue not a software issue. There is no redundancy in the actual sensor that provides the data the software uses, so no opportunity for the software to detect hardware failures resulting in false readings.

          By the way, if the pilot is not paying attention, and doesn’t know that particular aircraft well enough, a false negative from the sensor can be just as dangerous as a false positive.

      2. One reason might be that Boeing outsourced/offshored much of the code writing to India (they started moving some of that work back to the US a little over a year ago). I’ll guess suing some bottom-level people in India isn’t a very profitable legal endeavor.

        Another is that the code writers very likely wrote the code to the specifications of someone else. If the coders suggested “hey, maybe if the sensors disagree, we should do something else besides believing only one of the sensors” they likely would be simply ignored as non-experts on safety features.

        1. “If the coders suggested “hey, maybe if the sensors disagree, we should do something else besides believing only one of the sensors” they likely would be simply ignored as non-experts on safety features.”

          From what I’ve read, there are only 1 or two sensors on the aircraft. With just 1, there is no possibility of false reading detection at the software level and even with 2, it’s not really possible, because there is no way to tell which sensor gives the false reading.

          1. All 737s have two angle-of-attack sensors (the ones that appear to be the problem). The 737 MAX 8 system only used one of them at a time. It did not check if they disagreed. So if the sensor was giving bad readings (but had not failed completely), the software still used it.
            As someone who actually worked on fault-tolerant systems, to me that seems surprising, but that was one of the two basic problems – the other was allowing the automatic override to tilt the stabilizers a much larger amount than the original design.
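The redundancy point generalizes: two sensors can reveal a disagreement but not which sensor is wrong, while three allow a majority (median) vote that outvotes a single faulty reading. A minimal sketch, with an assumed tolerance and made-up readings:

```python
def check_two(a: float, b: float, tol: float = 2.0) -> bool:
    """With two readings you can flag disagreement, but you cannot
    tell which sensor is wrong. Returns True if the readings agree
    within the (assumed) tolerance."""
    return abs(a - b) <= tol

def vote_three(a: float, b: float, c: float) -> float:
    """With three readings, the median outvotes one faulty sensor."""
    return sorted([a, b, c])[1]

# One sensor drifting high (values invented for illustration):
print(check_two(5.0, 40.0))        # False: fault detected, source unknown
print(vote_three(5.0, 40.0, 5.2))  # 5.2: the faulty reading is outvoted
```

This is the standard triple-redundancy argument from fault-tolerant design; it says nothing about why a particular airframe shipped with fewer sensors in the control loop.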
