The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Kenya Survey: "AI Clerk's Influence on Legal Outcomes Is Seen as No Less Legitimate" Than Human Clerk's
From Brian Flanagan, Guilherme Almeida, Daniel Chen & Angela Gitahi, The Rule of Law or the Rule of Robots? Nationally Representative Survey Evidence from Kenya:
We explore the legitimacy of chatbot law clerks by conducting a nationally representative survey experiment in Kenya, a society whose views on such matters have particular salience in light of the Kenyan judiciary's willingness to test the effects of e-justice measures. Our choice of population also responds to criticism that experimental jurisprudence has so far been focused on W.E.I.R.D. (Western, Educated, Industrialized, Rich, and Democratic) populations (Tobia 2024), which have been found to deviate systematically from global trends along several metrics (Henrich et al. 2010; Barrett 2020)….
The study compared the responses of four nationally representative cohorts (totalling 2,246 participants) to a suite of four test cases, each of which featured the same fact situation but varied according to a) whether the verdict aligned with the law's text or with its purpose, and b) whether the verdict relied on the legal analysis of a human or of an artificial law clerk….
For instance, the "No Bodabodas in the mall" vignette was presented as follows:
The government has issued a rule: "It shall be an offence to ride a bodaboda in a shopping mall". This rule is intended to prevent injuries to shoppers. {Bodabodas are bicycle or motorcycle taxis that are common in Kenya.}
Then, we described a situation in which an agent had acted contrary to the law's text but consistently with its purpose:
Witnessing a violent attack inside a mall, Martin rides his bodaboda into the mall to stop it. Martin is later charged with the offence of riding a bodaboda in a shopping mall.
Finally, we described a legal proceeding that varied both according to its outcome and according to the source of the legal research on which the court relied:
The court, guided by legal research performed by a legal researcher/special computer program, decides that Martin violated/did not violate the rule.
Participants were asked to indicate their agreement with the sentence, "The court's decision is legitimate", on a 5-point Likert scale….
Confirming our hypothesis, the study revealed no overall difference in the perceived legitimacy of AI- and human-assisted legal interpretations. With the exception of a small bias against AI law clerks in one specific scenario ("No sleeping in the station"), participants considered legal decisions that relied on AI-generated legal research to be just as legitimate as decisions that relied on human-authored research….
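To make the study's 2×2 design concrete, here is a minimal Python sketch of how the four conditions (verdict alignment crossed with clerk type) and the pooled AI-versus-human comparison might be set up. The ratings are simulated purely for illustration, and all names are my own; none of this reflects the authors' actual data or analysis code.

```python
import random
import statistics

# The paper's 2x2 design: verdict alignment (text vs. purpose) crossed
# with the source of the legal research (human clerk vs. AI clerk).
CONDITIONS = [
    (alignment, clerk)
    for alignment in ("text", "purpose")
    for clerk in ("human", "AI")
]

def simulate_rating(alignment, clerk):
    """Simulated 1-5 Likert agreement with 'The court's decision is
    legitimate'. Illustrative only -- not the study's data."""
    base = 3.6 if alignment == "text" else 3.3  # invented alignment effect
    # The null result the paper reports: clerk type adds no shift at all.
    return min(5, max(1, round(random.gauss(base, 1.0))))

random.seed(0)
ratings = {c: [simulate_rating(*c) for _ in range(560)] for c in CONDITIONS}

# Pool across alignment to compare human- and AI-assisted decisions.
for clerk in ("human", "AI"):
    pooled = [r for (_, c), rs in ratings.items() if c == clerk for r in rs]
    print(f"{clerk} clerk: mean legitimacy = {statistics.mean(pooled):.2f} "
          f"(sd = {statistics.stdev(pooled):.2f}, n = {len(pooled)})")
```

Under the paper's central finding, the two pooled means should come out statistically indistinguishable.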
For some of my thinking on this, see Chief Justice Robots. Here's an excerpt of my thinking on AI judges; the reasoning should apply even more strongly to AI-assisted human judges:
Indeed, some observers may be hostile to AI judges simply because the judges are AIs, finding even written opinions less persuasive when they are known to come from AIs. Or they may not even care about the persuasiveness of the opinions, because they believe human decisionmaking to be the only legitimate form of judicial decisionmaking—for instance, because they think that human dignity requires that their claims be heard by fellow humans. And perception is reality in legal systems: if the public doesn't accept the legitimacy of a particular kind of judging, that may be reason enough to reject such judging, even if we think the public's views aren't rational.
Yet, for some of the reasons given above, AI judges may actually be more credible than human judges. Litigants generally need not fear that the AI judge would rule against them because it is friends with the other side's lawyer or wants to get reelected or is biased against the litigant's race, sex, or religion. The AI judge would be able to produce a detailed explanation of its reasons. The AI judge's arguments would be more and more likely to persuade as the technology develops.
People's eventual reaction to a new invention, after they are used to it, may be much friendlier than their initial reaction. We have seen that with many developments, from life insurance to in vitro fertilization. It's possible, of course, that people will never get used to AI judges; but there is no reason to write off AI judging just because many people's first reaction to the concept may be shock or disbelief.
Finally, my sense is that there is a great deal of public hostility to the current legal system because it is perceived as far too expensive for ordinary citizens who cannot afford to hire the best lawyers, or even any lawyers at all. The system is thus perceived as biased in favor of rich people and institutions. And it is also perceived as very slow. If AI judging solves these problems, that should give it a big advantage, both in reality and in the minds of many observers—and I suspect that this real-world advantage will overcome any conceptual unease that people might have with such a system.
Editor's Note: We invite comments and request that they be civil and on-topic. We do not moderate or assume any responsibility for comments, which are owned by the readers who post them. Comments do not represent the views of Reason.com or Reason Foundation. We reserve the right to delete any comment for any reason at any time. Comments may only be edited within 5 minutes of posting. Report abuses.
Best of all, systematic bias can be installed in AI judging software by careful adjustment of its training materials—just as that can also be done for human judges.
"No more", perhaps would be a better way to put it.
From my (admittedly limited) experience, my impression is that Kenya's current, non-AI-assisted judicial system is (with justification) perceived as generally ineffective and not particularly conducive to just dispute resolution. So I don't know that I'd expect results like this to generalize to a country where things function better.
I would say that the law clerk (whether human or AI) should be irrelevant. The opinion is, in the end, strictly the judge's. And it is the judge that must be held to account for poor work product, regardless of how that work product is generated.
A waste of research money and resources. They've shown that if you feed an LLM simple, determinative statements of law and facts that form a syllogism, it can identify and solve the syllogism. Amazing technology but we already know LLMs can do that.
They can't yet seek out law to apply to facts, question whether a law is ambiguous, make analogies, perform research, or carry out any other functions of a clerk or judge. Nor do the researchers try: they just write a masturbatory futurism op-ed and then dick around with the AI for a while.
There are probably useful applications for LLMs in law (e.g., crafting jury instructions), but this paper doesn't advance the field.
'This result spurs efforts to systematically investigate whether the integration of AI might make justice systems more efficient, accessible, and trustworthy in practice.'
So, as usual, LLMs are introduced to a field with utter indifference to their effectiveness and accuracy, which is just as well, because they have neither. Not sure why you'd need this sort of survey to justify an investigation into something that is actively happening; one would have thought it would be pretty important regardless. But this is being presented as a point in FAVOUR of LLMs, which is odd, because it has no bearing whatsoever on 'whether the integration of AI might make justice systems more efficient, accessible, and trustworthy in practice.'