The Volokh Conspiracy

Mostly law professors | Sometimes contrarian | Often libertarian | Always independent

AI in Court

Will AI Make Law Productive?


This is my third and final installment summarizing the arguments in my draft article The Cost of Justice at the Dawn of AI. In the first, I reviewed the implications of Baumol's cost disease for the legal sector. Baumol recognized that if productivity in any sector improves more slowly than productivity in the economy as a whole, the goods or services from that sector will become relatively more expensive. In the second, I assessed whether the legal sector has stagnated in this way. This turns out to be difficult or impossible to measure conclusively, because it is hard to assess whether legal work is improving in quality. But crude measures like consumer price indices suggest stagnation, and rapidly decreasing trial rates provide further evidence. It should not be surprising that fewer cases, civil and criminal, make it to trial if legal process is getting more expensive.
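Baumol's mechanism can be made concrete with a back-of-the-envelope calculation (the growth rates below are hypothetical, chosen only for illustration): if wages rise with economy-wide productivity, but a sector's own productivity rises more slowly, the unit cost of that sector's output climbs steadily in relative terms.

```python
# Toy illustration of Baumol's cost disease. All numbers are hypothetical.
# Wages are assumed to track economy-wide productivity growth; the relative
# unit cost of a sector's output is then wage / sector productivity.

def unit_cost_after(years: int, wage_growth: float,
                    sector_productivity_growth: float) -> float:
    """Relative unit cost of a sector's output after `years`,
    normalized to 1.0 at year zero."""
    wage_level = (1 + wage_growth) ** years
    productivity_level = (1 + sector_productivity_growth) ** years
    return wage_level / productivity_level

# Suppose economy-wide productivity (and thus wages) grows 2% per year,
# while the stagnant sector improves only 0.5% per year.
cost = unit_cost_after(30, 0.02, 0.005)
print(f"Relative unit cost after 30 years: {cost:.2f}")  # roughly 1.56
```

On these assumed rates, the stagnant sector's services become about 56 percent more expensive relative to everything else over three decades, even though nothing about the sector itself has gotten worse; it has merely improved more slowly.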

Will the trial continue to vanish? Or might we witness an increase in trial rates? The answers to these and other important questions depend on the future path of legal productivity. If legal productivity stagnates, then legal services will become more expensive. Fewer litigants will be willing to bear the cost of trial, and ever fewer cases will be tried. This matters not only because the trial may be seen as the canonical exhibition of our system of justice. It matters also because settlement in the shadow of trial may be only a crude approximation of the results of a hypothetical ideal legal system. Settlements have some advantages over trial, reducing randomness in the selection of judges and juries and in their decisions. But the more expensive trial is, and the more asymmetric such variables as trial costs and risk aversion are across the parties, the less effective the settlement market will be as a tool of justice. The legal system may react, for example by allowing more class actions. But the reluctance of courts to allow heterogeneous classes reflects that class actions too are a crude device.

If, however, AI improves legal productivity relative to the productivity of the overall economy, the legal system may be better able to achieve its aims. It will be better able to make distinctions, both at trial and in the shorter shadows of the law in which settlements would be negotiated. This is intuitively obvious in the most optimistic scenarios, in which robojudges effectively sift through evidence and reach judgments that are close to what groups of carefully deliberating, representative human actors would decide. But it's also true in less world-changing scenarios, in which large language models ease the work of lawyers, but humans remain essential to legal decision-making. As long as the productivity increases of the legal sector exceed those in other sectors—some of which may also benefit from such models, as well as from other technologies, such as robotics—then legal services will become cheaper, and access to justice, greater.

Can large language models and other AI tools materially increase legal productivity within the next decade or two? The article details both the negative and affirmative cases. The negative case highlights that basic problems, such as the tendency of large language models to hallucinate, remain outstanding research questions. Even if those can be overcome, ChatGPT's legal writing is clichéd and dull, and its accuracy on complex legal questions is limited. The affirmative case suggests that these problems may be overcome by a combination of hardware and software improvements.

Perhaps the greatest uncertainty lies in the potential for synthetic data to supplement human-produced writing and improve model training. Otherwise, within a few years, we will run out of data on which the models may be trained. Eventually, investments in hardware and the continuation of Huang's Law will make it possible to generate a multiple of human-written texts. The question then becomes whether this material will be sufficiently high in quality to allow for improvements from one training generation to the next. I suspect that it will, at least if techniques like chain-of-thought prompting are used to improve on raw model output. As long as a large language model can identify writing that is better than what it is able to produce, it should be able to improve. But it may be a long time before such gradual improvements enable production of legal writing at the highest level, or perhaps even at the level of an average lawyer. The degree of progress in the near future thus remains uncertain, even if one accepts as inevitable that some form of artificial intelligence will eventually at least equal the most skilled humans in every intellectual domain.

If law has stagnated but might become productive, that leaves a range of possible scenarios for the short-term future. The last part of my article explores how the law might adapt. Continued stagnation suggests such remedies as lowering the requirements for class actions, encouraging greater use of arbitration, and embracing access-to-justice measures such as facilitated self-representation. A productive turn, on the other hand, might suggest moving in the opposite direction, leaning on legal procedures that ensure individualized justice. One way that the law might adjust to both possible futures is to rely on and develop provisions that explicitly or implicitly take the level of legal costs into account in determining how much legal procedure should be granted.