The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Will AI Make Law Productive?
This is my third and final installment summarizing the arguments in my draft article The Cost of Justice at the Dawn of AI. In the first, I reviewed the implications of Baumol's cost disease for the legal sector. Baumol recognized that if the productivity of any sector improves less than the productivity of the economy as a whole, the goods or services from that sector will become relatively more expensive. In the second, I assessed whether the legal sector has stagnated in this way. This turns out to be difficult or impossible to measure conclusively, because it is hard to assess whether legal work is improving in quality. But crude measures such as consumer price indices suggest stagnation, and rapidly declining trial rates provide further evidence: it should not be surprising that fewer cases, civil and criminal, make it to trial if legal process is becoming more expensive.
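Baumol's mechanism is just compound growth of a relative price. A toy sketch (my own illustrative numbers, not figures from the article): if economy-wide productivity grows 2% a year while legal-sector productivity grows only 0.5%, and wages everywhere track economy-wide productivity, the relative price of legal services drifts steadily upward.

```python
def relative_cost(years, econ_growth=0.02, legal_growth=0.005):
    """Relative price of the stagnant sector's services after `years`,
    assuming its wages must keep pace with economy-wide productivity.
    Growth rates here are hypothetical, chosen for illustration."""
    return ((1 + econ_growth) / (1 + legal_growth)) ** years

# After 30 years, legal services cost roughly 56% more relative to
# everything else, with no change in what lawyers actually do.
print(round(relative_cost(30), 2))  # -> 1.56
```

The point of the sketch is only that small annual gaps compound: even a modest productivity shortfall makes the stagnant sector's output markedly pricier over a generation.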
Will the trial continue vanishing? Or might we witness an increase in trial rates? The answers to these and other important questions depend on the future path of legal productivity. If legal productivity stagnates, legal services will become more expensive, fewer litigants will be willing to bear the cost of trial, and ever fewer cases will be tried. This matters not only because the trial may be seen as the canonical exhibition of our system of justice. It matters also because settlement in the shadow of trial may be only a crude approximation of the results of a hypothetical ideal legal system. Settlements have some advantages over trial, reducing the randomness in the selection of judges and juries and in their decisions. But the more expensive trial becomes, and the more asymmetric variables such as trial costs and risk aversion are across the parties, the less effective the settlement market will be as a tool of justice. The legal system may react, for example by allowing more class actions. But the reluctance of courts to certify heterogeneous classes reflects that class actions, too, are a crude device.
If, however, AI improves legal productivity relative to the productivity of the overall economy, the legal system may be better able to achieve its aims. It will be better able to make distinctions, both at trial and in the shorter shadows of the law in which settlements would be negotiated. This is intuitively obvious in the most optimistic scenarios, in which robojudges effectively sift through evidence and reach judgments close to what groups of carefully deliberating, representative human actors would decide. But it is also true in less world-changing scenarios, in which large language models ease the work of lawyers but humans remain essential to legal decision-making. As long as the productivity increases of the legal sector exceed those in other sectors—some of which may also benefit from such models, as well as from other technologies, such as robotics—then legal services will become cheaper, and access to justice, greater.
Can large language models and other AI tools materially increase legal productivity within the next decade or two? The article details both the negative and affirmative cases. The negative case highlights that basic problems, such as the tendency of large language models to hallucinate, remain open research questions. Even if hallucination can be overcome, ChatGPT's legal writing is clichéd and dull, and its accuracy on complex legal questions is limited. The affirmative case suggests that these problems may be overcome by a combination of hardware and software improvements.
Perhaps the greatest uncertainty lies in the potential for synthetic data to supplement human-produced writing and improve model training. Otherwise, within a few years, we will run out of fresh data on which the models may be trained. Eventually, investments in hardware and the continuation of Huang's Law will make it possible to generate many multiples of the existing stock of human-written text. The question then becomes whether this material will be of sufficiently high quality to allow for improvements from one training generation to the next. I suspect that it will, at least if techniques like chain-of-thought prompting are used to improve on raw model output. As long as large language models can identify when a piece of writing is better than what they themselves are able to produce, they should be able to improve. But it may be a long time before such gradual improvements enable production of legal writing at the highest level, or perhaps even at the level of an average lawyer. The degree of progress in the near future thus remains uncertain, even if one accepts as inevitable that some form of artificial intelligence will eventually at least equal the most skilled humans in every intellectual domain.
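The "judge better than you generate" idea above can be sketched as a best-of-n selection loop: even if a model's average output is mediocre, a reliable judge of quality lets you keep only above-average samples as training material for the next generation. This is a statistical toy, not a real training pipeline; `generate` here just draws a quality score at random as a stand-in for sampling a draft, and the "judge" simply picks the maximum.

```python
import random

def generate(rng):
    """Stand-in for sampling one draft and measuring its quality.
    Hypothetical: quality is a standard normal draw."""
    return rng.gauss(0.0, 1.0)

def best_of_n(rng, n):
    """Judge-and-select: generate n drafts, keep the best one."""
    return max(generate(rng) for _ in range(n))

rng = random.Random(0)  # seeded for reproducibility
trials = 10_000
avg_single = sum(generate(rng) for _ in range(trials)) / trials
avg_best_of_8 = sum(best_of_n(rng, 8) for _ in range(trials)) / trials

# Selecting the best of 8 drafts shifts average quality well above
# the average single draft, without the generator itself improving.
print(avg_single < avg_best_of_8)  # -> True
```

Whether this translates into generation-over-generation model improvement depends, as the post notes, on the judge staying reliable as outputs get better—a much stronger assumption than the toy makes.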
If law has stagnated but might become productive, that leaves a range of possible scenarios for the short-term future. The last part of my article explores how the law might adapt. Continued stagnation suggests such remedies as lowering the requirements for class actions, encouraging greater use of arbitration, and embracing access-to-justice measures such as facilitated self-representation. A productive turn, on the other hand, might suggest moving in the opposite direction, leaning on legal procedures that ensure individualized justice. One way that the law might adjust to both possible futures is to rely on and develop provisions that explicitly or implicitly take the level of legal costs into account in determining how much legal procedure should be granted.
"The degree of progress in the near future thus remains uncertain, even if one accepts as inevitable that some form of artificial intelligence will eventually at least equal the most skilled humans in every intellectual domain."
If we're in an intellectual arms race with AI, then fight fire with fire - use AI to enhance the human brain.
As an aside, I proposed a Turing-like procedure to speed up trials and improve their accuracy in 2011; see: "The Turing Test and the Legal Process", Information & Communications Technology Law, vol. 21, no. 2 (June 2012), pp. 113-126
Link to my paper here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1978017
To a serious student of law and the American Founding, the chief difference BY FAR is that in, say, Massachusetts during the ratification of its constitution and the US Constitution, the average citizen could understand and know the law. It had a coherence and a development from first principles that the housewife and the farmer could grasp. So now the Supreme Court has said what I don't know anyone who doubted: that Trump can't be kept off the ballot. No normal person would ever say that a person who respects the Constitution would vote for a man when the vote itself, in their eyes, violated the Constitution!
The Pareto Principle, or 80/20 rule, says that 20% of the situations take up 80% of the cost and effort. Law might be an exception to that rule.
Automation in general tends to work best for the typical cases. A potential difficulty with automating high-liability fields like law is that losses and liabilities from the occasional mistake could end up costing far more than the savings from speeding up the typical cases.
And in the current state of AI tools, mistakes seem to occur far more commonly than just occasionally.
I look forward to my 2027 career drafting high quality bespoke training data for the LLM which replaces me in my job as a lawyer.
Work harder, or by golly you'll go up for malpractice by proxy when that LLM posts spurious case cites!
"Will AI Make Law Productive?"
No, Artificial Stupidity will not make the practice of law more productive.
Oh, that's really interesting. If law acquired net-positive productivity, it would become an element of many day-to-day jobs rather than just an annoying but necessary exceptional circumstance. You could file a test case for every minor decision.
7 comments, because lawyers generally hate STEM subjects. AI is just a marketing phrase to them, something to argue over.
My study started over 40 years ago with a BS in computer science. One thing under the AI umbrella will help law, namely expert systems. But there isn't a speck of intelligence in them, excellent as they can be.