The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Have You Used Generative AI to Represent Yourself in Court Filings? Tell Me Your Story
I know many nonlawyers are using generative AI to represent themselves in litigation. Some of them I learn about because courts spot hallucinated citations in their filings; but I expect that many others are much more careful, and that some may have actually been relatively successful. I doubt that generative AI is good enough right now to match a competent trained lawyer's written work all by itself. But for many self-represented litigants, the question is how AI—with the litigant's checking and editing—compares to just their own untrained selves, not how AI compares to a competent but unaffordable lawyer.
I'd like to learn more about this; if you or someone you know has litigated as a layperson with AI help, I'd love to hear the details. (If I write about it, I will not publish your name or other identifying details, unless you want me to.) What did you find the AI did well? What did you find it did badly? What did you do to try to improve the AI drafts, and do you think that worked out well? I'd like to know all this whether you won or lost.
Please e-mail me at volokh at stanford.edu if you have something you'd like to pass along. Thanks!
I posted this video a while back ... kinda funny.
https://www.youtube.com/shorts/MkmfZPt-gaw
More info here. https://www.theverge.com/news/646372/ai-lawyer-artificial-avatar-new-york-court-case-video
I feel like this post is related enough to hijack in order to ask a question that's been on my mind for a while:
Is anyone else encountering this new trend? The court issues a ruling on a motion. The litigant (not a lawyer) then runs the order through an LLM like ChatGPT, Claude, or Grok and asks it to identify everything wrong with the order. The LLM dutifully spits out something. The litigant then submits that output back to the court, asking the court to reconsider the ruling, because the LLM says the court was wrong.
If you have encountered this: how have you reacted as counsel? How has the court reacted? How much time needs to be wasted saying "an LLM is not even an attorney, and it definitely isn't an appellate court"?
If judges start using this to generate their opinions, why bother with courts? Send all the info into slot A, and receive the answer from slot B. No lawyers and no judges, so cost is reduced.
Eugene, this is a good start. I hope your plea generates a lot of data. But I’m more interested in a hoped-for follow-up:
Why are lawyers using this? Some possible answers:
- Better quality work product than I can produce myself;
- Enhancing my apparent productivity;
- Enhancing apparent productivity across the firm (management says to use it);
- I was short on time with a brief due;
- Everyone else is doing it.
A bit off topic, but I am reminded of the day when some people were opining that if a computer ever beat the best humans at chess (or pick your favorite intellectual pursuit) it would mark a failure of the human intellect.
My take at the time, which is unchanged, is that as tool-building animals, if we failed to build a tool that accomplished the task at hand better than what we could do without the tool, that would be the true failure of our species.
In that spirit, it's very likely that AI will eventually write better filings than actual attorneys, perhaps making them obsolete or at least a relatively inexpensive commodity service.
Five years ago, I was convinced that every truck driver, cab driver, bus driver, and delivery driver would be put out of work by the activities of us software developers. I certainly didn't foresee that junior software developers would be the first "victims" of this trend. The question now is whether attorneys or truck drivers will be replaced first.
I do not use AI for court filings, but I use it for legal work all the time, and I have a very good idea of what it does well and what it does not do well.
I use Gemini to review investment documents. Typical docs for a Series A or Series B investment total well over 100 pages of legalese. SAFEs for pre-seed rounds are supposed to be standard and simple, but an appalling number of startups hire lawyers who decide to earn their fee by customizing the SAFE, and then you need to do a legal review of those too.
I ask three kinds of questions:
1. Please convert these legal documents to a summarized term sheet for me to review.
2. Please tell me if there is anything problematic in these documents that I should be concerned about as an investor (in the current round / in a previous round).
3. Please confirm that there is a clause for X (e.g., liquidation preference), summarize the terms, and tell me if there are any issues with it.
For SAFEs I also always ask "Please review the capitalization clause and confirm that it matches exactly what I would expect based on whether the SAFE is pre-money or post-money."
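(For anyone who wants to script this kind of review rather than paste documents into the chat window, a rough sketch using Google's google-generativeai Python package might look like the following. The API key, model name, file path, and prompt wording are illustrative placeholders, not the exact workflow I run.)

```python
# Rough sketch: running the standard review prompts against deal documents
# via Google's google-generativeai package. The API key, model name, and
# file path below are placeholders; the prompts mirror the ones above.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-pro")

# Load the deal documents as plain text (PDF extraction not shown here).
with open("series_a_docs.txt", encoding="utf-8") as f:
    docs = f.read()

prompts = [
    "Please convert these legal documents to a summarized term sheet "
    "for me to review.",
    "Please tell me if there is anything problematic in these documents "
    "that I should be concerned about as an investor in the current round.",
    "Please confirm that there is a clause for liquidation preference, "
    "summarize the terms, and tell me if there are any issues with it.",
]

for prompt in prompts:
    response = model.generate_content(f"{prompt}\n\nDOCUMENTS:\n{docs}")
    print(response.text)
    print("---")
```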
Gemini has caught many stupid drafting errors by lawyers. I send an email back asking "What does Clause X mean? I can't figure out what you are trying to do."
"Our lawyer put it in."
"Well, ask him."
"Oh... our lawyer wants to change it now."
Gemini also catches many problematic issues - contracts without choice of law clauses, contracts with unreasonably limited rights for minority shareholders, contracts without pre-emption rights, etc.
Gemini also has a tendency to hallucinate standard contract clauses into contracts that do not have them. This makes perfect sense - it has been trained on millions of contracts with arbitration clauses so when it gets a contract with no dispute resolution section it sometimes hallucinates an arbitration clause into the contract. So whenever I ask "Is there a clause X in the contract?" I ask it to quote the clause and then do a search in the contract to find the exact clause. Trust but verify.
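(The quote-and-search check is easy to mechanize, by the way. Here's a minimal, stdlib-only Python sketch of the idea; the file name and quoted clause are just examples, and the matching is whitespace-normalized so line wrapping in the source document doesn't cause false misses.)

```python
# Minimal "trust but verify" sketch: confirm that a clause the model
# quoted actually appears verbatim in the contract text. Whitespace is
# normalized so line wrapping in the source document doesn't matter.
import re

def normalize(text: str) -> str:
    # Collapse every run of whitespace to a single space, and case-fold.
    return re.sub(r"\s+", " ", text).strip().lower()

def clause_in_contract(quoted_clause: str, contract_text: str) -> bool:
    # True only if the (normalized) quoted clause appears in the contract.
    return normalize(quoted_clause) in normalize(contract_text)

# Example usage: "safe.txt" and the quoted clause are placeholders.
with open("safe.txt", encoding="utf-8") as f:
    contract = f.read()

quoted = ("Any dispute arising under this agreement shall be "
          "settled by binding arbitration.")
if not clause_in_contract(quoted, contract):
    print("Quoted clause NOT found in contract text -- likely hallucinated.")
```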
I have also used Gemini to analyze complex questions about which transactions with a self-directed IRA are prohibited and which ones are subject to Unrelated Business Income Tax (UBIT). There again, mixed results.
I used this to design a complex structure for an international lending project using money from an IRA (since who wants to pay ordinary income tax rates on interest?). Part of the transaction involves using the IRA money, held in a US bank account, as collateral for a third party to borrow money in another country with an unstable currency. This acts as a hedge against devaluation. Gemini flagged this as a prohibited transaction. But the IRS regulations it pointed to are all in a section about self-dealing and seem to prohibit using IRA money as collateral for a personal loan, which is very different. I am now asking a lawyer to determine whether this is or is not a prohibited transaction. So you still need lawyers to double-check Gemini. But I am asking much more intelligent questions, and by using Gemini first I am saving a huge amount of time on back-and-forth discussion to figure out what to do and how to do it.
Wouldn't NotebookLM be a better product from Google for this, as an LLM that you can feed data into with reduced (not zero) hallucinations? I think using it as a tool for small tasks is smart. Obviously, cites must be checked, and you can always ask LLMs about their own internal logic and why they are structuring arguments the way they are, then tailor the output or argument...
Lawyer here, so this isn't really responsive to the question.
I tried using Microsoft Copilot as an aid in writing responses to various motions. It was okay for finding the appropriate rule or statute, but its interpretation of case law was creative. The cases were real, but they often didn't say what Copilot said they said, and when I asked for quotes from the cases, they were often not to be found anywhere in the actual opinions. Copilot also asked if I wanted to put its output in the form of a motion or response to a motion, which I imagine might lead nonlawyers to think it's actually capable of writing a proper brief suitable for filing.