The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
ChatGPT Coming to Court, by Way of Self-Represented Litigants
I posted earlier today about a lawyer's filing unchecked ChatGPT-generated material, complete with hallucinated cases. But even if lawyers manage to avoid that, I'm sure that many self-represented litigants will be using ChatGPT, Bard, and the like, and won't know to properly check the results.
Indeed, a quick CourtListener search turned up three self-represented filings (1, 2, 3) that expressly noted that they were relying on ChatGPT. That suggests that there are many more that used ChatGPT but didn't mention it. (To my knowledge there's no requirement to disclose such matters.)
Note also that self-represented litigants are quite common: Even setting aside prisoner filings (since I'm not sure how many prisoners have access to ChatGPT), in federal court, "from 2000 to 2019, … 11 percent of non-prisoner civil case filings involved plaintiffs and/or defendants who were self-represented." And I expect this would be even more common in state courts, for instance in divorce and child custody cases, where I'm told self-representation is even more common. (Family court plaintiffs might feel like they need to file for divorce, even though they can't afford a lawyer, and defendants might get sued for divorce or over child custody disputes even though they don't have any money that the plaintiff can recover.) And even not limiting matters to such categories of cases, it appears that "The caseload of most California judges now consists primarily of cases in which at least one party is self-represented."
See also this post from late February for an early query along these lines, in which one commenter did mention that he had used ChatGPT-3 for a state court filing; and see this post from January for the DoNotPay traffic-ticket-litigation story. If you know of any cases involving pro se litigants using ChatGPT and similar programs, please let me know.
Editor's Note: We invite comments and request that they be civil and on-topic. We do not moderate or assume any responsibility for comments, which are owned by the readers who post them. Comments do not represent the views of Reason.com or Reason Foundation. We reserve the right to delete any comment for any reason at any time. Comments may only be edited within 5 minutes of posting. Report abuses.
And what if the pro se litigants are bright enough to actually check to see if the cases exist?
There has got to be data on this
And the larger issue is that this represents the extent to which the bar no longer serves the public need/interests.
The pro se litigants are going to use the available tools. If the legal profession wants cite-checking, then it should provide the tools for that, and not blame the pro se litigant.
It exists. The Crivella West software is exactly that.
When the bar requirements make legal representation so expensive that most of the public can't afford it, it doesn't serve the public interest.
It serves the private interest by limiting competition.
You can be sure the legal profession's capture of regulatory control will move to limit ChatGPT in courts, even if its tendency to hallucinate is fixed.
For what it’s worth, I think it’s been pretty conclusively established at this point that DoNotPay was based on a mechanical Turk functionality, rather than any kind of actual AI.
Action in REM vs. action in RAM, perhaps.
Mr. D.
"Ladies and gentlemen of the jury, I'm just an AI. I was created by my programmers to process information I gather. I have no independent identity of my own. But this much I do know - my client deserves at least one hundred thousand dollars in compensatory damages and one million dollars in punitive damages. Thank you."
And if there are litigants, there are judges and clerks. Can you spot the fake judicial opinion?
But wait, if a hallucinated judicial opinion slips through, doesn't it become law?
[silly joke deleted]
What was it?
Amazing, It's gone from my memory now. Too bad, it was a very good one.
Is that skewed by small claims cases? It seems reasonable for those to be self-represented.
Having graded thousands of undergraduate papers, some runs of which had all the hallmarks of automatic generation, I actually see the social merit of this scapegrace cheating. Where the system of education (or justice) encourages rote and systematic thinking and text generation, well, there's an app for that.
Mr. D.
Federal diversity jurisdiction depends on the size of the damages sought, not the size of the parties. See King Kong v. Godzilla.
Artificial intelligence is still an inferior product compared to higher life forms. See Daleks v. Cybermen.
Prosecution not estopped from contesting defendant's insanity plea simply because the arresting officer was also insane. See Batman v. Joker.
Altering the deal without procedural due process is not the droids we are looking for (Hutt, J., dissenting). See Calrissian v. Vader.
Anything repeated three times need not be presumed true, but the bell cannot be unrung. See Bellman v. Snark and Baker v. Boojum.
Filing against a pseudonymous defendant is at the discretion of the trial judge. See Potter v. He-Who-Must-Not-Be-Named.
One way to think about GPT is that it is a different way to search the Internet. I took the question, "I have been accused of shoplifting. I will appear in court pro se. Advise me on my defense." and used that for a Google search, and as a ChatGPT prompt. Google gave me links to law firms offering general advice. GPT gave me the following. I think it did a pretty good job. Compared to defendants in court sans any advice, it is brilliant.
Someone should make self serve kiosks for typical and routine matters such as mutual custody change filings. And e-file them where supported.
I mean, none of that is bad advice, but except to the extent that it boils down to, “hire a lawyer”, it’s not really actionable either.
Nonsense. The repeated advice to hire a lawyer reflects the fact that a lot of the input texts will have been written by lawyers, but quite a bit of it is "actionable" without a lawyer.
Someone should make a black LLM, that way if the courts reject its arguments they can be canceled for being bigoted.
I worked with a 1950s era computer whose circuitry was all capacitors, resistors, and diodes; no transistors or chips. But the cards themselves were some black and some white. We told people it had integrated circuits.
For self-represented people, there should probably be a transitional period where people are allowed to get off with a warning rather than sanctions until just how bad this stuff is sinks in. And court websites and guides for self-represented parties should have new content about it in bold letters saying that it is known to just make shit up that may look impressive but is totally bogus, and that using it may lose you your case. Perhaps the "you will be hosed" language is appropriate here.
And after some transitional period, they will need to start sanctioning.
I suspect judges are going to need more clerks and assistants to catch completely made-up content in legal filings, pretty much starting immediately.
At this point in time, a pro se guy isn't going to be sanctioned for relying on ChatGPT. The worst consequence he will face is to have his case dismissed, but that will be for the lack of legal support for his position rather than as a punishment.
As a non-lawyer, it occurred to me while reading this article that using false citations in front of a court is bad. I think what might be even worse and more sinister would be the subtle altering of existing obscure caselaw via the "new" emerging technologies. What safeguards are in place to ensure electronically stored caselaw can be trusted?
Most of the existing caselaw is in databases that are independent and well-managed, like Westlaw and Lexis. Which, I believe, they provide free to federal courts.
Plus, judicial opinions are filed with the clerk of the court, so there is always a record to check.
And if it's obscure, it is likely not controlling, and the judge can always reject it.
Quip pro quo, if you will, has always been a pet peeve of mine. Who doesn't want to see substance win over style?
As long as we live in a world where validation takes on the order of polynomial processing time, while invention takes exponential time, the AI shouldn't be much of a threat. The judicial system should be able to create validation tools that will be at least as capable as the creational tools.
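A minimal sketch of what such a validation tool's first stage might look like: extract anything shaped like a case citation from a filing so each one can be checked against a real reporter database (CourtListener, Westlaw, etc.). The regex and the list of reporter abbreviations below are illustrative assumptions, not a complete citation grammar; a real tool would use a full reporter list and actually query a citation database for each hit.

```python
import re

# Illustrative subset of reporter abbreviations; a production tool would
# load a complete list from an authoritative citation database.
KNOWN_REPORTERS = {"U.S.", "S. Ct.", "F.2d", "F.3d", "F. Supp. 2d", "P.3d"}

# Matches citations of the form "410 U.S. 113": volume, reporter, page.
CITATION_RE = re.compile(
    r"\b(\d{1,4})\s+((?:[A-Z][A-Za-z0-9.]*\.?\s?)+?)\s+(\d{1,5})\b"
)

def extract_citations(text):
    """Return (volume, reporter, page) tuples with a recognized reporter.

    Each tuple is a candidate to verify against a citation database;
    anything that fails lookup there is a likely hallucination.
    """
    hits = []
    for volume, reporter, page in CITATION_RE.findall(text):
        reporter = reporter.strip()
        if reporter in KNOWN_REPORTERS:
            hits.append((int(volume), reporter, int(page)))
    return hits

brief = ("Plaintiff relies on Roe v. Wade, 410 U.S. 113 (1973), and a case "
         "reported at 999 F.3d 1 that should be verified against the docket.")
print(extract_citations(brief))  # [(410, 'U.S.', 113), (999, 'F.3d', 1)]
```

The point of the poster's asymmetry is visible here: generating a plausible-looking citation is easy, but each extracted tuple can be checked in roughly constant time per lookup, so the validation side scales well even against a flood of machine-generated filings.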
My daughter's boyfriend used ChatGPT to generate a first draft of a suit (not yet and perhaps never to be filed) challenging his homeowners' association's policies that allegedly discriminate against families with young children. Not terrible as a template. But it was first-draft help only, as he, my daughter, and I all provided suggestions for changes. ChatGPT used she/her for his pronouns (as of course a single parent is always a female in its training data?).
It isn't that family law litigants "might feel like they need to file for divorce." There are only two ways that a marriage ends: the death of a spouse or a court judgment that either legally dissolves the marriage or, rarely, declares the marriage was a nullity from the start. If a married person wants to end his/her marriage and the spouse has inconveniently neglected to die, that married person has no choice other than filing a lawsuit for a divorce (or occasionally for an annulment, but that still requires filing a lawsuit).
You heard right that the parties to a majority of dissolution of marriage cases, at least in California and I'd bet in every state, are both self-represented.