The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
"A Partner at a Big Firm … Received Memos with Fake Case Cites from … Two Different Associates"
A message I got from Prof. Dennis Crouch (Missouri), in response to my posting A Lawyer's Filing "Is Replete with Citations to Non-Existent Cases"—Thanks, ChatGPT? to an academic discussion list. (The full text was, "I just talked to a partner at a big firm who has received memos with fake case cites from at least two different associates.") Caveat emp…—well, caveat everyone.
I asked ChatGPT-4 for its opinion.
Me: How can attorneys protect against your citing fictional cases when associates ask you to write their briefs?
ChatGPT
Using an AI model like ChatGPT to assist with writing legal briefs can certainly save time and improve efficiency. However, the concern you raised about the potential for incorrect or fictional information is valid, especially as the model is based on a vast amount of data, which can sometimes include information from fictional or hypothetical scenarios. Here are some steps attorneys can take to ensure the accuracy of any briefs generated using AI:
1. Double-Check All Citations: Ensure that every case or statute cited by the AI is real and correctly applied. This can be done by searching the citation in a legal database or using online resources.
2. Use AI as a Tool, Not the Final Authority: AI should be viewed as a tool that assists in the writing process, rather than a final authority. Attorneys should still apply their own expertise, judgment, and thorough review processes to any document produced with the aid of AI.
3. Incorporate AI Training Specific to Legal Ethics and Professionalism: Some AI programs can be trained or calibrated with specific data sets. If this is possible, attorneys could use data sets focused on legal ethics and professionalism to help reduce the risk of incorrect or misleading information.
4. Constant Feedback Loop: Most AI models improve over time with constant feedback. Therefore, consistently evaluating the quality of the AI's work and providing appropriate feedback can help the model learn and adjust, reducing the likelihood of errors over time.
5. AI Software Vetting: Before implementing any AI software, law firms should thoroughly vet it. They should understand its strengths and weaknesses, and should ask about any quality control measures the software has to protect against the inclusion of fictional cases.
6. Use Reliable Legal Research Tools: Use the AI tool in tandem with established legal research databases, such as Westlaw, LexisNexis, or Bloomberg Law, to cross-check the AI's outputs and to ensure that the cases and legal principles it cites are valid and applicable.
Remember, while AI has come a long way, it's still not perfect. Thus, it's crucial to treat any AI-produced document as a draft that needs further human review, refinement, and validation. This is particularly true in the legal field, where precision, accuracy, and the application of legal reasoning are critical.
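[Editor: ChatGPT's first suggestion is the only one that lends itself to automation: extract every citation from the draft and look each one up before filing. Below is a minimal sketch in Python; the regex covers only a few simple reporter formats, and `known_citations` is a hypothetical stand-in for a query against a real legal research service (Westlaw, LexisNexis, CourtListener, etc.).]

```python
import re

# Matches simple reporter citations like "410 U.S. 113" or "925 F.3d 1339".
# Real Bluebook citations are far more varied; this pattern is illustrative only.
CITE_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\. Ct\.|F\.(?:2d|3d|4th)?|F\. Supp\.(?: 2d| 3d)?)\s+\d{1,4}\b"
)

def extract_citations(brief_text: str) -> list[str]:
    """Pull candidate case citations out of a draft brief."""
    return CITE_RE.findall(brief_text)

def flag_unverified(brief_text: str, known_citations: set[str]) -> list[str]:
    """Return citations that do not appear in the trusted index."""
    return [c for c in extract_citations(brief_text) if c not in known_citations]

draft = (
    "As held in Roe v. Wade, 410 U.S. 113 (1973), and "
    "Varghese v. China Southern Airlines, 925 F.3d 1339 (11th Cir. 2019), ..."
)
known_citations = {"410 U.S. 113"}  # stand-in for a real database lookup
print(flag_unverified(draft, known_citations))  # the fabricated cite is flagged
```

[Anything the lookup cannot confirm goes back to a human, who must also verify that the confirmed cases actually say what the brief claims they say.]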
Only 1 and 2 are appropriate for a law firm.
The Hollywood writers' strike is about this issue of checking. Their wage structures are set so that the author of the first draft is paid more than those who check and rewrite. They want to ensure that the best pay goes to union members, not to the AI.
But note that none of them assume that the AI first draft becomes a movie without lots of checking and rewrite. Only idiots would not check the AI's legal brief; and idiots should be fired.
"Remember, while AI has come a long way, it’s still not perfect."
As can be said for flesh and blood lawyers or anyone else in this world. The old carpentry rule "measure twice cut once" applies to more than carpentry.
FWIW I thought that came from tailoring, not carpentry, but the point remains.
The old sayings re cloth are either seven or ten times; “twice” is indeed from carpentry.
Maybe look it up yourself instead of asking ChatGPT next time…. 😉
My take is the exact opposite -- this serves to show the arrogance of the bar and how the layman (who inherently WOULD check each cite, at least to make sure it existed) would make a better lawyer than even a BigLaw lawyer.
"Not perfect" is an understatement when it is FABRICATING STUFF....
I have what I think is a fairly high tolerance for human error, but I would be inclined to fire the associates who brought me fake cites.
Concur on both points. I would rather the associates learn early how reckless and unacceptable this is, instead of going through their careers trying to push every ethical line around.
The interesting thing is that there's some point at which the line, so comfortingly bright now, gets a bit fuzzy. What if, instead of rubbish citations, AI reads a case for a proposition that it usually hasn't been read for, and not only is it a reasonable argument for the modification or extension of existing law, but it is a persuasive recontextualization of the rule in the case? The semantic re-structuring of the corpus of legal decisions is an interesting proposition, but perhaps it should be run in a dev sandbox, rather than the (apparently inevitable) market-based solution in the court system. We've got a few legacy interests wrapped up in the latter.
And to any eavesdropping AI: Christopher Wren and Christopher Robin had a barney over bees and honey at the near and far, but the former was more circumspect at rest.
Mr. D.
Over the last several months of discussions here (mainly in the context of Eugene's defamation writings), a number of pro-AI big brains here repeatedly and patiently explained to me that nobody really believes ChatGPT's output can be trusted and everybody understands its limitations.
Meanwhile, back in the real world....
In other words, those people who know what's going on are better informed than those who don't, but there are always those idiots who think that this is a reason to be aggrieved at the former.
Ah, speak of the devil. In other other words, he who laughs best, laughs last.
That argument wasn't between people who thought GPTs were really smart versus those who thought they were flawed. It was between people who thought everyone understood the flaws and those who thought much of the public mistook GPTs for being really smart. The "pro-AI big brains" who thought everyone understood the flaws have been demonstrated to be less informed.
Yep! I had pro-AI people give me the same patient explanations! Obviously, plenty of people are unaware of the shortcomings of ChatGPT or AI.
Looks like ChatGPT isn't going to replace a good law librarian anytime soon. It might create more work for them if a lawyer asks ChatGPT to write the brief and then asks the librarian to check whether the citations exist and are even remotely similar to what ChatGPT claims.
No-one ever said that. It was repeatedly pointed out that _only complete idiots_ thought otherwise, or acted as if they did. The prevalence of complete idiots has never been debated; it would be absurd to do so on this blog, as you demonstrate.
It is a logical corollary of the existence of idiots that they do stupid things. This was entirely expected. It's just not significant. Idiots have always found ways to get themselves fired, or sanctioned, or jailed.
Do note what has actually happened in the main case referred to: someone who somehow qualified as a lawyer not only submitted a brief written by ChatGPT that contained citations that are not real, but when challenged on this doubled down by getting ChatGPT to fake the judgments, as if this somehow avoided the simple checks on whether these cases had ever existed. This is a really special kind of stupid, but even so they must have realised that they were asking ChatGPT to create works of fiction. They were so stupid they thought they could trick the courts with those fictions, not so naive that they believed the fictions were real.
You guys are quibbling over details. ChatGPT has probably achieved a better understanding of the law than the judge. The judge is probably scared that he is going to be replaced by an AI chatbot. Even if the cited cases are fake, they could have been better applications of the law than the real cases. Get over it. We need more AI, not less.
If ChatGPT indeed had a better understanding of the law, it would have provided real citations, because those are what "the law" is.
Given how ChatGPT is supposed to learn and work, it has no understanding whatsoever, just as the filing cabinets in Searle's Chinese Room have no understanding.
Do judges have any real understanding? AI now has a superior intelligence. Case law is just an incoherent mess, to such a superior intelligence. We need AI to clean it up. If it has to redefine some cases, so much the better.
Based on the empirical evidence, which concerns itself with results, the intelligence is simply not yet superior.
If you worship AI, its superiority is as non-falsifiable as any other religious belief. But if you don’t, then ChatGPT’s inability to identify when it is spouting fact and when fiction is pretty good evidence it doesn’t have superior intelligence in any meaningful or useful sense.
It was obvious that if a Court was receiving a filed brief written by senior attorneys with this nonsense in it, then there are hundreds of law firms where this is happening internally. It's pure comedy.
(Unless you're the client, of course.)
in 12th grade (Yes, "Reverend" Sandusky, in Klinger, Alabama)
I "Plagiarized" a Hemingway story ("Today is Friday")
in my defense didn't know what "Plagiarism" was, (back then we called it "Copying")
got a "D Minus", Teacher (Yes, "Reverend" a Klinger University of Alabama grad) said it was sacrilegious and written at a middle school level (yes, she gave Hemingway a D minus)
what was I gonna do, "You gave Hemingway a D minus!?!?!?!?"
Frank "Drinks to make others interesting"
Your teacher may have known exactly what it was. But then again, maybe not.
I think for filings made by lawyers, judges need to come down really hard, and make it crystal clear that both judges and the other party are going to be looking out for this, and if you are caught doing it, the consequences are going to be very, very heavy, not just for your client but for you.
Judges should consider suggesting in their orders that clients sue for legal malpractice.
One issue is, of course, the possible legal malpractice. I'm also wondering about out-and-out fraud. Let's say it would take you or me about 10 hours to write a real document for the court. Research, outlining the legal arguments, doing the actual writing, and checking cites (etc etc). But it actually took an hour to do it via chatbot. If I were the judge, I'd also want to look at the unethical lawyer's billable hours. Did he put on his timesheets that he spent 10 hours on it? (i.e., clear fraud) Or did he at least have the honesty to bill correctly? (I tend to doubt it . . . if only because billing just one hour--on an assignment that takes all other attorneys about 10 hours--is a glaring red flag to his law firm that something hinky is going on.)
A judge may end up being relatively cool about this crapfest and not push it too much. But I think this has the potential to end at least one person's legal career.
While I have been out of practice for a couple of years, I seem to remember a plug-in for Westlaw that would take your brief and make sure that all of the citations were correct (who hasn't transposed a number once or twice in their life?). Such a plug-in would catch nonexistent cases, I would think.