The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
From Judge Michael J. Newman (S.D. Ohio) on Use of AI to Prepare Filings
From today's order in Whaley v. Experian Info. Solutions, Inc., which dismisses the case on the merits but also adds:
Plaintiff admits that he used Artificial Intelligence ("AI") to prepare case filings. [This yielded hallucinated citations to nonexistent cases. -EV] The Court reminds all parties that they are not allowed to use AI—for any purpose—to prepare any filings in the instant case or any case before the undersigned. See Judge Newman's Civil Standing Order at VI. Both parties, and their respective counsel, have an obligation to immediately inform the Court if they discover that a party has used AI to prepare any filing. The penalty for violating this provision includes, inter alia, striking the pleading from the record, the imposition of economic sanctions or contempt, and dismissal of the lawsuit.
Not all judges take this view, though I'm sure all judges would insist (at least) that if AI software is used to prepare filings, the output be carefully checked by the lawyer or litigant.
Note also that this came in the context of AI being used to actually write a brief, rather than being a tool used for finding material. The Judge's cited Standing Order makes clear that AI search engines may be used:
No attorney for a party, or a pro se party, may use Artificial Intelligence ("AI") in the preparation of any filing submitted to the Court. Parties and their counsel who violate this AI ban may face sanctions including, inter alia, striking the pleading from the record, the imposition of economic sanctions or contempt, and dismissal of the lawsuit. The Court does not intend this AI ban to apply to information gathered from legal search engines, such as Westlaw or LexisNexis, or Internet search engines, such as Google or Bing. All parties and their counsel have a duty to immediately inform the Court if they discover the use of AI in any document filed in their case.
ChatGPT is also dangerous for science. It makes up research studies that don't exist. We had a bad enough problem with lousy quality research, particularly during COVID, and now we have a problem with nonexistent research used to promote an agenda.
Are judges allowed to make rules like that? I know judges tend to have latitude in how business is conducted in their courts, but I can't see that extending to prohibiting what a party deems the best way to research and craft an argument. Who cares if AI was used, so long as the result was properly checked and edited?
"Are judges allowed to make rules like that? "
This actually, in my view, calls for his removal from the bench. First of all, the order is overbroad--Grammarly uses AI. Second, as you note, so long as the filing otherwise meets the standards of a proper filing, who cares how it came to be?
Yet another reason to loathe the judiciary in this country.
"Grammarly uses AI" -- perhaps the increased use of such tools will improve legal writing. Which is pretty abominable, for the most part.
What many lawyers need to learn is that "less is more" is not just for architecture, but usually applies to legal briefs as well.
Agreed. Judges have no business telling citizens how to do their legal research.
Banning "use" of AI is overbroad. It could be a useful tool, but of course it must be checked thoroughly.
I generally use Westlaw, but on rare occasions when stumped I try Google. Not that it's reliable, but it does sometimes point me to good authority. Which of course I run to ground and check.
Westlaw uses AI, lol.
Wow, how clever of you.
Westlaw may, but that doesn't mean I do. I use basic word searches, and occasionally the headnote codes. None of which are based on AI.
They do have a suggestion function which relies on AI, but I tried it and found it pretty useless.
Not to mention that I always read the cases, and decide whether they are relevant. And then quote them myself in a brief, if I think they are helpful.
So your snark is pretty useless. Maybe it was generated by AI. 😉
I wasn't trying to be snarky--just pointing out that using Westlaw even might run afoul of the judge's order.
And of course Google and Bing searches use AI. Anyone doing searches is probably using AI.
Blanket AI prohibitions are flatly stupid. Existing doctrines and ethical rules very easily accommodate AI. Lawyers are responsible for what they file. It doesn't matter whether a paralegal, associate, partner, AI, or a cat prepares it. If a lawyer files Rule 11 trash, then sanction the lawyer.
Amen. There's too much Luddite fear of AI, and I worry more about restrictions in academia, where students who learn to manage AI virtual assistants will be more productive (and marketable) than those shielded from the temptations of abuse.
It's disturbing how many lawyers will rely on a case in a brief without even taking a momentary look at it. Whether you get a cite from AI, some website, your Uncle Phil, or anywhere else -- spend at least 60 seconds confirming that the case name and citation are correct. Better still, read the case, or at least the relevant part of it.