5th Circuit Seeks Comment on Proposed AI Rule
Lawyers will have to certify that they did not use AI, or that any work produced by AI has been verified.
The U.S. Court of Appeals for the Fifth Circuit is soliciting comments on a proposed change. Rule 32.3 would restrict the use of generative AI (new language shown in red in the original):
32.3. Certificate of Compliance. See Form 6 in the Appendix of Forms to the Fed. R. App. P. Additionally, counsel and unrepresented filers must further certify that no generative artificial intelligence program was used in drafting the document presented for filing, or to the extent such a program was used, all generated text, including all citations and legal analysis, has been reviewed for accuracy and approved by a human. A material misrepresentation in the certificate of compliance may result in striking the document and sanctions against the person signing the document.
The new certificate of compliance would require the lawyer to check one of two boxes:
3. This document complies with the AI usage reporting requirement of 5th Cir. R. 32.3 because:
- no generative artificial intelligence program was used in the drafting of this document, or
- a generative artificial intelligence program was used in the drafting of this document and all generated text, including all citations and legal analysis, has been reviewed for accuracy and approved by a human.
I think this proposal strikes a good balance. Lawyers are not barred from using generative AI, but they have to attest that they used the technology. And you can be certain that briefs with the AI box checked will be reviewed more carefully. Indeed, I would be eager to see an empirical study of the briefs that check the second box. Clients may also want to see this information; they may have thoughts about lawyers who bill their time to use generative AI.
Oooh, that comma!
Agreed, but forget the comma, for the time will come when lawyers will have to certify that they DID use AI to perfect their briefs!
Why is this necessary but the use of Google is not? A bad source is a bad source no matter where it comes from, and a lawyer who doesn't check sources deserves to get smacked around.
This attestation is a pointless, petty bureaucratic answer to the wrong problem. Bad lawyering is already sanctionable. Courts are in this mess only because they have refused to enforce the rules already available to them.
"Why is this necessary but the use of google is not?"
It's the same reason they don't put seatbelts on tricycles, but they do on cars. Google does not write briefs, nor does it generate fake citations.
Google has been around a lot longer than AI, and yet no one ever heard of fake citations generated by Google. Some of the examples of AI output, by contrast, are blatant to the point of satire. Wasn't there a post recently where a Table of Authorities was checked, and 2/3 of the AI-generated cases were fake?
Again: the idea is to head off the problem before it arises, not punish those who fall into the trap.
I don't see a problem billing for time spent using AI: (1) theoretically it should reduce the billable hours spent in preparing legal documents, and (2) the output will only be as good as the skill of the lawyer in querying the AI bot. Lawyers who produce shoddy work because they are inept in using AI should soon find clients looking elsewhere for legal representation.
"theoretically it should reduce the billable hours spent in preparing legal documents"
Except that with the high rate of errors, I wonder whether, given the checking necessary to make a good work product, it results in any savings of time.
That's why I don't use the auto Table of Authorities generators that are out there. Every time I've tried, I've spent more time double-checking it and correcting its answers than I would have spent writing them out myself.
As a lawyer I'm embarrassed that there is even discussion about the necessity for such a rule.
I wonder if other professions are so irresponsible as to use AI in writing their work.
Yup.
Sports Illustrated Published Articles by Fake, AI-Generated Writers
https://futurism.com/sports-illustrated-ai-generated-writers
The problem is more widespread than one might think. Employers are requiring employees not to use AI in creating work product as part of Code of Conduct policies.
Sports Illustrated has become a zombie publication, resembling Newsweek. We should probably expect the losers and misfits who write for Newsweek these days to start showing up at Sports Illustrated.
If you're an old-timer like me, you remember back when Sports Illustrated started swimsuit editions. Hell, well before I could (properly) appreciate the issue itself, I developed a fondness for the letters in following editions by enraged people canceling their subscriptions because women in bikinis had nothing to do with sports.
A valid enough point as far as it goes. Of course, these days the swimsuit issue pretty much is the point. I guess it helps if you take an expansive view of the Athletic Endeavor.
Some years ago SI asked subscribers' wives if they let their husbands look at that issue. One said, "Yes, but only after I tape my face onto the faces of all the models."
This isn't going to work for very long, because generative AI is being incorporated into productivity tools. Pretty soon, if you're using Google Docs (or Microsoft Word or whatever else), you might be using generative AI without realizing it.
It'd be better to simply attest that the final product was reviewed for accuracy by a human... whatever tools were used in its production.
What "productivity tools"?
I have noticed that WORD sometimes will make suggestions for a word or phrase while I am typing. Usually it's the same words as I have already used, but sometimes it's common legal phrases.
That's a far cry from having AI generate a whole brief, though.
But would it count as being "used in the drafting?"
Why not? WORD suggests a phrase I put into a brief, and if I accept, then yes, it was used in the drafting.
I don't think Word counts as generative AI.
It does count though as another way to get in the way of using our brains.
Today I redrafted yet another Word brief from another attorney where I had to disable 14 or so "features" before I could write what I wanted the way I f**king wanted it.
Been there, had to do the same -- and I was writing academic stuff.
Open Office isn't as bad.
Randal is correct.
I've been complaining about other judges' AI orders because they don't define AI. But this order does a better job.
But Randal is correct: almost every tool we use, such as word processors, will soon claim to have generative AI embedded. Proving that true or false, or proving that you did or did not make use of the AI, will be almost impossible.
Consider this. Phone systems have always used filters to get rid of noise and make your voice sound clearer. Tomorrow's microphones may all use AI filters, or claim in their marketing that they have AI. So how can you prove that your voice on the phone was not AI produced or AI enhanced? You can't.
The operative word is ubiquitous.
A "Human"?
How about an actual attorney?
"Your honor, my seven-year old says it's good to go".
I always have my dog review my briefs before I file them. He often says, "it's a little ruff."
So your dog... keeps you out of the Doghouse?
Let's just say he keeps my legal arguments from barking up the wrong tree.
So your dog is smarter than a lawyer relying on an AI generated pleading?
No wonder I like most dogs better than I like most people.
That was the exact same thought I had. If the brief is filed by an attorney and AI citations are used, an attorney should check the citations. In fact, if you're looking to save money for the client, this is the exact kind of work that could be farmed out to new associates or contract attorneys.
The only reason to have "human" there is to allow AI to be used in pro se filings.
I'd be more interested to see whether checking the AI-used box, by getting judges to pay more attention to the brief, could be a benefit that incentivizes the use of AI tools.
My inclination is that if the case law is on my side but the result seems unfair or unintuitive, I'm checking that box to make sure the judge, or more likely his/her clerk, really reads and digests the case law.
Not sure lying to the Court is a smart way to represent your client. Not to mention it violates an ethical rule or two.
Who says I lied? All I have to do is use AI for a small thing and I can say I used it. AFAICT it doesn't require me to point out the places I used it. That's why I pointed out it may "incentivize" the use of AI.
I can see requiring a "self filer" to verify that every cited case actually exists, but asking a pro se litigant who may not understand legal analysis in the first place to certify that the AI's analysis is correct strikes me as an unreasonable burden.
How the hell do you evaluate that which you don't even know yourself?
Don't use AI?
So close to getting it!
I was actually arguing the converse of that, but I'll extend it to licensed lawyers as well: assuming that there is a "right" and hence a "wrong" legal analysis, how do you penalize someone for not recognizing the "wrong" one if he arguably doesn't know it in the first place?
And isn't this more a matter of judgment than a binary "right" and "wrong" issue in the first place?
And you have boggled my mind once again.
Simple: If a MD is held to a higher standard of knowledge in an auto accident, why shouldn't a JD be held to a higher standard in a court filing?
Every time you accept text suggested by autocomplete you have used a "generative artificial intelligence" to help draft your document.
"Additionally, counsel and unrepresented filers must further certify that no generative artificial intelligence program was used in drafting the document presented for filing, or to the extent such a program was used, all generated text, including all citations and legal analysis, has been reviewed for accuracy and approved by a human."
I don't see the point of the first half of the sentence. The second half alone would be just fine. Counsel shouldn't have to make that distinction, any more than they should have to signify whether an associate wrote any portion of the brief, or a paralegal reviewed it or not, or they used a particular word processing program, or they used Westlaw vs. Nexis vs. Fastlaw vs. something else.
Actually....
If a student writes an academic paper, the student had damn well better point out what was written by someone else and cite it. And even then, the expectation is that the student actually wrote it.
I'd argue that the same thing ought to apply to a brief filed by a lawyer -- if he is signing it, it should attest that HE wrote it, not a paralegal or an associate (unless the latter also signs it).
Big Law would implode, but would that be a bad thing?
Dr. Ed is Dunning-Krugering again. The purpose of an academic paper is for the student to demonstrate his personal knowledge. The purpose of a legal brief is to advance the client's interests. Who wrote the brief is utterly unimportant. The purpose of having the attorney sign a brief is not for bragging rights; it's for holding the person who files it accountable if there's something wrong with it.
Simplify it. Require this for all submissions:
"all text, including all citations and legal analysis, has been reviewed for accuracy and approved by a human."
Is there not an equivalent to Rule 11 in the FRAP? Isn't the lawyer's signature a representation that, among other things, the legal arguments were "warranted by existing law or by a nonfrivolous argument for extending, modifying, or reversing existing law or for establishing new law"? Would this not also cover filing briefs that either cite cases that don't exist or wholly miscite existing cases, which is what I understand to be the issue with AI?
Or is the 5th just gifting us with another reason for the clerk's office to bounce briefs for form violations?
As recent episodes have illustrated, the existing rules provide ample authority to impose disciplinary sanctions on lawyers who submit AI-generated citations to fictitious cases. The motivation behind this order, I take it, is to try to avoid the submission of those pleadings in the first place, rather than to facilitate after-the-fact punishment.