The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
N.D. Texas Bankruptcy Court (Not Just a Single Judge) Issues Order Related to Use of AI-Generated Filings
From In re: Pleadings Using Generative Artificial Intelligence, Gen. Order No. 2023-03, issued Wednesday by Chief Judge Stacey G. C. Jernigan:
If any portion of a pleading or other paper filed on the Court's docket has been drafted utilizing generative artificial intelligence, including but not limited to ChatGPT, Harvey.AI, or Google Bard, the Court requires that all attorneys and pro se litigants filing such pleadings or other papers verify that any language that was generated was checked for accuracy, using print reporters, traditional legal databases, or other reliable means. Artificial intelligence systems hold no allegiance to any client, the rule of law, or the laws and Constitution of the United States and are likewise not factually or legally trustworthy sources without human verification. Failure to heed these instructions may subject attorneys or pro se litigants to sanctions pursuant to Federal Rule of Bankruptcy Procedure 9011.
Thanks to Jake Karr for the pointer.
Neither print reporters nor traditional legal databases hold "allegiance to any client, the rule of law, or the laws and Constitution of the United States," nor are they "factually or legally trustworthy sources without human verification."
It's odd language.
Hmmm. Without getting into a debate on the quality of reporting, it's odd language indeed to claim print reporters are either not human or not able to perform an act of verification.
And the judge's point is that data in references like traditional legal databases do, in fact, undergo human verification, while the large language models of the AI chatbots cited do not.
They can be better than humans if trained on the proper dataset.
The output from LLMs like ChatGPT is so eerily human-like that people don't comprehend what's really happening and why the results are the way they are.
That leads to poor decisions, like using generated text in legal briefs.
This will simply lead to even better AI-drafted legal text in short order, especially once services arrive that draft with one or two AIs and then fact-check and legal-check the result with others. It will be very interesting when consumers of AI content can query multiple AIs simultaneously that all cross-check each other.
With regard to the legal profession, it certainly seems like it will take away, or at least minimize, the drudgery assigned to clerks, and in higher-risk fields, having an assortment of AIs cross-check even human findings will pay benefits.
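The cross-checking workflow imagined in the last two comments can be sketched as a simple consensus check. What follows is a minimal, hypothetical Python illustration: the model names and query callables are stand-ins, not any real vendor API, and a real service would need far more than string-matching of answers.

```python
from collections import Counter
from typing import Callable, Dict

# Hypothetical stand-in for a real LLM API call: takes a question,
# returns the model's answer as a string.
ModelFn = Callable[[str], str]

def cross_check(question: str, models: Dict[str, ModelFn], quorum: float = 0.75) -> str:
    """Query several models and accept an answer only if a quorum agrees.

    Anything short of a quorum gets routed to a human reviewer, which is
    the verification step the court's order insists on in any case.
    """
    answers = {name: fn(question) for name, fn in models.items()}
    tally = Counter(answers.values())
    best, count = tally.most_common(1)[0]
    if count / len(models) >= quorum:
        return f"consensus: {best}"
    return f"no consensus ({dict(tally)}); flag for human verification"

# Toy usage: three fake models disagree about whether a (made-up)
# citation exists, so the question is flagged rather than answered.
if __name__ == "__main__":
    models = {
        "model_a": lambda q: "citation exists",
        "model_b": lambda q: "citation exists",
        "model_c": lambda q: "citation not found",
    }
    print(cross_check("Verify: Smith v. Jones, 123 F.3d 456", models))
```

Even unanimous agreement among models would not satisfy the order, which requires checking against print reporters, traditional legal databases, or other reliable means; the sketch only shows where the human check would slot in.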