The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
More Than a Dozen Judges "Have Released Official Guidance on Using AI Tools in Litigation"
Jessiah Hulle (Gentry Locke) provides the data (mostly federal, but also noting one state judge's order and a Canadian court's order), and summarizes the different approaches. A brief excerpt:
Federal courts nationwide are weighing in on how artificial intelligence can be used in court filings, and they're exploring different approaches to address issues such as disclosure, accuracy, and ethical duties.
A comprehensive review of 196 federal court websites reveals that judges continue to release AI orders at a steady pace…. These new orders also reveal a notable trend: Most courts personalize AI mandates rather than adopt guidelines verbatim from colleagues.
"Most courts personalize AI mandates rather than adopt guidelines verbatim from colleagues."
This is one of my absolute legal pet peeves, and AI is just the latest example. The accumulation of court-specific, or even worse judge-specific, rules and standing orders is a real nuisance. I can't even imagine how much client money is wasted each year on researching local rules/orders because each court is a bespoke unicorn and the way you practice in front of it must be unique.
It's even worse in bankruptcy: ideally, you'd like to get all your first-day paperwork in order and file it right after the petition, but you don't actually know which judge you'll get until you file. If it's a jurisdiction with judge-specific rules, then you literally can't get squared away in advance. It's complete madness.
"Researching"? They're posted right on the court's website, and they don't change on a week-to-week basis or anything, so any experienced practitioner learns them quickly.
And in any case, it's an infinitesimal fraction of the client money spent researching substantive law, which of course varies from jurisdiction to jurisdiction. Or researching the judge's individual tendencies, which are often much more important to know about than any formal individual rules.
Judges shouldn't care how a thing is prepared, just read it and rule.
If some idiot lawyer gets it wrong, sanction the lawyer.
It doesn't matter if he let AI, or a clerk, or a random bum on a park bench write it, it's on him.
The idea is to head these problems off before they arise, not to punish lawyers afterwards. You know how much time and money was wasted on dealing with the ChatGPT lawyer situation earlier this year?
Nobody knew what they were doing. The rule is settling down: "Like monkeys banging on a typewriter, you can use ChatGPT, but you must vet its writings yourself."
This is the correct answer.
We have seen this sort of panic play out before in a bunch of different fields as tech advances...
Math professors around the world threw hissy fits as slide rules gave way to calculators, which gave way to computers, which themselves have become much smaller and more advanced.
The right answer isn't to make more rules and fight it; rather, it is to sanction those who put their name on bad filings regardless of the source. Whether it was written personally by the attorney, by interns, students, a personal law library, a virtual law library, rote memory, AI, etc. should not matter to a judge.
Even if it somehow does, it is a losing battle.
Right now, most AI is "actively" used (that is, you go to someplace like ChatGPT and enter text to get a result), and software that uses it is quick to market that it does. But as AI becomes more common and loses much of its marketing luster, and as it becomes more embedded into things, its passive nature may make disclosures much more difficult.
Take word processors as an example: they started with simple spell check, then added grammar checking, and then began offering syntax corrections. Now, we are starting to see companies adding AI to help with things like consistency of voice, changing the way entire documents are written to appeal to different audiences, making content suggestions, etc. I would guess it is only a matter of time before we see them writing and correcting legalese. And it may not always be obvious whether it is simple coding, advanced machine learning, or AI.
ChatGPT, give some arguments against using AI in court pleadings.
Using AI in court pleadings, like any technology, has its share of drawbacks and concerns. Here are some arguments against using AI in this context:
Lack of Context and Human Judgment: AI systems lack the ability to fully understand the nuances, emotions, and context of legal cases. They may generate pleadings based solely on data and patterns, overlooking unique aspects that a human attorney might consider.
Ethical and Privacy Concerns: AI systems often rely on vast amounts of data, raising privacy concerns, especially when dealing with sensitive legal information. Protecting the confidentiality and privacy of court-related documents can be challenging.
Limited Legal Knowledge: AI systems can provide legal information and precedent but may not fully grasp the subtleties of evolving legal principles, changing interpretations of the law, or jurisdiction-specific nuances, which human lawyers can navigate more effectively.
Bias and Fairness Issues: AI systems can inherit biases present in the training data, potentially leading to biased recommendations or legal strategies. This could disproportionately impact marginalized communities and create unjust outcomes.
Reduced Human Accountability: Relying on AI for legal work may diminish human accountability. If errors or ethical violations occur, it may be challenging to attribute responsibility, leading to potential accountability gaps.
Job Displacement: Widespread adoption of AI in legal processes could lead to a reduced demand for human legal professionals, potentially resulting in job displacement within the legal industry.
Vulnerability to Hacking and Manipulation: AI systems used in court proceedings may be vulnerable to hacking or manipulation by malicious actors, compromising the integrity of the legal process.
Loss of Legal Skills and Expertise: Over-reliance on AI could lead to a degradation of legal research, writing, and critical thinking skills among human lawyers, diminishing the quality of legal practice over time.
Complexity and Cost: Implementing AI systems in court proceedings can be expensive, requiring substantial financial resources for development, maintenance, and training. Smaller legal organizations and individuals may struggle to access or afford such technology.
Resistance to Change: The legal profession has a tradition of conservatism, and many legal professionals may be resistant to adopting AI in court pleadings, potentially hindering its widespread acceptance and effectiveness.
Challenges with Interpretation: AI-generated documents may be subject to interpretation and challenge by opposing parties and judges, leading to additional legal disputes over the accuracy and validity of AI-generated content.
In summary, while AI has the potential to improve efficiency and access to legal services, there are legitimate concerns and limitations associated with its use in court pleadings, ranging from ethical issues to potential biases and a lack of human judgment and expertise. It's crucial to strike a balance between the advantages and disadvantages when considering the adoption of AI in the legal field.
That is a great illustration of why one should not use ChatGPT to write a closing argument. None of those particular points is of any real concern, and the limitations will all either be overcome by better inquiry strings or resolve themselves as AI becomes smarter and (assuming it really is AI) gains better understanding.
If only "more than a dozen judges" have such guidelines, I feel like the sample size is awfully small for the observation, "Most courts personalize AI mandates rather than adopt guidelines verbatim from colleagues," to have much value.