The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Don't Give Me That ChatGPT-4 Nonsense, Judge Says
From Judge Paul Engelmayer's order Thursday in J.G. v. N.Y. City Dep't of Ed. (S.D.N.Y.), deciding on attorney fees to be awarded under the Individuals with Disabilities Education Act:
The Cuddy Law Firm also states that its requested hourly rates are supported by feedback it received from the artificial intelligence tool "ChatGPT-4."
In fairness, the Cuddy Law Firm does not predominantly rely on ChatGPT-4 in advocating for these billing rates. It instead presents ChatGPT-4 as a "cross-check" supporting the problematic sources above. As such, the Court need not dwell at length on this point.
It suffices to say that the Cuddy Law Firm's invocation of ChatGPT as support for its aggressive fee bid is utterly and unusually unpersuasive. As the firm should have appreciated, treating ChatGPT's conclusions as a useful gauge of the reasonable billing rate for the work of a lawyer with a particular background carrying out a bespoke assignment for a client in a niche practice area was misbegotten at the jump.
In two recent cases, courts in the Second Circuit have reproved counsel for relying on ChatGPT, where ChatGPT proved unable to distinguish between real and fictitious case citations. In Mata v. Avianca, Inc., Judge Castel sanctioned lawyers who "abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT." And in Park v. Kim, the Second Circuit referred an attorney to the Circuit's Grievance Panel for further investigation after finding that her brief relied on "non-existent authority" generated by ChatGPT.
In claiming here that ChatGPT supports the fee award it urges, the Cuddy Law Firm does not identify the inputs on which ChatGPT relied. It does not reveal whether any of these were similarly imaginary. It does not reveal whether ChatGPT anywhere considered a very real and relevant data point: the uniform bloc of precedent, canvassed below, in which courts in this District and Circuit have rejected as excessive the billing rates the Cuddy Law Firm urges for its timekeepers.
The Court therefore rejects out of hand ChatGPT's conclusions as to the appropriate billing rates here. Barring a paradigm shift in the reliability of this tool, the Cuddy Law Firm is well advised to excise references to ChatGPT from future fee applications.
But what did the judge REALLY think about ChatGPT?
This is just brainless resistance to new technology.
The judge seems to be saying, "In Winter/Spring of 2024, there's no way one should rely on Chat in legal documents submitted to courts. If the technology improves an unexpectedly large amount, and becomes actually reliable, then it may become a legitimate source."
Can you explain why this, in your mind, equates to "brainless resistance"? Because it sounds like you are arguing that, as it stands right now, it's actually okay and reasonable to rely on Chat in this way. I think you're way out on a limb, alone, if that's really your position.
Don't feed the troll.
Just from what I recall of the overall texture of Roger’s past posts, I’m going to go way out on a limb and say you missed the implied /s.
The brief did not rely on Chat for billing rates; the Chat was just a supporting argument. The judge says, "Barring a paradigm shift in the reliability of this tool". Ha, ha. I suggest that the judge stop using the phrase "paradigm shift" in his opinions.
The judge was right. The essential thing is the prompts that the AI was given. Sufficiently slick prompts can elicit almost anything. So the fact that the lawyers didn't supply the prompts means that they are hiding things.
By the way EV, my position on AI libel and defamation is the same. The AI only responds to prompts. If the user tricks the AI into saying something libelous, it is the user's speech, not the AI's. The user should be the defendant in a libel suit.
Even assuming ChatGPT was effective, why would it justify a higher fee? They're admitting they used a free program to check their work instead of having an attorney or other employee do it. If I rely on a proof-reading software instead of having a coworker proof-read for me, the end result may be the same. But we've spent less time as a firm on it. You'd have to do a lot of extra justifying to say that entitles you to MORE money.
They're not saying they should get a higher rate because they used ChatGPT. They're saying something like, they asked ChatGPT what the rate should be and it agreed with them.
The judge rightly blasts them for this. Even without sneaky prompts LLMs are manipulable; I've had these things quote stuff I wrote elsewhere on the web back to me.