The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Kenosha County (Wisconsin) DA Sanctioned for AI Hallucinations
Wisconsin Public Radio (Sarah Lehr) reported Monday:
A judge has sanctioned Kenosha County District Attorney Xavier Solis over his use of artificial intelligence in court filings.
Circuit Court Judge David Hughes called out Solis on Friday for using AI in a response to a defense attorney's request to have a burglary case dismissed [without disclosing this, as Kenosha County court policy required].
Hughes also blasted Solis for using "hallucinated and false citations," online court records show….
The judge dismissed the case, but defense lawyer Michael Cicchini was quoted as saying:
The judge actually granted my motion to dismiss on substantive grounds. In other words, the judge found that there was not probable cause that the defendant committed a crime. His ruling was based on the evidence the state presented at the preliminary hearing that was held about two years ago, under the previous district attorney administration.
Several years ago, Solis had been involved as an attorney in a dispute over the return of the bail funds in the Kyle Rittenhouse case.
[deleted]
EV, once again you have failed. I have asked more than once for cites detailing the name(s) of the AI LLM being used, and you continue to FAIL and FAIL and FAIL to provide cites. Just as not all law schools are equal and not all lawyers are equal, not all AI LLMs are equal. Several LLMs have passed simulated bar exams starting in 2023. I have always respected your positions in the past and have to ask why you seem so far behind the curve in keeping up with the shockingly fast progress AI is making. As important as cites for the LLMs being used are, it is equally important to know the prompts being used. I have recently gotten interested in AI, and my guiding principle is one I learned on the first day of the first law school class I took. The professor said something along the lines of 'I don't expect you to turn in anything until you have edited it at least ten times.' I tend to come up with a rough prompt of what I want and then feed it to the AI with instructions to 'help me write a prompt to do this.' While I don't always go through ten iterations, it is never a one-step process. So once again I am asking for cites about the name of the LLM and the prompt used to create the submission to the judge.
Wow, that's an angry comment about irrelevant goalposts.
Does the specific AI model affect whether the DA did due diligence and editing when drafting the filings, or adequately disclosed the use of AI in drafting them? The quality of the prompts might affect the pre-editing frequency of errors, but the attorney is still responsible for the accuracy of the final submission.
Did you miss the part of my "angry comment about irrelevant goalposts" where I mentioned an iterative process of editing ten times? I have no problem with the judge slapping the attorney to sleep and then slapping him for going to sleep. The thing is, if you are using a free online AI, things like the ability to expand or limit hallucinated responses are out of your control. If you are running a local LLM, or some of the expensive paid ones, you have control over how much the LLM hallucinates, as well as a host of other things.
As my OP noted, this is not the first time I have asked which LLM and what prompt. As an aside, I noticed lots of posts in today's open thread dealing with AI, and, sad to say, the level of ignorance is shockingly bad.
1) Have you considered that it's entirely possible that EV doesn't have the information you're after? Or that perhaps the only person who does is the DA?
2) Hallucination is a structural effect of transformer inference, and I suspect very strongly that anything you can run on your 3060 is sparse enough that no other adjustment is going to make it more reliable than a competent frontier or commercial model.
Kenosha County court policy requires disclosure of the use of AI? Crazy policy, as probably every single brief uses AI to some extent. E.g., every Google search uses AI.