When AI poses an existential risk to your law license
Episode 459 of the Cyberlaw Podcast
This episode of the Cyberlaw Podcast features the second half of my interview with Paul Stephan, author of The World Crisis and International Law. But it begins the way many recent episodes have begun, with the latest AI news. And, since the story is squarely in scope for a cyberlaw podcast, we devote some time to the so-appalling-you-have-to-laugh-to-keep-from-crying story of the lawyer who relied on ChatGPT to write his brief. As Eugene Volokh noted in his post on the story, the AI returned exactly the case law the lawyer wanted – because it made up the cases, the citations, and even the quotes. The lawyer said he had no idea that AI would do such a thing.
I cast a skeptical eye on that excuse, since when challenged by the court to produce the cases he relied on, the lawyer turned not to Lexis-Nexis or Westlaw but to ChatGPT, which this time made up eight cases on point. And when the lawyer asked ChatGPT, "Are the other cases you provided fake," the model denied it. Well, all right then. Who among us has not asked Westlaw, "Are the cases you provided fake?" and accepted the answer without checking? Somehow, I can't help suspecting that the lawyer's claim to be an innocent victim of ChatGPT is going to get a closer look before this story ends. So if you're wondering whether AI poses existential risk, the answer for at least one law license is almost certainly "yes."
But the bigger stories of the week were the cries from Google and Microsoft leadership for government regulation of their new AI tools. Microsoft's President, Brad Smith, has, as usual, written a thoughtful policy paper on what AI regulation might look like. Jeffery Atik and Richard Stiennon point out that, as usual, Brad Smith is advocating for a process that Microsoft could master pretty easily. Google's Sundar Pichai also joins the "regulate me" party, but a bit half-heartedly. I argue that the best measure of Silicon Valley's confidence in the accuracy of AI is easy to find: Just ask when Google and Apple will let their AI models identify photos of gorillas. Because if there's anything close to an extinction event for those companies, it would be rolling out an AI that once again fails to differentiate between people and apes.
Moving from policy to tech, Richard and I talk about Google's integration of AI into search; I see some glimmer of explainability and accuracy in Google's willingness to provide citations (real ones, I presume) for its answers. And on the same topic, the National Academy of Sciences has posted research suggesting that explainability might not be quite as impossible as researchers once thought.
Jeffery takes us through the latest chapters in the U.S.-China decoupling story. China has retaliated, surprisingly weakly, for U.S. moves to cut off high-end chip sales to China. It has banned sales of U.S.-based Micron memory chips to critical infrastructure companies. In the long run, the chip wars may be the disaster that Nvidia's CEO foresees. Certainly, Jeffery and I agree, Nvidia has much to fear from a Chinese effort to build a national champion in AI chipmaking. Meanwhile, the Biden administration is building a new model for international agreements in an age of decoupling and industrial policy. Whether the effort to build a China-free IT supply chain will succeed is an open question, but we agree that it marks an end to the old free-trade agreements rejected by both former President Trump and President Biden.
China, meanwhile, is overplaying its hand in Africa. Richard notes reports that Chinese hackers attacked the Kenyan government when Kenya looked like it wouldn't be able to repay China's infrastructure loans. As Richard points out, lending money to a friend rarely works out. You are likely to lose both the money and the friend, even if you don't hack him.
Finally, Richard and Jeffery both opine on Ireland's imposing – under protest – a $1.3bn fine on Facebook for sending data to the United States despite the Court of Justice of the European Union's (CJEU) two Schrems decisions. We agree that the order simply sets a deadline for the U.S. and the EU to close their third deal to satisfy the CJEU that U.S. law is "adequate" to protect the rights of Europeans. Speaking of which, anyone who's enjoyed my rants about the EU will want to tune in for a June 15 Teleforum in which Max Schrems and I will debate the latest privacy framework. If we can, we'll release it as a bonus episode of this podcast, but listening live should be even more fun!
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Well, let's face it: there is a certain class of lawyers who are specially attuned to hear the alluring call of ChatGPT, like lemmings hearing the call of the sea.
“… when challenged by the court to produce the cases he relied on, the lawyer turned not to Lexis-Nexis or Westlaw but to ChatGPT, which this time made up eight cases on point. And when the lawyer asked ChatGPT, “Are the other cases you provided fake,” the model denied it. Well, all right then. Who among us has not asked Westlaw, “Are the cases you provided fake?” and accepted the answer without checking?”
That is with 99.999% certainty not what happened.
In the comment thread for the linked Volokh post, there were also a remarkable number of lawyers deriding Schwartz for the attached segment of a session he had with ChatGPT, and here Stewart Baker does it on the same dumb grounds.
Note that the cases mentioned are NOT, contra Baker, any part of LoDuca's or Schwartz's response to detection of Schwartz's error, but were submitted (RE#29, iirc) BEFORE the judge detected that they were false.
The session is undated, but my it's-a-lock bet is that it was generated after Schwartz already knew the cites were false, and that he was simply supplying a sample of ChatGPT lying generated AFTER the judge's rebuke of LoDuca, not a transcript of a question he asked ChatGPT before including the fake cites in his filing. Sheesh. What is the matter with you people?
You've misunderstood the sequence of events. Schwartz was in trouble long before filing #29. Schwartz cited the non-existent cases in filing #21 (Mar. 1). Defendant notified the court in filing #24 (Mar. 15) that it could not locate those cases and that the citations went to unrelated cases. In filing #25 (Apr. 11), the judge ordered Schwartz to provide copies of the opinions, and he was clearly suspicious by then, because he said the plaintiff would otherwise face dismissal. In filing #27, the court directed Schwartz also to attach a copy of Zicherman, a case supposedly cited by one of his other cases.
Schwartz attached the opinions that ChatGPT generated in filing #29 (Apr. 25), pursuant to the court's April 11 order, which was issued because of defendant's suspicions. Schwartz apparently did not attempt to verify the existence of these opinions through Lexis or Westlaw. He also admitted in his April 25 affidavit that he couldn't find Zicherman.
Thanks. That clarifies things a lot. And you are correct that I misunderstood the sequence of events. But I reiterate that I think Adler got it wrong too, when he writes, "And when the lawyer asked ChatGPT, 'Are the other cases you provided fake[?],' the model denied it." THAT exchange is in exhibit #32 (May 25, 2023) and is a challenge to ChatGPT about already-known fakery, not something being presented to the court as evidence that the cases were not fake, which is clearly Adler's implication, as shown by his rebuke immediately following.
Or am I still getting egg on my face somehow?
I also should look at the other thread to see if I owe apologies there.
Well, you're confusing Baker and Adler.
Agreed. But that leaves my point unrebutted, so I am unconcerned.
Yeah, #29, dated April 25. #30 is opposing counsel's letter notifying the judge of the fakery, dated the next day. https://www.courtlistener.com/docket/63107798/mata-v-avianca-inc/
I understated my certainty that Baker is getting this wrong. I should have said 100%.
Baker attempts to link to the Volokh post twice, but the “post on the story” link goes to an only tangentially related article.
Yesterday, in response to a news article, I questioned Bard about an EU individual who had been on the FBI most wanted list for many years, and Bard responded that said person was caught, convicted, and sentenced in the US some years ago, even offering dates. Completely false. I hope more info becomes available about how these fabrications come about. I have been following this story for many years, and there is no public record of the person entering the US, nor of any interaction with the US justice system.
I thought places like Crivella West obviated any need to use AI.
IF I am right, then the problem is 'culpable ignorance.' I am not affiliated in any way with that company and use them as an example.
I don’t know whether the use of “existential” to modify “risk” or “threat” ever serves a purpose, but it sure doesn’t in this headline.
It certainly served the purpose, albeit without apparent effect, of warning the US not to extend NATO membership to the Ukraine.
To put this in Stewart Baker's terms, the ChatGPT lawyer asking for case authority is analogous to the Bush administration asking the intelligence community for information to support war with Iraq. They generated exactly what was needed, even though it was entirely false.
Adler starts off using the term "elites" before migrating to "meritocracy" and "the knowledge economy," and I wish to offer a brief demurral. These so-called "elites" are not elite in any discernible way except in their ability to exercise power. Perhaps use "overclass"?
Baker, not Adler.