Court Throws Out Case After Finding Plaintiffs Submitted Deepfake Videos and Altered Images
The case is Mendones v. Cushman & Wakefield, Inc., decided Sept. 9 by Judge Victoria Kolakowski (Cal. Super. Ct. Alameda County). Here's a short excerpt; the whole opinion (including copied images and detailed analysis) is worth reading:
The Court finds that Plaintiffs violated section 128.7(b) of the Code of Civil Procedure by submitting fabricated evidence in support of their motion for summary judgment….
The Court finds that exhibits 6A and 6C are products of GenAI and do not capture the actual speech and image of Geri Haas. In other words, these exhibits are deepfakes….
While the "person" depicted in exhibits 6A and 6C bears a passing resemblance to the person depicted in exhibit 36, they are not the same person. The accent, cadence, volume, word choice, pauses, gestures, and facial expression, among other characteristics, of the person depicted in exhibit 36 are vastly different from those demonstrated by the "persons" depicted in exhibits 6A and 6C….
The court also found other alterations, and concluded:
The Court finds that a terminating sanction is appropriate. This sanction is proportional to the harm that Plaintiffs' misuse of the Court's processes has caused. A terminating sanction serves the appropriate remedial effect of denying Plaintiffs—and other litigants seeking to make use of GenAI to submit video testimonials—of the ability to further prosecute this action after violating the Court's and the Defendants' trust so egregiously.
Further, a terminating sanction serves the appropriate deterrent effect of showing the public that the Court has zero tolerance with attempting to pass deepfakes as evidence.
This sanction serves the appropriately chilling message to litigants appearing before this Court: Use GenAI in court with great caution.
The plaintiffs were self-represented.
Louisiana Court of Appeal Judge Scott Schlegel ([Sch]Legal Tech) has more, with some warnings for the future; an excerpt:
The Mendones case is a warning shot. It shows the cost of letting AI forgeries seep into the system. The deepfakes in that case were crude enough that the judge could spot them, but the technology has already advanced to the point where many of us would struggle to tell the difference. Thankfully, Louisiana and the Federal Courts [details in Judge Schlegel's post] are beginning to sketch a better path, but we are racing the clock. Because once trust is broken, no amount of technology can put it back together again.
Deepfakes in the courtroom are no longer hypothetical. They are here. And the clock is ticking. The crude fakes of today will soon look primitive, yet they are already capable of wasting judicial resources and undermining trust.
Thanks to the Media Law Resource Center (MLRC) MediaLawDaily for the pointer.
UPDATE: Here's what the judge said to explain why she didn't refer the matter to the prosecutor's office:
The Court finds that referral for criminal prosecution is not appropriate. Plaintiffs' submission of fabricated evidence brings to the Court's mind two Penal Code statutes [concerning perjury and forgery]…. The Court finds that a sanction referring Plaintiffs for criminal prosecution is simultaneously too severe and not sufficiently remedial. The sanction is too severe as even being the subject of a criminal investigation may lead to social repercussions that persist after the criminal proceedings close.
This civil judicial officer does not have the expertise and experience to balance all relevant considerations to determine whether a matter should be referred to the District Attorney for a criminal investigation. At the same time, a referral would do little to address the harm that Plaintiffs have caused in this civil proceeding.