The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
No Preliminary Injunction in Challenge to Minnesota Election Deepfake Law, but Challenge Goes On
From Kohls v. Ellison, decided today by Eighth Circuit Chief Judge Steven Colloton, joined by Judges James Loken and Duane Benton:
In May 2023, the Minnesota legislature enacted a law regulating deep fakes. The relevant text reads as follows:
A person who disseminates a deep fake or enters into a contract or other agreement to disseminate a deep fake is guilty … if the person knows or reasonably should know that the item being disseminated is a deep fake and dissemination:
(1) takes place within 90 days before an election;
(2) is made without the consent of the depicted individual; and
(3) is made with the intent to injure a candidate or influence the result of an election….
A deep fake is defined as "any video recording, motion-picture film, sound recording, electronic image, or photograph, or any technological representation of speech or conduct … that is so realistic that a reasonable person would believe it depicts speech or conduct of an individual who did not in fact engage in such speech or conduct." … [A later] amendment expanded the scope of the prohibition to include the periods "within 90 days before a political party nominating convention" or "after the start of the absentee voting period." As amended, the statute further provides that a state or local candidate who violates the law must forfeit any nomination or elected office, and is disqualified from any future appointment to office.
Kohls is a political commentator who produces and publishes parodies on social media. On July 26, 2024, Kohls broadcast on the YouTube website a video generated by artificial intelligence that depicted a likeness of Vice President Harris making statements that she never made. The video was labeled as "PARODY" and included a disclaimer stating that "[s]ounds or visuals were significantly edited or digitally generated." Elon Musk shared the video on the "X" social networking service but did not convey that the video was a parody or was generated artificially. Franson, a member of the Minnesota state legislature, shared Musk's post on her own "X" account. She, too, did not communicate that the video was a parody.
The district court rejected Kohls' and Franson's request for a preliminary injunction, and the Eighth Circuit upheld the denial. It agreed with the district court that Kohls lacked standing:
To qualify as a deep fake under the statute, a video must be "so realistic that a reasonable person would believe it depicts speech or conduct of an individual who did not in fact engage in such speech or conduct." By labeling his videos as parody, however, Kohls communicates that statements in the videos "cannot reasonably [be] interpreted as stating actual facts." Kohls's videos, labeled as parodies, are not deep fakes under the statute, so he is not injured by any threat of enforcement….
Kohls further maintains that he is injured by the threat of enforcement against third parties. His verified complaint alleges that "[t]he chilling effect and enforcement of the law will dissuade others from sharing his content and preclude him from earning a living, which he currently does via monetization of his content on YouTube and X."
"Where traceability and redressability depend on the conduct of a third party not before the court, 'standing is not precluded, but it is ordinarily substantially more difficult to establish.'" A plaintiff must show that third parties will act "in such manner as to produce causation and permit redressability of injury." A permissible theory of standing "does not rest on mere speculation about the decisions of third parties; it relies instead on the predictable effect of Government action on the decisions of third parties."
The record is insufficient to establish standing on this theory. The evidence showed that Kohls's video of July 26 depicting a likeness of Vice President Harris "was retweeted over 240 thousand times." Kohls did not present evidence that others were deterred from sharing the video or that he lost income from any such reduction in sharing. A plaintiff cannot establish standing based merely on an unsupported assumption that some users of YouTube or X might decline to share a video because of the Minnesota statute.
The court also agreed with the district court that Franson did have standing, because she "reshared Kohls's video depicting Vice President Harris without any accompanying label or disclaimer," which "was arguably proscribed by § 609.771, because a reasonable person could believe that the video depicted actual speech or conduct of Harris in which she never engaged." But it concluded that the district court didn't abuse its discretion in denying Franson a preliminary injunction, because of Franson's delay in suing:
A plaintiff's delay may justify denial of preliminary injunctive relief where it is not adequately explained, and Franson has not provided a sufficient explanation for her sixteen-month delay.
On appeal, Franson argues that she really did not delay for sixteen months because the legislature amended the 2023 statute in July 2024. The 2024 amendments, however, altered only the timing and penalty provisions of the statute. They did not change the statute's basic prohibition on dissemination of deep fake videos, the enforcement of which Franson seeks to enjoin. Franson's proposed conduct was arguably proscribed and subject to criminal penalties ever since May 2023. She voted for the 2023 law as a member of the legislature and possessed the complete factual predicate of her suit at that time. The district court found that Franson disseminated content "as early as 2021" that arguably would be proscribed by the statute. The court did not err in finding that nothing about the 2024 amendments changed the basis for Franson's alleged fear that her speech would be punished under the statute.
Franson disputes this conclusion on the ground that there were no federal elections in 2023 about which she sought to disseminate deep fake content. But there were state and local elections during 2023. And Franson, a member of the Minnesota legislature, alleged without limitation that it was her practice to disseminate "videos featuring the likeness of real politicians for comedic or satirical effect" to her "constituents, colleagues, and ideological allies." The district court did not err in construing Franson's claim to encompass content concerning state or local elections….
The district court entered its order on preliminary relief in January 2025, and the case likely could have proceeded toward a final resolution on the merits in the year that has elapsed since. But Kohls and Franson appealed the order denying preliminary relief, and then stipulated that the case should be stayed in the district court pending disposition of this appeal. At oral argument, counsel explained that the plaintiffs sought "guidance" from the court of appeals on the merits of their constitutional claims. This court, however, does not sit to dispense guidance on matters that are unnecessary to a decision.
We conclude here only that the district court did not abuse its discretion in denying extraordinary preliminary relief in light of the delay in bringing the request. The purpose of interim equitable relief, where appropriate, is to balance the equities as the litigation moves forward, and the district court did not abuse its discretion in concluding that Franson's delay weighed definitively against her request.
In further proceedings, the district court may consider any additional evidence regarding whether Kohls has standing under Article III and, if there is standing, whether he is entitled to relief on the merits. The court also may address whether Franson is entitled to permanent injunctive relief on the merits, despite her delay in seeking preliminary relief, under the different considerations that apply to the question of permanent relief.
Peter J. Farrell of the Minnesota Attorney General's office argued on behalf of the state.
The core value of democracy is the consent of the people through their ability to vote for candidates that will represent them best. Deepfakes cause great harm to this process by confusing the people into voting for people whom they might otherwise vote against. All of our civil rights have some limits, and for this one the damage to democracy is just too high.
Example: You are pro 2A. You get most of your political info from social media (dumb, but many do). Candidate A produces deepfake videos of Candidate B bashing the 2A. You vote for Candidate A. However, in reality Candidate A is anti-2A. To me this is a grave assault on democracy.
re: "Deepfakes cause great harm to this process"
You've claimed this before without providing any actual evidence to support that claim. So far as I've seen, you've also failed to respond to the considerable counter-evidence showing that voters are not that stupid and that these "deepfakes" are no more harmful than photoshopped pictures, manually-edited pictures, foreign social media "influencers", selectively edited radio quotes, outright misquotes or any of the other speech-based "threats to democracy" over the last two centuries.
Voters are not in fact stupid. Voters are generally quite capable of seeing through these 'fakes'. A minority of voters might be fooled but readers are most likely to fall for a particular fake when it reinforces their priors - and thus the fake is not going to change their planned vote.
The core value of democracy is the consent of the people through their ability to publicly criticize office holders and candidates, including via ridicule. This is obviously the type of law that will be disparately applied against dissidents.
Rossami — That gets so much wrong it is hard to know where to begin. Not only are typical voters incapable of identifying deep fakes, but neither are you, nor are experts, except sometimes experts equipped with forensic apparatus.
Perhaps more to the point, you are mistaken to assume that identifying reliable information on sight is the method publishers themselves rely upon. That is not how it works.
Publishers understand that it is impossible to make consistently reliable decisions about the veracity of media if all they have to rely upon is what a consumer of the media can see. So that is not how publishers make those decisions. Instead, publishers vet their sources with care, then make reliability decisions based on a pre-evaluated provenance for each item. No reliable provenance for an item means no publication, even for items that may well be genuine.
Media consumers are rarely in a position to do any kind of comparable evaluation. Like you, like me, or like the editors of the media we all consume, if we choose to trust media content that has not been filtered on the basis of the reliability of its sources, we are all destined to be deceived.
The significance of deep fake AI is not that it opens new possibilities for deception—although it does. The significance is that deep fake AI creates essentially effortless capacity to deceive at near zero cost, and puts that capacity in the hands of everyone, however motivated.
That is not a challenge the public life of a great nation founded on a principle of popular self-governance can likely cope with, except through a framework of law to control it. The right focus for concern thus becomes choosing a legal framework that assures controls are widely distributed among the citizens themselves, with as little possibility of government meddling as possible.
"By labeling his videos as parody"
Seems to me the court adopted a narrowing construction to make the law more likely constitutional.