Hawaii Deceptive Election-Related Deepfake Disclaimer Requirement Struck Down,
in a lawsuit brought by the Babylon Bee.
Judge Shanlyn Park's order yesterday in Babylon Bee, LLC v. Lopez (D. Haw.), held unconstitutional Hawaii's Act 191. That law provided that "no person shall recklessly distribute… materially deceptive media in reckless disregard of the risk of harming the reputation or electoral prospects of a candidate in an election or changing the voting behavior of voters in an election." "Materially deceptive media" is defined as "[a]ny information, including any video, image, or audio, that"
- Is an advertisement;
- Depicts an individual engaging in speech or conduct in which the depicted individual did not in fact engage;
- Would cause a reasonable viewer or listener to believe that the depicted individual engaged in the speech or conduct depicted; and
- Was created by [certain digital technologies].
"Advertisement" is in turn defined as "any communication, excluding sundry items such as bumper stickers, that"
- Identifies a candidate directly or by implication, or identifies an issue or question that will appear on the ballot at the next applicable election; and
- Advocates or supports the nomination, opposition, or election of the candidate, or advocates the passage or defeat of the issue or question on the ballot.
The law provides a safe harbor for people who distribute material that "includes a disclaimer informing the viewer that the media has been manipulated by technical means and depicts appearance, speech, or conduct that did not occur." But for video and images, the disclaimer must, among other things (and to simplify slightly),
- Appear throughout the entirety of the video [for videos];
- Be in letters at least as large as the largest size of any text in the communication.
For pure audio, the disclaimer must be read "[a]t the beginning and end of the media in a clearly spoken manner."
Also,
If the media was generated by editing or creating new media from an existing video, image, or audio, the media shall include a citation directing the viewer or listener to the original sources from which the unedited version of the existing videos, images, or audios were obtained or generated.
These restrictions, which carry criminal and civil penalties and also authorize private lawsuits, apply "between the first working day of February in every even-numbered year through the next general election."
The court concluded that this was a content-based restriction on speech that didn't fit within any First Amendment exception:
[T]he Supreme Court has "reject[ed] the notion that false speech should be in a general category that is presumptively unprotected." U.S. v. Alvarez (2012). Instead, it has permitted restrictions on the content of speech in a "few historic and traditional categories [of expression] long familiar to the bar." Among these categories of unprotected speech are defamation and fraud. However, unlike defamation and fraud, which typically require a showing of actual or tangible harm, Act 191 goes further to prohibit the distribution of materially deceptive media "in reckless disregard of the risk of harming the reputation or electoral prospects of a candidate in an election[.]" By its plain language, Act 191 extends beyond those traditional categories of expression, requiring only a speculative and unquantifiable "risk" of harm.
The law therefore had to pass strict scrutiny—i.e., had to be "narrowly tailored to serve a compelling state interest"—and the court concluded that the law wasn't narrowly tailored:
To be narrowly tailored, a "curtailment of free speech must be actually necessary to the solution." "If a less restrictive alternative would serve the Government's purpose, the legislature must use that alternative." …
Here, State Defendants do not contest that less restrictive, speech-neutral alternatives exist, only that such alternatives would be "less effective" than Act 191. The legislative history of Act 191 does not indicate whether the Legislature considered less restrictive alternatives in enacting Act 191. Instead, the parties rely, in large part, on evidence in the form of vying expert declarations to support their respective positions. Both parties' experts identify counter speech and increased digital and political literacy as potential alternatives to mitigating the impacts of political deepfakes, with differing takes on their efficacy.
With respect to counter speech as a less restrictive alternative, Plaintiffs argue that Hawai'i "could counter deceptive speech with factual speech of its own," or it could start a government database or committee dedicated to tracking and flagging materially deceptive content. The parties' experts offer competing opinions with respect to the efficacy of counter speech as a solution. While State Defendants' expert explains that political deepfakes are "sticky," "highly realistic," and can spread too quickly for counter speech to be effective post-dissemination, Plaintiffs' expert counters that "the arguments made against political deepfakes (that they are convincing, are sticky, and spread quickly) also apply to written misinformation," making political deepfakes nonunique from other forms of misinformation, and that studies indicate that counter speech in the form of "crowd-sourced fact checking[,] reduces engagement with and diffusion of misinformation and can help identify misinformation at scale." Despite the competing evidence, this Court finds that targeted counter speech appears to be a viable, less restrictive alternative to Act 191 because it serves Hawaii's purpose and would not be overinclusive.
Next, with respect to increased electoral literacy as a less restrictive alternative, Plaintiffs argue that Hawai'i could launch educational campaigns on how to spot deceptive political content. The parties' experts appear to agree that such an alternative would be effective at mitigating the effects of political deepfakes. According to Plaintiffs' expert, "[r]esearch suggests that promoting digital and media literacy, as well as increasing political knowledge, will likely be more effective than bans in mitigating the harms associated with false information spread through political deepfakes."
Despite State Defendants' contention that educational campaigns would be "less effective" than Act 191 due to the nature of political deepfakes, State Defendants' expert agrees that "with strengthened media literacy skills and greater political sophistication, people can be more likely to identify political deepfakes and less likely to believe that they are accurate." State Defendants' expert's only reservation with increased literacy as a viable alternative appears to be that developing such skills in the electorate "would require a larger investment of resources" compared to a ban. Such a reason has been rejected by the Supreme Court for it has made clear that "[t]he First Amendment does not permit the State to sacrifice speech for efficiency." Thus, State Defendants have failed to demonstrate that increasing the digital and political literacy of the electorate through educational campaigns would be less effective than Act 191.
In addition to the less restrictive alternatives identified by the parties' experts, Plaintiffs argue that Hawai'i also has existing laws that it could enforce to protect electoral integrity, or alternatively, that Act 191 could be amended to limit potential plaintiffs to candidates actually harmed by unprotected false speech, thereby more closely mirroring defamation law. With respect to the former alternative, Plaintiffs assert that Hawaii's election fraud law, for example, already regulates the knowing publication and/or distribution of false information about the "withdrawal of a candidate at the election" or "about the time, date, place, or means of voting." Plaintiffs also argue that Hawai'i has additional existing statutory causes of action—such as privacy torts, copyright infringement, or defamation—that already address some of the alleged harms that materially deceptive media pose.
State Defendants' briefing is not directly responsive to these arguments. They, however, concede elsewhere that "much of what Act 191 restricts would also constitute unprotected defamation," which would, in this Court's view, conceivably be covered by the State's existing defamation laws. Because State Defendants have introduced no evidence addressing this issue, the Court finds that they have failed to demonstrate that existing laws are insufficient to deal with the purported risk of political deepfakes and generative AI technologies on the integrity of Hawai'i elections. Altogether, this Court concludes that Act 191 fails narrow tailoring.
And the court concluded that Act 191 was also unconstitutionally vague:
At its core, Act 191 prohibits the distribution of "materially deceptive media in reckless disregard of the risk of harming the reputation or electoral prospects of a candidate in an election or changing the voting behavior of voters in an election." The consequences of imposing a vague standard are two-fold. First, Act 191's "reckless disregard of the risk of harming" or "changing" standard muddies the line between compliance and noncompliance by forcing speakers to base their conduct on their own risk assessment, rather than on clear, objective standards.
Second, Act 191 introduces an inherently subjective assessment for enforcement agencies. Rather than require actual harm, Act 191 imposes a risk assessment based solely on the value judgments and biases of the enforcement agency—which could conceivably lead to discretionary and targeted enforcement that discriminates based on viewpoint. In this case, the ultimate consequence of indeterminate compliance lines and the risk of discriminatory enforcement is a chilling effect on First Amendment speech.
Mathew W. Hoffmann and Philip A. Sechler (Alliance Defending Freedom) and Shawn A. Luiz represent the Babylon Bee.