No § 230 Immunity for Meta's AI-Generated Ads
From Tuesday's decision by Chief Judge Richard Seeborg (N.D. Cal.) in Bouck v. Meta Platforms, Inc.:
This case is the latest installment in an expanding genre: suits against social media companies for participating in the creation and promotion of fraudulent advertisements. Plaintiffs here are victims of a pump-and-dump scheme involving shares of a Chinese penny stock, China Liberal Education Holdings Ltd. ("CLEU"). The scammers targeted Plaintiffs on Facebook and Instagram (both Meta products) through advertisements for investment groups promising handsome returns. When a plaintiff clicked on the ad, he was led to a group on WhatsApp (another Meta product) wherein the scammers would persuade the plaintiff to purchase CLEU shares. Those shares ended up being nearly worthless.
Plaintiffs sued for, among other things, aiding and abetting fraud and negligence, and the court rejected Meta's attempt to get the case dismissed on § 230 grounds:
Section 230(c)(1) of the Communications Decency Act provides that "[no] provider … of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." … The statute defines the term "information content provider" to "mean[ ] any person or entity that is responsible, in whole or in part, for the creation or development of information provided through … any other interactive computer service." Therefore, if Meta was sufficiently involved in the "creation or development" of the fraudulent ads, then those ads were not just "provided by" the scammers—they were also provided by Meta….
Plaintiffs aver that Meta contributed materially to the fraudulent ads through three tools offered in its Ads Manager suite. The first is called "Flexible Format." Plaintiffs explain that through Flexible Format, "Meta automatically optimizes the ad and shows it in the format that Meta predicts may perform best" by "selecting the specific images and other content that will be included, the layout, the platform (Facebook or Instagram), and how the ad will be displayed to a particular user (e.g., in the user's feed, as a story, etc.)."
The second tool is called "Dynamic Creative." Dynamic Creative "takes multiple media, such as images and videos, and multiple ad components, such as images, videos, text, audio, and calls-to-action, and then mixes and matches them in new ways to improve … ad performance." In that way, "[i]t allows the advertiser to automatically create personalized creative variations for each person who views the ad, with results that are scalable."
The third tool is called "Advantage+ Creative." Advantage+ Creative uses generative AI to apply "creative enhancements" to optimize advertisements. These "enhancements" include AI-generated text and images, which alter the contents of the advertisements to improve performance. "The alterations may include modifications to images (such as applying different text overlays or modifying the image background), generating variations of the ad's text to target different audiences, and inserting 'Call to Action' buttons, such as a link to purchase a product or join a WhatsApp group." According to the complaint, "[t]he CLEU scammers used these advertising tools to deploy an array of advertisements that were optimized to target a range of different Facebook and Instagram users" including "at least 86 different variations of ads featuring Ms. Subramanian."
These averments, taken as true, evidence a fact dispute over whether Meta "contribute[d] materially to the alleged illegality of the advertisements." The alleged illegality stems from the advertisements' content—i.e., the false statements made to Facebook and Instagram users that induced them to click on the ads. Plaintiffs have averred that Meta participated in the construction of the ads by literally generating, using artificial intelligence, the images and text in the advertisements. That degree of participation is not protected by section 230.
Courts in this district have reached the same result on comparable facts. In Forrest v. Meta Platforms, Inc. (N.D. Cal. 2024), a prominent businessman sued Meta for its role in creating advertisements in which scammers impersonated him endorsing sham cryptocurrency investments…. The district court … conclud[ed] that the plaintiff had raised a "quintessential factual disagreement" concerning Meta's role in the creation of the ads by averring that Meta "drives and ultimately determines what the completed, paid-for ads will look like" and offers generative AI tools that "automatically optimize[ ] ads to versions the audience is more likely to interact with."
If anything, Plaintiffs' averments are stronger here. The district court in Forrest accepted that optimizing the appearance of an ad to drive engagement was enough of a contribution to the ads' illegality to preclude section 230 immunity. Here, in addition to averring facts which, if proven, would establish that Meta altered the ads' appearance to maximize impressions, Plaintiffs have averred that Meta's tools allowed the scammers to produce "AI-generated text and images" for use in the ads through its Advantage+ Creative tool. That is more than enough to aver "that the tools affect ad content in a manner that could at least potentially contribute to their illegality."
Meta contends that these tools are "neutral" and that while they offer a menu of options to advertisers, the offending content was exclusively provided by the scammers…. [But] Plaintiffs have averred that Meta created the offending information by generating some of the false statements that tricked them into the investment scheme…. Plaintiffs aver that the scammers used Meta's Advantage+ Creative tool which, as explained, uses artificial intelligence to enhance whatever message the user inputs. If a user, for example, tells the tool that he is interested in an ad promising astronomical weekly investment returns, Advantage+ Creative will spin up a slew of ads that include the provided language and other language, images, and videos it decides will be effective in promoting the user's chosen message.
In fact, a journalist from Reuters ran an experiment in which he told Advantage+ Creative that he wanted an ad asking users if they were "interested in making 10% weekly returns." Advantage+ Creative generated a slew of ads saying just that and new ads with language like "Tired of living paycheck to paycheck? Break the cycle and start earning steady weekly income with our proven system." The reporter did not come up with that (patently fraudulent) language; it was all Meta.
Because the complaint avers that the scam CLEU ads were created using these tools, it is at least plausible that some of the illegal content (i.e., the fraudulent statements in the ads) was created by Meta, not by the scammers. Without question, Advantage+ Creative and the other tools in Meta's advertising suite would not have come up with that language without the inspiration from the scammers, but that language is still the creation of Meta.
At bottom, the question in this case—as in most section 230 cases—is whether Plaintiffs are attempting to hold Meta vicariously liable for the actions of its users. They are not. They do not aver that Meta "passively acquiesc[ed]" to the fraud; they allege Meta worked with the scammers to gin up the offending posts. If those averments are borne out by the evidence, it will be enough to disrobe Meta of section 230 immunity.
The court also held that plaintiffs stated a claim as to aiding and abetting fraud:
"California has adopted the common law rule that [l]iability may … be imposed on one who aids and abets the commission of an intentional tort if the person … knows the other's conduct constitutes a breach of a duty and gives substantial assistance or encouragement to the other to so act." … [Plaintiffs argue] that when Meta saw the ads in its ad review process, Meta acquired actual knowledge of their fraudulence. To be sure, in many cases a defendant could not be charged with actual knowledge of fraud simply because the fraud passed through a routine review process. For that reason, many cases arising in the financial fraud context have required a plaintiff bringing an aiding and abetting claim to show that the defendant had some extra knowledge about the primary fraudster in order to create an inference that the defendant knew of the fraud and passed it through the review process anyways.
Here, by contrast, no extra knowledge is required. That is because the advertisements are facially ridiculous. Take just one example from the complaint:
[Image of the ad, reproduced in the opinion, omitted.]
That is Savita Subramanian, one of Wall Street's most respected market observers, purporting to offer stock tips in a WhatsApp group. Though Ms. Subramanian is employed by Bank of America, the trading training is being promoted by something called "AI Investment." She is advertising daily potential returns that are roughly three to four times the average annual return of U.S. equity markets, all for free. Even a cursory look would warrant suspicion that the ad is fraudulent. Meta cannot, with a straight face, claim otherwise. If Plaintiffs succeed in convincing a jury that this ad (and others that are equally preposterous) passed Meta's ad review process, the jury would be entitled to infer that Meta had actual knowledge of the fraud at the time the ads went out to its users.
Meta's response to this theory of knowledge is confounding. It claims that it was not aware of the nature and content of the ads (or at least that Plaintiffs did not aver that it was) because its ad review process "rel[ies] heavily on automated technological systems" and "may not detect all policy violations." Yet Meta does not explain why that matters. It was Meta's decision to use technological review tools to screen ads, and it does not now get to claim it had no idea what was going on because it tasked some software program with doing the first pass.
In any event, Meta plausibly acquired knowledge that it was aiding and abetting a fraud well before the ad passed through a review system. As explained, Plaintiffs have plausibly averred that the scammers used Meta's generative-AI tools for advertisers to perpetrate the fraud. At the moment a scammer asked Advantage+ Creative to generate an ad using a celebrity, a secret chat room, and the promise of unfathomable riches, there is at least a fact question on whether Meta acquired knowledge that it was aiding and abetting a fraud….
And the court allowed plaintiffs' negligence claim to go forward, despite the economic loss rule:
"The economic loss rule provides that 'there is no recovery in tort for negligently inflicted "purely economic losses," meaning financial harm unaccompanied by physical or property damage.'" "It applies when 'the parties are in contractual privity and the plaintiff's claim arises from the contract (in other words, the claim is not independent of the contract).'"
However, as explained, the contract between the parties does not cover this course of conduct. Indeed, it does not impose on Meta any obligations to police conduct on its platforms as consideration for its users' contractual promises. Therefore, permitting a tort action would not cause the law of contract and the law of tort to dissolve into each other—the prevention of which is the stated rationale of the economic loss rule.
The economic loss analysis seems wrong to me, since S. Cal. Gas Leak Cases (Cal. 2019) makes clear that in California the economic loss rule applies even to parties who have no contract with each other. But maybe I'm mistaken; I'd love to hear what others think.
