The Volokh Conspiracy
AI-Generated Porn … Litigation Filings (from a Prominent Plaintiffs' Class Action Firm in Lawsuit Against OnlyFans)
From a Skadden Arps filing last week in N.Z. v. Fenix Int'l Ltd., a class action against OnlyFans:
Plaintiffs' Opposition … cites 18 cases in attempting to argue that the Court should decline to partially reconsider its FNC [forum non conveniens] Order, or certify an interlocutory appeal, in response to the California Supreme Court's recent decision in EpicentRx, Inc. v. Superior Court (Cal. 2025). As discussed in detail below and in the accompanying Declaration of Or-el S. Vaknin, Plaintiffs attributed false, AI-hallucinated quotations or holdings to at least 11 of those cases. This is the third distinct filing over a monthlong period in which Plaintiffs have used non-existent quotations to attempt to defeat Fenix's requests for relief. This pattern of submitting false, AI-generated law is an "abuse of the judicial system" that harasses Fenix and wastes the Court's time and resources. It must be stopped. The Court should disregard Plaintiffs' latest tainted efforts and grant the Motion.
And another filing:
Although Plaintiffs had two months to craft their 11,515-word brief, they were evidently unable to find legitimate legal authorities supporting their arguments. On at least 20 occasions, Plaintiffs' Opposition cites imaginary caselaw, quotes invented language in real cases, summarizes non-existent court holdings and analysis, or responds to arguments Fenix did not make. (See Declaration of Or-el Vaknin (compiling examples).) For example, Plaintiffs ….
Law.com / The Recorder (Kat Black) passes along this statement from "Robert Carey, a Hagens Berman partner based … who is representing the plaintiffs," who "said that the briefs contained 'sections drafted by co-counsel outside our firm.'"
In those sections, quotation marks were improperly placed on the holdings of real cases, and inaccurate statements and citations appeared, including one citation to a case that did not exist. Our review did not catch those errors, and we take responsibility for that oversight. It should not have happened.
Law.com goes on to say:
The co-counsel, according to the statement, was a "Yale Law School graduate and trusted colleague who has provided excellent work for over a decade"—and was navigating an "intense family crisis" with her father, who had entered hospice earlier this summer after a long-term battle with Parkinson's disease and other medical conditions.
And it quotes more from the Carey statement:
While managing his care from afar, she did not share with us how overwhelming the situation had become. Under that strain, she turned to an AI tool to help polish her drafts, not realizing that in doing so, the tool had introduced or altered citations and text. Because she did not alert us to her situation, and her material arrived late in the process, our usual review protocols could not be fully applied. She will demonstrate to the court that the flawed content was preceded by careful and responsible research, writing and development of arguments.
Law.com also quotes more from a separate statement by Carey in a phone interview, in which Carey also said "the plaintiffs' legal team will request to file corrective briefs that remove inappropriate quotations and statements and explain what happened to the court":
It's a mistake. It shouldn't have happened. It's our responsibility to make sure briefs are right no matter who puts the sections together. But it was a little difficult when it's a person we've had a longstanding relationship with, and she's a high-level lawyer and brief writer, and she is helping us finalize the briefs that we just didn't think we needed to check her work….
And so, lesson learned. We apologize to our opposing counsel … and we're going to apologize to the court and see what we need to do to make it right. But our firm has filed thousands and thousands of high-level briefs, and this is not our practice. If we knew what was going on, we would've stopped it—but again, a lesson learned.
The underlying allegations, by the way, also related to alleged falsehoods:
OnlyFans, a social media platform known almost exclusively for hosting sexually oriented content, has hundreds of millions of users (called "Fans") who pay for the privilege of communicating directly with specific people who post content on the platform (called "Creators") on a personal (indeed, often an intimate and/or romantic) level. But instead of interacting with a specific Creator, Fans end up—unknowingly and without their consent—communicating with professional "chatters" hired to impersonate that Creator in order to convince Fans to spend even more money on the platform.
Chatters are often hired by self-styled "management agencies" operating OnlyFans accounts on behalf of multiple Creators, at the request of and with the consent of the Creators. These agencies hire veritable fleets of Chatters—often from countries like the Philippines and Venezuela, where they can get low-cost, yet well-educated, workers who can convince Fans they are engaged in "authentic" communication with a particular Creator.
In addition to the blatant deception and fraud, the "Chatter Scams" involve massive breaches of confidentiality and privacy violations in which intimate communications and private and/or personal information about Fans—including photos and videos—are distributed and/or accessible to numerous unauthorized parties.
OnlyFans knows about the agencies perpetrating the Chatter Scams; indeed, it has co-hosted events with at least one agency named as a defendant in this Complaint….
Iswydt
Would be interesting to examine the OnlyFans TOS (and creator's contract). Wouldn't surprise me if this is perfectly fine. OF certainly has no reason to complain.
The headline suggested something else was being generated by AI.
Agreed. From the headline, I expected an entirely different case. This one seems much more like the long line of cases that EV blogs about . . . lawyers doing shoddy work, relying on AI and getting caught with their proverbial pants around their ankles. (Or is it, with their pants around their proverbial ankles?)
For some background on the situation, see I Went Undercover as a Secret OnlyFans Chatter. It Wasn’t Pretty by Brendan I. Koerner.
Simps gonna simp.
OnlyFans, a social media platform known almost exclusively for hosting sexually oriented content, has hundreds of millions of users (called "Fans") who pay for the privilege of communicating directly with specific people who post content on the platform
This is a lie now. The fashion is to have a management company handle your interactions, and use a patter optimized for sales.
Every lady undergoes a sea change; suddenly one day her posts switch to all "hey, babe! wat r u up 2 rn?"
At that point, you're talking to a robot or a guy in India or a robot in India, just not the sweet lady you're pay pigging for.
"AI-Generated Porn"
We're waiting for the state of AI-generated child porn to be settled. I think the UK is going to ban it outright, which they can do; no real justification needed. In America, speech restrictions need justification. We are torn between "for the children!" and "it's gross!" The "for the children" justification for banning doesn't apply to fakes. The "it's gross" justification does.
Our jurisprudence on what pornography and speech are permissible has really painted itself into a corner on this issue. Child pornography has always been distinguished from adult pornography on the sole ground that its production constitutes the sexual abuse of children and that its consumption facilitates that abuse.
The draconian punishments for consumption of child pornography make me think that is a pretext. After all, casual drug use indirectly benefits the drug lords selling the drugs, but we've always assigned culpability in a rational way. In this context we hammer the consumers as if they were themselves the perpetrators.
In any event, the whole rationale collapses with AI child porn. There are no child victims. The statement that "it's gross" (although undoubtedly true) has recently been held to not justify bans on certain conduct (see Lawrence v. Texas) let alone a ban on certain speech.
In fact, the very fact that we think speech is gross is a reason why it would be extra protected. If we can ban AI child porn because it is "gross," why couldn't a community ban certain or all types of adult porn on that same rationale? We've said adult porn is protected speech; a court would have to come up with a special good-for-one-ride-only pleading coupon to reach a result banning AI child porn.
I think it will anyway.
" There are no child victims."
This does not stop prosecutions based solely on talking/texting with an adult FBI agent pretending to be a 14-year-old girl. No possible underage sex could result, but it's common.
I predict "it's AI" will fail as a defense, even if the "whole rationale collapses".
That's a mistake of fact which does not excuse the criminal mens rea of the defendant who wanted to have sex with a real 14-year-old girl.
With AI, there is no intent to harm anyone and there is no harm to anyone.
"mistake of fact which does not excuse the criminal mens rea "
Likewise for AI child porn. The defendant wanted to view a child having sex, he was just mistaken on it being real.
People who create or view child porn are going to be punished, no matter if it's AI or real. Legal doctrine will be created/modified as needed.
"The defendant wanted to view a child having sex"
Now you are changing the facts. The hypo was that it was all AI from start to finish. No kids, no intent to harm kids, no intent to view kids. The creation is AI, it is marketed as AI, viewed and intended to be viewed as AI.
Still wanted to view children having sex. AI creates images from composites of actual images.
The point is that the US public will not tolerate child porn no matter how it is created; legislatures will follow, and no court will say otherwise.
You keep repeating that, and I said it in my first post in this thread about it. Courts will try to invent some doctrine to stop it.
But none of the doctrines they have fit at all, and the ones they do have seem to fully protect it. It is speech, and their stated rationale for previously banning it no longer applies. Courts would be forced to become outcome-oriented and create a new rationale.
View artificial, nonexistent children. (That's the argument, at least.)
Here's the rationale.
The percentage of adults who experience at least some sexual attraction to children and adolescents is probably far higher than anyone realizes. Spend a few minutes actually thinking through the conditions under which early hominids were evolving 500,000 years ago: the average life expectancy was 20, and your biological imperative was to reproduce as early and often as possible. So naturally you would be attracted to partners the moment they became fertile, if you even waited that long. It was a matter of survival.
That's no longer the world we live in, but the desires became hard-wired. I would argue that a great many of the toxic behaviors human nature seems prone to have a valid evolutionary basis, if you spend a few minutes thinking about what conditions were like when early humans were evolving.
Most people have been socialized to suppress and not act on those desires, but they haven't gone away.
So the rationale for suppressing child porn, whether real or AI generated, is to provide further incentive to suppress desires that are now toxic. Those desires, if put into practice, harm not just the children involved, but create social harm for other people too. Abused children have higher rates of criminality, substance abuse, and violent tendencies.
Now, since biology plays a role, I think there is room for some compassion for those unable to suppress their desires; not everyone is as strong as we would like them to be. But the higher priority is protecting children, and the rest of society.
But murder sims are a-ok!
I tried to calculate it, but it's impossible. I'm pretty sure I've murdered ten million sentient creatures in simulations, one by one.
And that doesn't count the estimated 10 million soldiers on various Death Stars I've popped over the years. But that's ok because they're the bad guys mass killing others?
Not according to the President of the United States.
And I've also been the tyrannical emperor of the galaxy myself, though I don't recall ordering genocide. Maybe I'm a tyrant with a heart!
One problem at a time.
Yes, I think humanity's love of gratuitous violence, bullying of the weak, and stealing whatever isn't nailed down is also a toxic behavior that harks back to earlier days when humans had to do that to survive. I think the fact that there is no political support to suppress violent simulations the way there is for child pornography simply reflects that the love of gratuitous violence is too deeply ingrained. But I also think it threatens the human race if it continues unchecked.
The argument can be made that violent simulations are a safe environment in which to work out one's violent fantasies, and football is too. And I suppose the same could be said of AI child porn simulations. And maybe so. But please acknowledge that violence, and child sexual abuse, are both behaviors the species would be well rid of.
Could a state ban violent movies and books under your same theory, that we have this evolutionary bias towards violence and the movies and books just add to it?
Sounds like a great rationale to outlaw dangerous speech! It's just a half step from the Egyptian military outlawing satellite dishes, because receiving CNN "without the government to contextualize it" is dangerous!
It's dangerous! Who can be against outlawing dangerous speech?
"But democracy, in the form of the jointly sovereign People, can safely wield tools of tyrants, like censorship!", say tyrants skilled at wielding transient 51% bare majorities, on a continent ruled by tyrants literally still in living memory.
Maybe. The question is going to come down to how high the stakes are. The mere fact that something is dangerous isn't enough to ban it all by itself; it's a question of how dangerous. Is it enough of a danger that a ban justifies invading the liberty of those who wish to partake in it?
At the moment, there's a strong consensus that the harm done by child pornography is enough to justify a flat ban, whether real children were harmed in making it or not. I agree with Bob from Ohio on the practicalities; the legislature will ban AI-generated porn and the courts will uphold the bans because the social consensus against it is just too strong.
There is no such consensus against violent books and movies, though I have heard some on both right and left argue that there should be. It would require a complete sea change in public attitudes for such a ban to be even remotely politically feasible.
But if that sea change happens, and violence occupies the same place in public opinion that child pornography does, then yes, it probably could be.
You make an observation that might be appropriate for a legislature. I'm asking why that makes a difference as a constitutional matter.
When it comes to speech, the fact that more people favor making one type of speech illegal than another, similar type is a good reason why such restrictions are unconstitutional.
I don't disagree with you as a practical matter. Why does the Court find a right to gay marriage but no right to polygamous marriage? The answer is that gay marriage is more in line with realpolitik.
Because the Constitution was written in broad generalities with the idea that the judiciary and the legislature would fill in the blanks, and that social consensus would be part of the equation (though I agree with you not the only part of the equation). That's true of even constitutional provisions that appear to be crystal clear. A 9/11 hijacker is not going to get criminal charges dismissed on First Amendment grounds just because he was engaged in the free exercise of his religion. Neither will a prison inmate be permitted to keep a loaded Uzi in his cell even though that would be keeping and bearing arms.
You and I may disagree on where to draw the line, but there is a line that separates speech that is too destructive and dangerous from First Amendment protection. I think all child pornography is outside the line, whether generated by AI or made with real children. The First Amendment is a broad generality, and both the legislature and the courts have to fill in the blanks. Which is why originalism is completely untenable as a constitutional philosophy. A certain amount of living constitutionalism is inevitable.
Would Justice Stewart recognize AI Porn if he saw it?