Don't Let Disney Monopolize A.I.-Generated Art

The indie artists suing Stable Diffusion may not realize it, but they're doing the Mouse's dirty work.


Disney and the rest of Hollywood have been eerily quiet about the launch of Stable Diffusion, despite the fact that this open-source A.I. software will happily spit out high-quality images of iconic, copyrighted characters from comics, cartoons, and movies in response to text prompts.

But this doesn't mean they're sitting idly by. There's a legal battle taking shape, though it doesn't yet involve any of the larger players—at least not that we can see publicly. The first big lawsuit against Stability AI, the company that makes Stable Diffusion, is fronted by friendly "indie artist" faces who are put forth on a very slick website as fighters for a fair shake for independent creatives. Their suit takes direct legal aim at the core of how generative A.I. works.

Now Getty Images has joined the fray too, with a lawsuit against Stability AI that it told The Verge is mainly about seeking legal clarification and not so much about damages.

The mouse that ate the public domain is watching carefully, because what's at stake for it is existential.

To lay it out in terms that sound sci-fi but definitely are not: A future version of Stable Diffusion won't merely be a replacement for some artists whose work you like; it will be able to replace entire studios and intellectual-property shops like Disney, Pixar, and Marvel. You'll have a movie studio's worth of creative and technical talent on your laptop, and it can keep you endlessly entertained with your favorite characters and worlds without ever sending a dollar to the owner of those characters' copyrights.

Here's the even bigger kicker: Under current law, none of this would be obviously illegal.

A Glimpse of the Future

It's easy to misunderstand the stakes in these fights if you think about generative A.I. solely in terms of who can do what with images of Mickey Mouse, Spider-Man, and other copyrighted characters and settings. Even if you've been following generative A.I. closely enough to understand that a key part of the fight is over the software's so-called style transfer abilities—the ability to mimic a particular artist's style well enough to produce an endless stream of novel works in it—you're still pulling on a single thread of a much vaster tapestry.

The potential risk to Disney and other large intellectual property holders is far graver than simply a flood of new user-generated images, memes, and video clips that are derivative of their copyrighted works and thus arguably reduce the value of the genuine article.

To see why, let's game this out.

Right now, if I give you a prompt, some parameters, and a seed number, you and I can both independently use Stable Diffusion to generate the exact same image, pixel for pixel. On a practical level, this means that with just a little bit of text, I can effectively "transmit" a very large image file to you. You might even say that if I've published that text (the prompt + seed + Stable Diffusion settings), I've published that image.
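To make the determinism concrete, here's a toy sketch—not Stable Diffusion itself, just a stand-in that illustrates how a seeded pseudorandom generator makes the output a pure function of its inputs. Every name here (the function, the 8×8 "image," the crude prompt hashing) is illustrative, with NumPy playing the role of the model's noise sampler:

```python
import numpy as np

# Toy stand-in for a diffusion sampler. The point is that the output
# is a pure function of (prompt, seed), so publishing those inputs is
# as good as publishing the image itself.
def generate_image(prompt: str, seed: int) -> np.ndarray:
    rng = np.random.default_rng(seed)        # seeded noise source
    noise = rng.standard_normal((8, 8))      # the "initial latent"
    prompt_bias = (sum(map(ord, prompt)) % 100) / 100.0  # crude prompt conditioning
    return noise + prompt_bias

a = generate_image("mouse piloting a steamboat", seed=42)
b = generate_image("mouse piloting a steamboat", seed=42)
assert np.array_equal(a, b)  # same recipe, pixel-identical output
```

Real diffusion pipelines work the same way in this one respect: fix the seed, the prompt, and the sampler settings, and the result is reproducible bit for bit on the same model and hardware.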

Now let's fast forward a few years to when full-scale text-to-video is in full flower. In this world, a single text prompt and seed combination might go into a ChatGPT clone, which would then produce a script, which would then get fed into a script-to-animation model, which would then produce a video.

If we both had access to the models required to make this work—let's say all the relevant models are fully open-source, like Stable Diffusion—then I could "publish" a feature-length cartoon by publishing an initial text prompt and seed combination along with the relevant workflow details and model settings. Anyone who had that text information and access to the models could then watch my cartoon.

Now imagine that this cartoon stars Mickey Mouse.

Is Disney going to sue me for publishing a few-hundred-character text prompt, an integer, and a handful of key/value pairs specifying model settings and workflow? That would be pretty absurd, even by Disney standards. It's hard to see the courts going along with it or anyone being able to enforce it if they did. We're talking about an amount of text that's probably so small I could circulate it as a Notepad.exe screen cap.
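For a sense of just how little text we're talking about, a "published cartoon" in this scenario might amount to nothing more than a recipe file like the following. Every field name and value here is hypothetical—there is no such format today—but something of this shape is all the hypothetical workflow would need:

```json
{
  "model": "some-open-video-model-v3",
  "prompt": "A cheerful mouse pilots a steamboat down a river at dawn",
  "seed": 1928,
  "steps": 30,
  "guidance_scale": 7.5,
  "pipeline": ["script-gen", "storyboard", "script-to-animation"]
}
```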

At the point that the above is feasible—a point that's coming far sooner than you can imagine—it's game over for Disney's ability to maintain its status as the world's exclusive provider of novel Mickey Mouse content at scale. (Before you object that Disney's in-house work will stand out for the quality and creativity of the writing, I invite you to watch any random episode of Mickey Mouse Clubhouse and ask yourself how much quality or creativity the typical mid-level Mickey project gets on Disney's watch.) We'll all be able to get into the Mickey business just by passing some text and JSON around and feeding them into models that are open-source and widely available.

Derivative Works

This scenario—a prompt, a seed, and some parameters go to a set of A.I. models, and a movie comes out the other end—is not obviously illegal under current law. It should be legally the same as you creating a picture of Mickey Mouse using Procreate or Photoshop and then hanging it in your bedroom: just a private, personal, noncommercial creation of some derivative works viewed only by the creator and not stored, transmitted, published, distributed, or profited from in any way.

In short, if I make a brand new picture (or video, 3D rendering, audio file, graphic novel, etc.) of Mickey Mouse on my laptop, and it stays on my laptop, then it really doesn't seem possible to argue that I've broken any laws, no matter what software I've used to create that picture.

So the legal arguments in the Stable Diffusion lawsuit go to great lengths to frame this scenario as illegal. This isn't the place to go into the plaintiffs' argument in detail, but the core of it is that the Stable Diffusion model weights file—the file that contains the trained neural network that Stable Diffusion uses to generate images—is itself a derivative work of all the billions of images in the training data, and that Stability AI has profited from this work without compensating the training data's copyright holders.

There are a lot of other moving parts to the technical and legal arguments laid out in the class action complaint, but this one, which pops up early in the document on page 3, is the most deadly to the entire project of generative A.I.: "'Derivative Work' as used herein refers to the output of AI Image Products as well as the AI Image Products themselves—which contain compressed copies of the copyrighted works they were trained on" (emphasis added).

If they can make this charge stick, then they have a shot at forcing tech platforms to give these model files the same treatment they currently give other forms of digital contraband: child pornography, pirated movies, cracked software, malware, 3D-printed gun files, and so on. It would be very hard to host such a model file publicly, and anyone caught doing it could expect a takedown notice.

This, then, would be the end of generative A.I.—or, at least, of generative A.I. in any kind of decentralized form. Closed companies and models like OpenAI would still exist, because they're centralized, controllable, censorable, and willing and able to carefully filter what their users can and can't do with their products. Here in the U.S., and probably in any country with compatible copyright laws, all generative A.I. models would be locked safely behind APIs, and innovation in the field would slow down dramatically.

Disney would be able to work with Microsoft, OpenAI, Google, and other big tech platforms to use these large, closed-source models to replace the teams of artists they currently employ. They could fire most of their talent and replace them with A.I. that cheaply generates infinite new content from their vast catalog of existing intellectual property. Meanwhile, independent creators and noncommercial users would be prevented from using these same software tools to compete with the Disneys of the world—they'd be stuck in the era of making art the old-fashioned way.

That seems to be what the plaintiffs in this suit want. But I don't think they fully realize what it would mean for them if they left all the generative A.I. solely to Big Content.