Don't Let Disney Monopolize A.I.-Generated Art
The indie artists suing Stable Diffusion may not realize it, but they're doing the Mouse's dirty work.

Disney and the rest of Hollywood have been eerily quiet about the launch of Stable Diffusion, despite the fact that this open-source A.I. software will happily spit out high-quality images of iconic, copyrighted characters from comics, cartoons, and movies in response to text prompts.
But this doesn't mean they're sitting idly by and doing nothing. There's a legal battle taking shape, though it doesn't yet involve any of the larger players—at least not that we can see publicly. The first big lawsuit against Stability AI, the company that makes Stable Diffusion, is fronted by friendly "indie artist" faces who are put forth on a very slick website as fighters for a fair shake for independent creatives. Their suit takes direct legal aim at the core of how generative A.I. works.
Now Getty Images has joined the fray too, with a lawsuit against Stability AI that it told The Verge is mainly about seeking legal clarification and not so much about damages.
The mouse that ate the public domain is watching carefully, because what's at stake for it is existential.
To lay it out in terms that sound sci-fi but definitely are not: A future version of Stable Diffusion will not be a replacement for some artists whose work you like; it can replace entire studios and intellectual-property shops like Disney, Pixar, and Marvel. You'll have a movie studio's worth of creative and technical talent on your laptop, and it can keep you endlessly entertained with your favorite characters and worlds without ever sending a dollar to the owner of those characters' copyrights.
Here's the even bigger kicker: Under current law, none of this would be obviously illegal.
A Glimpse of the Future
It's easy to misunderstand the stakes in these fights if you think about generative A.I. solely in terms of who can do what with images of Mickey Mouse, Spider-Man, and other copyrighted characters and settings. Even if you've been following generative A.I. closely enough to understand that a key part of the fight is over the software's so-called style transfer abilities—the ability to mimic a particular artist's style well enough to produce an endless stream of novel works in it—you're still pulling on a single thread of a much vaster tapestry.
The potential risk to Disney and other large intellectual property holders is far graver than simply a flood of new user-generated images, memes, and video clips that are derivative of their copyrighted works and thus arguably reduce the value of the genuine article.
To see why, let's game this out.
Right now, if I give you a prompt, some parameters, and a seed number, you and I can both independently use Stable Diffusion to generate the exact same image, pixel for pixel. On a practical level, this means that with just a little bit of text, I can effectively "transmit" a very large image file to you. You might even say that if I've published that text (the prompt + seed + Stable Diffusion settings), I've published that image.
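The determinism described above can be sketched with a toy stand-in for the model. This is not Stable Diffusion itself (the real pipeline would be something like the `diffusers` library with a fixed `torch.Generator` seed); `toy_generate` is a hypothetical function using Python's seeded PRNG to show how (prompt, seed, settings) pins down the output bit-for-bit:

```python
import random

def toy_generate(prompt: str, seed: int, steps: int = 20) -> bytes:
    """Stand-in for a diffusion model: deterministically maps
    (prompt, seed, params) to a blob of 'pixel' bytes."""
    rng = random.Random(repr((prompt, seed, steps)))
    return bytes(rng.randrange(256) for _ in range(64))

# Two independent "users" with the same prompt + seed + settings
# get bit-identical output: the text effectively transmits the image.
a = toy_generate("mouse in red shorts", seed=42)
b = toy_generate("mouse in red shorts", seed=42)
assert a == b

# Change any ingredient and the output changes.
assert toy_generate("mouse in red shorts", seed=43) != a
```

The same property holds for the real model: publish the prompt, seed, and settings, and anyone with the weights can reconstruct the identical image.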
Now let's fast forward a few years to when full-scale text-to-video is in full flower. In this world, a single text prompt and seed combination might go into a ChatGPT clone, which would then produce a script, which would then get fed into a script-to-animation model, which would then produce a video.
If we both had access to the models required to make this work—let's say all the relevant models are fully open-source, like Stable Diffusion—then I could "publish" a feature-length cartoon by publishing an initial text prompt and seed combination along with the relevant workflow details and model settings. Anyone who had that text information and access to the models could then watch my cartoon.
Now imagine that this cartoon stars Mickey Mouse.
Is Disney going to sue me for publishing a few-hundred-character text prompt, an integer, and a handful of key/value pairs specifying model settings and workflow? That would be pretty absurd, even by Disney standards. It's hard to see the courts going along with it or anyone being able to enforce it if they did. We're talking about an amount of text that's probably so small I could circulate it as a Notepad.exe screen cap.
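For scale, such a "publication" might be nothing more than a payload like the one below. Every field name here is invented for illustration; the point is only how few bytes it takes:

```python
import json

# A made-up workflow spec: prompt, seed, and model/workflow settings.
payload = json.dumps({
    "prompt": "feature-length cartoon starring a cheerful mouse, 1930s style",
    "seed": 1068973921,
    "models": {"script": "llm-v4", "animation": "script2anim-v2"},
    "settings": {"steps": 30, "cfg_scale": 7.5, "fps": 24},
})

print(len(payload))  # a few hundred bytes -- screenshot-sized
```

A few hundred bytes of JSON, standing in for a feature-length film.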
At the point that the above is feasible—a point that's coming far sooner than you can imagine—it's game over for Disney's ability to maintain its status as the world's exclusive provider of novel Mickey Mouse content at scale. (Before you object that Disney's in-house work will stand out for the quality and creativity of the writing, I invite you to watch any random episode of Mickey Mouse Clubhouse and ask yourself how much quality or creativity the typical mid-level Mickey project gets on Disney's watch.) We'll all be able to get into the Mickey business just by passing some text and JSON around and feeding them into models that are open-source and widely available.
Derivative Works
This scenario—a prompt, a seed, and some parameters go to a set of A.I. models, and a movie comes out the other end—is not obviously illegal under current law. It should be legally the same as you creating a picture of Mickey Mouse using Procreate or Photoshop and then hanging it in your bedroom: just a private, personal, noncommercial creation of some derivative works viewed only by the creator and not stored, transmitted, published, distributed, or profited from in any way.
In short, if I make a brand new picture (or video, 3D rendering, audio file, graphic novel, etc.) of Mickey Mouse on my laptop, and it stays on my laptop, then it really doesn't seem possible to argue that I've broken any laws, no matter what software I've used to create that picture.
So the legal arguments in the Stable Diffusion lawsuit go to a lot of effort to frame this scenario as illegal. This isn't the place to go into the plaintiffs' argument in detail, but the core of it is that the Stable Diffusion model weights file—the file that contains the trained neural network that Stable Diffusion uses to generate images—is itself a derivative work of all the billions of images in the training data, and that Stability AI has profited from this work without compensating the training data's copyright holders.
There are a lot of other moving parts to the technical and legal arguments laid out in the class action complaint, but this one, which pops up early in the document on page 3, is the most deadly to the entire project of generative A.I.: "'Derivative Work' as used herein refers to the output of AI Image Products as well as the AI Image Products themselves—which contain compressed copies of the copyrighted works they were trained on" (emphasis added).
If they can make this charge stick, then they have a shot at forcing tech platforms to give these model files the same treatment they currently give other forms of digital contraband: child pornography, pirated movies, cracked software, malware, 3D-printed gun files, and so on. It would be very hard to host such a model file publicly, and anyone caught doing it could expect a takedown notice.
This then would be the end of generative A.I.—or, at least, of generative A.I. in any kind of decentralized form. Closed companies like OpenAI and their models would still exist, because they're centralized, controllable, censorable, and willing and able to carefully filter what their users can and can't do with their products. Here in the U.S., and probably in any country with compatible copyright laws, all generative A.I. models would be locked safely behind APIs, and innovation in the field would slow down dramatically.
Disney would be able to work with Microsoft, OpenAI, Google, and other big tech platforms to use these large, closed-source models to replace the teams of artists they currently employ. They could fire most of their talent and replace them with A.I. that cheaply generates infinite new content from their vast catalog of existing intellectual property. Meanwhile, independent creators and noncommercial users would be prevented from using these same software tools to compete with the Disneys of the world—they'd be stuck in the era of making art the old-fashioned way.
That seems to be what the plaintiffs in this suit want. But I don't think they fully realize what it would mean for them if they left all the generative A.I. solely to Big Content.
A game of cat and mouse.
The trap has been set.
Now imagine that this cartoon stars Mickey Mouse.
Look, I like mouse porn as much as the next guy, but I really can't imagine watching that much cartoon sex even if it does involve Pinocchio, the Little Mermaid, and the Three Crows from Song of the South.
Pinocchio, the Little Mermaid, and the Three Crows from Song of the South.
Go on...
... walk into a bar ...
Three Crows from Song of the South.
Well, I seen a horse fly.
I seen a dragon fly!
I seen a house fly.
Hey, I seen all that, too!
I seen a peanut stand, heard a rubber band
I seen a needle that winked its eye
But I be done seen ’bout ev’rything…
When I see three crows in Song Of The South.
A Zippity-Doo-Dah-less fuck
The Little Spermaid
Tell Kammy that it's a Venn diagram. She'll go to war for it.
Mickey's face and two ears make the three circles Kammy enjoys so much.
Somehow I doubt courts are going to throw the duck test out the window.
I expect the future of videos to be different, but the same.
Hollywood is too damned expensive. In a year, 10 years, 100 years, most videos will be distributed as scripts played by real-time generators. You will be able to recast these videos with actors of your choice. Scripts will come with generic (free) actors. Current (expensive) actors will be cast only for the big hits. Much more common will be old actors out of copyright -- Cary Grant, Lucille Ball, The Three Stooges.
And the expensive actors will gradually fade away as people realize they can create and sell any kind of actor, completely synthetic. You will be able to tune aspects of your choice -- comedic second takes, menacing accents, physical ability, you name it.
It's going to be fantastic.
As for those who insist acting is special and unique and artistic and can't be condensed down to zeroes and ones ... actual stage plays will always exist.
I can also see this technology being used to produce "fan fiction" of popular franchises. Anyone would be able to, say, produce their own version of a Star Wars sequel trilogy that hopefully doesn't suck. Or at least doesn't suck as much as The Mouse's. And all for a fraction of the cost of making a traditional movie. Not to mention having digitally produced "actors" that look like their real life counterparts.
You don't have to dream. That's already here. Not in video form (yet), but text and static image generation are 100% already there. You can go to NovelAI right now, type "Star Wars sequel movie script" into the generator, and it will begin spitting out a movie script at you, line by line. You will probably need to add a few more parameters and do some tweaking to make it a decent script, but the skeleton is there.
As a legal analyst, this guy might be a good (albeit disaffected, fringe-inhabiting) coder.
Was Reason.com genuinely unable to find a minimally qualified author?
Stokes oversells the ability of generative AI models. They are a long way away from producing something as complicated as a movie, with plot, dialogue, spoken and musical audio, etc. Right now their real utility value is in illustration, which they can do all day and all night effortlessly, and in copy, which they can also do with a high degree of rigor. However, they do these things without context and without comprehension, so sometimes they produce garbage. You still need a human editor to buff out the rough patches.
A movie, or even a novel, is a much harder thing to accomplish. It requires a high degree of comprehension and context, and just to take an example, text generation AIs are not capable of producing pages and pages of prose that logically flows from one idea to another. The best they can do is imitate writers right now, in small doses. They can probably generate some very bad poetry. They will not produce a movie script anytime soon, and it will be some time after THAT that they start making scripts better than humans do. But it really is just a question of time and resources. These systems can be scaled up and then anything is possible.
The true AI renaissance will not be happening until energy is much cheaper though, because that is the number one resource. AI will work tirelessly, so long as you supply it with electricity.
Yes, he definitely oversells the ability of current generative models to create complete works of art. But the models are amazing tools for people to use to help generate art, as part of the process. Disney has the resources and is certainly working on training large models using only its copyrighted content, of which it actually has huge volumes of material that was created or filmed but never made it into the final product or onto the internet. Disney may not fire all its artists right away, but it will be able to reduce them greatly when an artist can describe a very short sequence and have the model generate final content.
And for the open-source models, I think the creators will start paying more attention to copyright, maybe using copyright-checking services to keep content out of their training processes, and out of the resulting models.
I'm very skeptical of these AI image generators. I've played with the one by the people behind chatGPT and now Stable Diffusion.
I don't think they are actually creating images. I think they are just doing an image search.
If you think that, you don't really understand how they work. They are generating images based on statistical "signatures" from other images, not returning images they have stored. The biggest weakness in the suit is that the lawyers share your (common) misunderstanding about how all this works. It shouldn't take long for some expert witnesses who actually understand this to dispose of the suit. The suit's claim that Stable Diffusion stores compressed versions of the images is simply wrong as a matter of fact. The lawyers are almost there, but if the model isn't storing some version of the content, it is very difficult to assert copyright violation.
Before you object that Disney’s in-house work will stand out for the quality and creativity of the writing…
BWAAHAHAHAHAHA!!!!!!!!!!!
Oh man, I couldn't even finish the rest of the article I was laughing so hard at that!
The network execubots are coming!
Indie artists are at risk of becoming obsolete. Videos are much harder to make though. Disney is under no threat from this.
In this world, a single text prompt and seed combination might go into a ChatGPT clone, which would then produce a script, which would then get fed into a script-to-animation model, which would then produce a video.
This seems an awful lot like the 'deep fake' panic. That is, on the one hand, Disney shouldn't be allowed to hem in generative AI. On the other hand, I feel like I'm reading the production notes for all of the MCU Phase 4+.
If a prompt, parameters, and seed are all it takes to generate a duplicate of a work, then all you really have is a complicated, obfuscated compression system. That would make the "prompt, parameters, and seed" just the same as the bits in a video file on your computer: a set of data that, when run through the proper decryption/decompression, displays a work. And that means it would fall under the same copyright laws that currently don't allow you to freely distribute those video files if they contain works you do not have the rights to distribute.
Parametric compression, just like a vocoder.
Not quite. You cannot compress input with it, only give a way to regenerate output.
These models don't allow you to generate a duplicate of any of the works in the training data. They allow you to generate a duplicate of an output of the model. That's a BIG difference, and it means this isn't a compression system for input: you cannot give it an image and say "give me the information to reproduce this." You can say "give me the information to generate the same thing you just generated," but that isn't a violation of copyright for any input data.
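The asymmetry this commenter describes can be sketched with a toy one-way generator: regenerating a known output from its seed is trivial, but recovering a seed for an arbitrary target amounts to a brute-force search. This is a hypothetical illustration using a hash function, not how diffusion models actually work internally:

```python
import hashlib

def toy_generate(prompt: str, seed: int) -> bytes:
    """One-way stand-in for a generative model."""
    return hashlib.sha256(f"{prompt}|{seed}".encode()).digest()

# Easy direction: anyone holding (prompt, seed) reproduces the output.
out = toy_generate("mouse cartoon", seed=7)
assert toy_generate("mouse cartoon", seed=7) == out

# Hard direction: given an arbitrary target image that was never an
# output, no small search over seeds finds parameters regenerating it.
target = b"\x00" * 32
found = any(toy_generate("mouse cartoon", s) == target
            for s in range(10_000))
assert not found
```

In other words, the seed-and-prompt scheme only "decompresses" things the model itself generated; it offers no way to encode an arbitrary training image, which is what a real compression system would require.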
When I'm feeling mean, I kind of wish we had a viable AI in the creative/knowledge space, because I do admit getting a tiny smirk of satisfaction when the laptop learn-to-code class starts worrying about their jobs.
so this is why Pelosi wanted to pay the artists to stay home
That's an aspect I hadn't thought of before, thanks. Huge companies will be able to buy/develop closed generative AI to make stuff based on their own IP, reducing their human staff. But independent creators won't be able to use the open generative AI to make works not based on existing IP. Yuck.
By the way, anybody who hasn't read The Great Automatic Grammatizator, a short story by Roald Dahl (his adult short stories are amazing, delightfully twisted), should do so.
https://en.wikipedia.org/wiki/The_Great_Automatic_Grammatizator
Disney will not be getting involved in this because there is no reason to. If someone tries to make money off Disney property, Disney will do the same thing it does to any other artist who tries. This is just like cab drivers' fight against Uber and Lyft. It's not going to work. You have no case. AI is learning from previous art. Guess what: so do all other artists. If you are going to stop AI from learning, then you are going to have to stop all artists from using any previous art as a learning tool or inspiration. The question is whether artificial intelligence has the same rights as human intelligence.
I'm not worried about it, because in the end there's nothing that laws can do about it. You can't legislate an AI model into compliance. Just look at how AIDungeon suffered when they tried to remove the literal child porn from their text generation model. They didn't intend to put it in there (they're fucking Mormons), but end up in there it did, and as soon as people realized their lobotomized AI was useless they moved over to other AIs. It's important to note that Stable Diffusion isn't the only AI model out there, and even if it were, it doesn't exist in a single place where you can just take it down. As long as the files sit somewhere attached to the internet, anyone anywhere can use them to produce material that cannot be definitively traced back to them. If the feds bust down your door and find a picture of Iron Man on your computer, what can they do?