Sarah Silverman's Lawsuit Against OpenAI Is Full of Nonsense Claims
And a federal judge just said so.

Is it a crime to learn something by reading a copyrighted book? What if you later summarize that book to a friend or write a description of it online? Of course, these things are perfectly legal when a person does them. But does that change when it's an artificial intelligence system doing the reading, learning, and summarizing?
Sarah Silverman, comedian and author of the book The Bedwetter, seems to think it does. She and several other authors are suing OpenAI, the tech company behind the popular AI chatbot ChatGPT, through which users submit text prompts and receive back AI-generated answers.
Last week, a federal judge largely rejected their claims.
The ruling is certainly good news for OpenAI and for ChatGPT users. It's also good news for the future of AI technology more broadly. AI tools could be completely hamstrung by the expansive vision of copyright law advanced by Silverman and the other authors in this case.
You are reading Sex & Tech, the newsletter from Elizabeth Nolan Brown on sex, technology, bodily autonomy, law, and online culture. Want more on sex, technology, and the law? Subscribe to Sex & Tech. It's free and you can unsubscribe any time.
The Authors' Complaints and OpenAI's Response
Teaching AI to communicate and "think" like a human takes a lot of text. To this end, OpenAI used a massive dataset of books to train the language models that power its artificial intelligence. ("It is the volume of text used, more than any particular selection of text, that really matters," OpenAI explained in its motion to dismiss.)
Silverman and the others say this violates federal copyright law.
Authors Paul Tremblay and Mona Awad filed a class-action complaint to this effect against OpenAI last June. Silverman and authors Christopher Golden and Richard Kadrey filed a class-action complaint against OpenAI in July. The trio also filed a similar lawsuit against Meta. In all three cases, the lead lawyer was antitrust attorney Joseph Saveri.
"As with all too many class action lawyers, the goal is generally enriching the class action lawyers, rather than actually stopping any actual wrong," suggested Techdirt Editor in Chief Mike Masnick when the suits were first filed. "Saveri is not a copyright expert, and the lawsuits…show that. There are a ton of assumptions about how Saveri seems to think copyright law works, which is entirely inconsistent with how it actually works."
In both complaints against OpenAI, Saveri claims that copyrighted works—including books by the authors in this suit—"were copied by OpenAI without consent, without credit, and without compensation."
This is a really weird way to characterize how AI training datasets work. Yes, the AI tools "read" the works in question in order to learn, but they don't need to copy the works in question. It's also a weird understanding of copyright infringement—akin to arguing that someone reading a book in order to learn about a subject for a presentation is infringing on the work or that search engines are infringing when they scan webpages to index them.
The authors in these cases also object to ChatGPT spitting out summaries of their books, among other things. "When ChatGPT was prompted to summarize books written by each of the Plaintiffs, it generated very accurate summaries," states the Silverman et al. complaint.
Again, putting this in any other context shows how silly it is. Are book reviewers infringing on the copyrights of the books they review? Is someone who reads a book and tweets about the plot violating copyright law?
It would be different if ChatGPT reproduced copies of books in their entirety or spit out large, verbatim passages from them. But the activity the authors allege in their complaints is not that.
The copyright claims in this case "misconceive the scope of copyright, failing to take into account the limitations and exceptions (including fair use) that properly leave room for innovations like the large language models now at the forefront of artificial intelligence," OpenAI argued in its motion to dismiss some of the claims.
It suggested that the doctrine of fair use—designed in recognition of the fact "that the use of copyrighted materials by innovators in transformative ways does not violate copyright"—applies in this case and the case of "countless artificial intelligence products [that] have been developed by a wide array of technology companies."
The Court Weighs In
The authors prevailing here could seriously hamper the creation of AI language learning models. Fortunately, the court isn't buying a lot of their arguments. In a February 12 ruling, Judge Araceli Martínez-Olguín of the U.S. District Court for the Northern District of California dismissed most of the authors' claims against OpenAI.
This included the claims that OpenAI engaged in "vicarious copyright infringement," that it violated the Digital Millennium Copyright Act (DMCA), and that it was guilty of negligence and unjust enrichment. The judge also partially rejected a claim of unfair competition under California law, allowing the authors to proceed with part of that claim (largely because California's understanding of "unfair competition" is so broad).
Silverman and the other authors in these cases "have not alleged that the ChatGPT outputs contain direct copies of the copyrighted books," Martínez-Olguín noted. And they "fail to explain what the outputs entail or allege that any particular output is substantially similar – or similar at all – to their books."
The judge also rejected the idea that OpenAI removed or altered copyright management information (as prohibited by Section 1202(b) of the DMCA). "Plaintiffs provide no facts supporting this assertion," wrote Martínez-Olguín. "Indeed, the Complaints include excerpts of ChatGPT outputs that include multiple references to [the authors'] names."
And if OpenAI didn't violate the DMCA, then other claims based on that alleged violation—like that OpenAI distributed works with copyright management information removed or engaged in unlawful or fraudulent business practices—fail too.
More AI/Copyright Battles To Come
This isn't the end of the authors vs. OpenAI debate. The judge did not rule on their direct copyright infringement claim because OpenAI did not yet seek to dismiss it. (The company said it will try to resolve that later in the case.)
The judge also will allow the parties to file an amended complaint if they want to.
Given the lameness of their legal arguments, and the judge's dismissal of some of the claims, "it's difficult to see how any of the cases will survive," writes Masnick. (See his post for a more detailed look at the claims involved here and why a judge dismissed them.)
Unfortunately, we're almost certain to keep seeing people sue AI companies—language models, image generators, etc.—on dubious grounds, because America is in the midst of a growing AI tech panic. And every time a new tech panic takes hold, we see people trying to make money and/or a name for themselves by flinging a bunch of flimsy accusations in lawsuit form. We've seen this with social media companies and Section 230, social media and alleged mental health harms to teens, and all sorts of popular tech companies and antitrust law.
Now that artificial intelligence is the darling of tech exuberance and hysteria alike, a lot of folks—from bureaucrats at the Federal Trade Commission to enterprising lawyers to all sorts of traditional media creators and purveyors—are seeking to extract money for themselves from these technologies.
"I understand why media companies don't like people training on their documents, but believe that just as humans are allowed to read documents on the open internet, learn from them, and synthesize brand new ideas, AI should be allowed to do so too," commented Andrew Ng, co-founder of Coursera and an adjunct professor at Stanford. "I would like to see training on the public internet covered under fair use—society will be better off this way—though whether it actually is will ultimately be up to legislators and the courts."
Unlike many people who write about technology, I don't foresee major disruptions, good or bad, coming from AI anytime soon. But there are many smaller benefits and efficiencies that AI can bring us—if we can keep people from hampering its development with a maximalist reading of copyright law.
This ought to be obvious.
But way back ... when Jack Valenti was head of the MPAA ... he had a counterpart for books, I believe, who gave a speech demanding that libraries charge for all book checkouts, that parents be banned from reading books to children, that people be banned from selling used books without paying royalties ...
Well, it was a long time ago, and I forget the details. I might have some of them wrong. But she gave a long speech about ways copyright owners were being cheated out of their fair royalties by people who reviewed books, sold them, read them aloud, and so on. Later Cory Doctorow wrote a YA sci-fi book about a world where her wishes had come true and governments allowed businesses to own words. People were monitored 24x7, and every word spoken triggered a micropayment to the copyright holders. Probably words read, too, or written. That also was a long time ago.
Jack Valenti was present when LBJ was sworn in on board Air Force I. Coincidence? I think not.
I was tempted to ask, “Are you serious?” but I know you usually are, so I googled and found the Wikipedia page. It has a photo whose caption says
Interesting. That “Wikipedia page” is supposed to be an anchor tag with the ref, but Reason keeps deleting it.
Here it is in plaintext.
https://en.wikipedia.org/wiki/First_inauguration_of_Lyndon_B._Johnson
And the third edit saved it. Weird.
You don't even need Wikipedia; just google "LBJ SWORN IN AIR FORCE 1" and the famous picture comes up, with Valenti simpering in the background. That is, of course, if you know what Valenti looked like back then. What's doubly interesting about the photo is that he's the ONLY person in the group looking at the camera. Everyone else is looking at LBJ.
Not only that. He’s talking to the camera and breaking the fourth wall. It’s always been controversial what he is saying but now that AI has parsed those tapes, it appears to be:
These people will never understand who was really on the grassy knoll today. They won’t even know about the grassy knoll for months. The answers are in Roswell. Then he started smoking a cigarette.
What, no disparaging comments about the Joos did it? You are slipping.
Of course it would be possible to charge every library a small amount of money every time they check out a particular book; and, of course, it would require every library to install some kind of tracking system to implement it at great cost; and the book companies would have to spend a huge amount of overhead to make sure that no one was cheating and sue the usual suspects; and most libraries in America would then go out of business; and then the few people who still read actual books would stop reading them and sales of books would drop off just like newspaper subscriptions have; and many more people would stop reading entirely, also hurting sales and royalties. So a self-defeating narrative, but never mind ...
Technologically maybe. But legally? Probably not. There is something called the First Sale doctrine. Once a copy of a work is sold, such as a book, the copyright owner can’t double dip, and charge for it again. This works well in the realm of physical books, but much less well in the realm of electronic copies of works, where pretty much everything you do creates additional copies of a work, each of which can be separately licensed.
To be fair, this is the funniest thing Silverman has done in over a decade.
Yeah, I thought she was genuinely funny in small doses back in the day. Although that was usually in scripted sketch comedy formats like Mr. Show. Where it’s hard for them to freestyle just what dipshits they are politically.
Speaking of Mr. Show, David Cross may be the quintessential example of that. Freakin genius on sketch shows and arrested development, etc. But his stand up and political rants are peak late 90s college bro chompskiite retardation. Where the most oppressive thing in the world is a Bible Belt redneck and the rethuglikkan he votes for.
But enough about Nick Gillespie... what about David Cross.
Zing!
This might be the funniest scene she's ever been in. Only because, who hasn't wanted to punch her in the face after hearing her talk for more than a few seconds?
Incel humor is always interesting.
I didn't know much about it until I began to visit this website.
Sure little buddy, sure.
I assume she’s a slut. Am I wrong?
When all you really needed to do is talk while looking into a mirror.
Incel humor is always interesting.
Not really, they're always whining about bitter clingers and deplorables.
She was occasionally on Mr. Show in some funny skits. So she hung out with funny people in the late 90s at least. But shits gone downhill fast the last decade.
https://youtu.be/uvVte_OH4Dk?feature=shared
You know she’s desperate when she’s now hosting a network show about people bringing in their pets to perform tricks.
It's hard to find yourself on the wrong side of history with content like that.
Yeah, bit that's a REAL low bar. She hasn't ever been really funny.
Sarah Silverman Is Full of Nonsense Claims
There. Fixed it.
Member when Sarah Silverman was funny
Pepridge farm don't remember, because it never happened
I'd say her "writing" is derivative. Who's suing her?
They should sue her grade school teachers who read from copyrighted books.
A recent survey asked 106 people "Is Sarah Silverman funny?"
2 said "No"
104 said "Who the hell is Sarah Silverman?"
2 said “No, but it's funny that you should ask.”
If Pepridge Farms don't remember it never happened.
Unlike many people who write about technology, I don’t foresee major disruptions, good or bad, coming from AI anytime soon.
Does that include the major disruptions that have already happened, or are we starting this metric today while forgetting all the things that have already happened?
Or perhaps ‘major’ is doing all the lifting in this assessment.
Either way, ChatGPT and others are scraping a huge amount of copyright data from the internet and using that to train their model without paying any of the copyright holders for the use of that data.
These ‘AI’ are not people, nor are they Artificial General Intelligence; they are (in the words of their creators) advanced autofill algorithms that can autofill an entire book instead of just a word or sentence. And they profit from its use, which is only made possible by the large trove of copyright data contained inside them, with no royalties for that trove being paid.
There are a whole ton of people who have lost their jobs or small businesses due to AI, and while that can easily be viewed as creative destruction one wonders how possible it would be if they had to actually pay people for the use of their works.
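The "advanced autofill" characterization above can be made concrete with a toy sketch. This is a deliberately crude illustration, not how real large language models work (those use neural networks trained on billions of documents, not lookup tables), and the corpus, function names, and prompt here are invented for the example:

```python
from collections import defaultdict, Counter

def train_bigrams(text):
    """Count which word follows which word in the corpus -- a crude
    stand-in for next-token prediction."""
    words = text.lower().split()
    table = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        table[a][b] += 1
    return table

def autofill(table, start, n=5):
    """Greedily append the most common next word, up to n times."""
    out = [start]
    for _ in range(n):
        followers = table.get(out[-1])
        if not followers:
            break  # no known continuation; stop
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
model = train_bigrams(corpus)
print(autofill(model, "the", 3))
```

Scaled up from bigram counts over a ten-word corpus to a neural network trained on a large chunk of the internet, this "predict the next token, then repeat" loop is the basic mechanic the commenter is describing.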
I have never seen so much exaggeration in any single comment.
And your retort is hyperbole, so what?
So in addition to being an alarmist, you have no sense of humor.
I just assume people say what they mean, but I also must not get the reference if indeed it is one.
The argument seems to be that since an algorithm was developed that parses art in order to make recreations, it somehow doesn't involve the original, which is most certainly a falsehood.
This has been proven since it's been shown that one can force an AI generator to generate a literal copy of something that is under copyright, especially if that image is part of its model. If an AI generates an identical copy of an image, is it no longer under copyright since it was run through a black box that disassembled it and put it back together in just about the exact same way? Seems absurd, but that also seems to be the argument when stripped down to the bare bones.
Obviously it can also be used to generate 'new' works using a vast trove of attributed and non-attributed works as well, but the fact remains that without the vast trove of work it would be able to produce nothing.
There's also no question that these companies are making money off their creations, which is only made possible by the vast troves of information fed into them from sources both legitimate and of questionable legitimacy.
And, finally, these AI algorithms are not people. They are lines of code written by a person to emulate the creative process, but they absolutely rely on the fully formed works of other people to do what they do. None of those creatives have been compensated for the use of their work, nor did any of them give their express consent to the use of their work for this purpose.
If something just existing on the internet makes it fair use, then the MPAA and RIAA have some serious explaining to do.
I'll admit it's a pretty gray area, for sure, but it would be less gross if at least companies like ChatGPT actually paid for ALL the source material they fed into their black boxes. If that's not possible, well, too bad? Revamp copyright law or something. Lord knows Disney had more than a little success there for almost a century.
There's something I'm trying to get my head around in regards to all of this, and it has legs in philosophy as well as copyright law, ethics... it's kind of a mess.
But I think one way of looking at all of this is these things are not AI's. They aren't little computerized autonomous beings with minds of their own. They are merely programmable large language model regurgitation machines. If it were more like a sci-fi version of an AI, a thinking machine that you just said "go forth and learn about the world, and come to your conclusions" that would be one thing. But they aren't. They're mechanically fed articles, books and other textual writings (or imagery) and then are able to regurgitate an amalgam of those things which were dumped into it, with some opening and closing remarks which make it feel as if the AI came up with it.
I know this is a GROSS over simplification, but it's a bit like this:
Article Diane just wrote:
IP routing with multiple, overlapping subnets.
When routing multiple, overlapping subnets with the same metric, the more specific route takes precedence.
*dumps that into ChatGPT*
Years later, someone asks the question:
How does a router decide which route path to take if there are multiple static routes with the same metric?
ChatGPT generated content in italics.
"It is commonly implemented that if more than one route exists with overlapping subnets with the same metric, the more specific route takes precedence if the target IP is within the more specific scope."
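For what it's worth, the routing rule in the toy example above (the more specific route wins; ties go to the lower metric) is simple enough to sketch in a few lines of Python. The route table and addresses below are invented for illustration:

```python
import ipaddress

def pick_route(dest, routes):
    """Return the route whose subnet contains dest, preferring the
    most specific (longest) prefix; among equal prefixes, the lower
    metric wins."""
    addr = ipaddress.ip_address(dest)
    candidates = [(net, metric) for net, metric in routes
                  if addr in ipaddress.ip_network(net)]
    if not candidates:
        return None
    best = max(candidates,
               key=lambda r: (ipaddress.ip_network(r[0]).prefixlen, -r[1]))
    return best[0]

routes = [("10.0.0.0/8", 10), ("10.1.0.0/16", 10), ("0.0.0.0/0", 1)]
print(pick_route("10.1.2.3", routes))  # 10.1.0.0/16 wins: more specific than /8
```

If the training data contained many write-ups of exactly this longest-prefix rule, regurgitating a paraphrase of it is unsurprising, which is the point of the example.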
I don't have any answers on how this is going to play out, but I can kind of see bowf sidez of this. If a comedian is known to have originated a joke on topic X, and that joke appears largely unaltered when ChatGPT is asked "make a joke about topic x", I can see how people might get their hackles up.
Oh, FWIW, I agree with your original comment on how disruptions are already taking place. I guess Reason doesn't think a disruption takes place until one of them loses their job over it.
AIs aren't "programmable" in the common sense of the word. They learn on their own, and their creators often have no idea how they come to their conclusions.
I disagree. They don't learn "on their own". They are 'fed' source material, either writing for language models or images for art-based AIs. No, they don't 'code' the responses-- those are produced by the neural networking/machine learning part that IS coded. But it's abundantly obvious that on top of the raw neural networking there are shit-tons of filters/blocks/no-no lists/keyword weighting and what not that can be manually tweaked in the engine by the developers.
And again, I think you lean too heavily on the word 'conclusions'. Again, the 'conclusions,' as you're calling them, are mere regurgitations, slightly modified from the original source material. This is so different from how the human brain operates that it's not just a matter of degree; it's a completely different paradigm of process.
And this all goes back to my original thesis: There are no 'conclusions' being made, it's a language processing model that takes the raw input and then is able to contextualize responses from its vast database of writings- and then regurgitate them back in what appears to the uninitiated as 'novel ways'.
It's also another reason why people are decreasingly impressed with ChatGPT.
Because it's not coming to "conclusions," the human brain, over time, is able to start detecting these kinds of 'patternized' responses.
FYI, even OpenAI notes, in a kind of roundabout way, that this is exactly what's happening:
I could do another 500 words on this, but then I just came to a more pithy, brief conclusion: this is prima facie evidence that ChatGPT is failing a very large and de-centralized Turing test.
Enough people pound away on it and slowly, over time, people are noticing these dry, patternized responses and then feel a kind of disappointment with the technology when every 'conversation' eventually boils down to "I'm sorry Dave, I'm afraid I can't do that."
ChatGPT couldn't pass a Turing test, but then, neither could Sarah Silverman.
Also, think about the complexity of an original joke.
A human comedian may craft a joke about person or topic X. The punchline may contain a 'wrong' word that sounds phonetically like the 'right' word but has a markedly different meaning, and that differentiated meaning may be reflective of a personality trait of the person or topic that's being made fun of or mocked.
In a modeling sense, this is an insanely abstract, 11,000,000 dimensional geometry problem of linguistic meaning and cultural understanding. There's no AI on the planet that can currently even come close to this.
Milton Berle was notorious for stealing jokes. He even stole one from Winston Churchill. Bob Hope was pissed at Berle, and Berle became known as "The Thief of Bad Gags."
I don't think that Berle was ever actually sued for this, probably because he often got better laughs with another comedian's joke than that comedian did.
So what you're saying is, ChatGPT gets more laughs than Silverman did?
I don’t think that would be difficult.
She certainly could be replaced by ChatGPT, but I still wouldn't listen because funnier than zero isn't enough of an improvement.
If ChatGPT enhances its value or usefulness by availing itself of Sarah's work, then Sarah should be compensated. Fair is fair, quid pro quo etc. I don't know how this squares with copyright law, but that may be another issue.
It seems like there is some controversy over certain AIs being trained on collections of pirated books, but I haven't seen anything linking OpenAI or ChatGPT with those particular collections.
Assuming OpenAI purchased the books used, what further compensation would Ms. Silverman be due?
Would this apply only to AI or to everyone? If I read a financial advice book and use it to improve my finances, or a professional training book and use it to further my career, do I owe the authors more compensation than they have already received in royalties?
"Assuming OpenAI purchased the books used, what further compensation would Ms. Silverman be due?"
Probably nothing. I was assuming OpenAI was smart enough not to buy the book. You can get it here for free:
https://libgen.is/fiction/ADF7A6DAC77701E8D23FCAE677DB4D6A
"Would this apply only to AI or to everyone?"
Perhaps not. AI is a business that owes its success to cannibalizing the works of others. Sharing the wealth with the original creators seems fair to me.
More valuable imo would be to move from the current AI approach:
On certain familiar assumptions the answer is simple enough. If we possess all the relevant information, if we can start out from a given system of preferences and if we command complete knowledge of available means, the problem which remains is purely one of logic.
To this:
the knowledge of the circumstances of which we must make use never exists in concentrated or integrated form, but solely as the dispersed bits of incomplete and frequently contradictory knowledge which all the separate individuals possess. The economic problem of society is thus not merely a problem of how to allocate "given" resources - if "given" is taken to mean given to a single mind which deliberately solves the problem set by these "data." It is rather a problem of how to secure the best use of resources known to any of the members of society, for ends whose relative importance only these individuals know. Or, to put it briefly, it is a problem of the utilization of knowledge not given to anyone in its totality
Doesn't everyone owe their success to the works of others though? We aren't reinventing the wheel every generation, with very few exceptions every single one of us has learned all of our skills from someone else.
"every single one of us has learned all of our skills from someone else."
And in one way or another we compensate others for their help. Paying tuition, buying a book, etc. Why should OpenAI be shielded from society's conventions of courtesy and fairness?
AI is a business that owes its success to cannibalizing the works of others.
Like every single artist ever?
If your work is indebted to a previous artist, acknowledging that debt is a matter of common courtesy. For training the AI, the work of previous artists is absolutely essential, it wouldn't work without massive input from other creators.
It's not just courtesy. The AI is claiming the output of its models as its property. Which means its business is based on theft. Not at all surprising for anything from Silicon Valley or Big Tech. And really not at all surprising that that would be defended by commenters here and Reason itself.
Looking around that site, I'm pretty sure that's one of those pirate collections I referred to.
if some new comedian comes along and says Sarah was her inspiration and she's modelled herself after her will Sarah sue her?
If a new comedian is using Sarah's material in her own show, Sarah should call foul. Whether the courts should be involved, I don't know. Cheaper and faster would be pistols at dawn. Or maybe one of those knife fights where the participants are obliged to hold either end of a scarf between their teeth.
And what micropayments have you made to ENB for using her article as the basis for your comment?
Or we could take a totally different approach and say that in general there is no such thing as property interest in words and ideas, and that copyrights and patents should be *rare* exceptions limited to the purpose specified by the Constitution: motivating people to do useful things. Fairness and quid pro quo have nothing to do with it.
So:
1. Whatever you are trying to copyright has to be "useful". How useful? Enough that each time you try to enforce it, a new jury has to decide anew that your work is useful. Not merely that it is copyrighted; rather, each of 12 jurors must decide from first principles that the work is so useful that it is worth bringing in the police and the courts and mandatory jury service and whatever money you are seeking.
2. You have to affirmatively prove that you would not have produced the work except to get paid. Any prior statements that you felt “a spontaneous rush of creativity” or felt “compelled to tell this story” or that “this was for the whatever community” can be used to establish that you would have written it anyway, and therefore no copyright was necessary to motivate you. Passionate artist types are shit outta luck on this one.
"How useful? "
Useful enough to help keep food on the table. It applies especially to a comedian's newer material.
"You have to affirmatively prove that you would not have produced the work except to get paid. "
That's where the 'business' part of the term show business comes into play.
"each of 12 jurors must decide from first principles that the work is so useful that it is worth bringing in the police and the courts and mandatory jury service and whatever money you are seeking."
Maybe cut costs by using only 1 juror. Or 0, like traffic court.
"Passionate artist types are shit outta luck on this one"
So is everyone else, apparently.
"Useful enough to help keep food on the table"
I don't think the intent of copyright was a guaranteed universal income for artists.
Freedom of speech isn't limited to original ideas or material. If we're going to restrict other people's freedom of speech, or make it conditional on paying someone else, there better be excellent reasons.
"I don’t think the intent of copyright was a guaranteed universal income for artists."
Comedians are highly disdainful of other comedians who steal their material and pass it off as their own. Rightly so in my opinion, as it is sneaky, dishonest and undermines their ability to make a living creating and performing their own original material. If copyright is intended to protect the thief at the expense of the creator, so much the worse for copyright.
"or make it conditional on paying someone else, there better be excellent reasons."
Can't get much more excellent than justice and common decency.
If copyright is intended to protect the thief at the expense of the creator, so much the worse for copyright.
Copyright isn't intended to protect anybody. Copyright exists in order to encourage publication and to create a public domain. That's it.
Can’t get much more excellent than justice and common decency.
Justice and common decency have nothing to do with copyright law.
And common decency has nothing to do with Silverman.
"Copyright isn’t intended to protect anybody. "
This is why copyright law seems inadequate. Why not a new regime to protect or compensate creative people whose work is appropriated and exploited by non-creative people, including entrepreneurs from the tech industry? I believe creative people, who lack the resources of the tech giants, deserve better.
"And common decency has nothing to do with Silverman."
I agree. Anyone who excuses, minimizes or denies Israel's genocide against Gaza has blood on their hands.
This is why copyright law seems inadequate. Why not a new regime to protect or compensate creative people whose work is appropriated and exploited by non-creative people,
"Appropriate" means "take possession of". You can't "appropriate" something that is in the public domain or available under fair use.
And we want people to be able to exploit the creations of creative people; that is the whole point of copyright law! We want that because that's the basis of Western civilization.
You have some kind of Bronze Age high priest mentality and morality. Reprehensible.
GREY BOX FIGHT!
Useful enough to help keep food on the table. It applies especially to a comedian’s newer material.
The purpose of copyright law is not to ensure an income to creators; it is not intended to be "fair" or dispense justice.
Copyright law exists in order to encourage publication and create a public domain. That's it.
Congress can abolish copyright tomorrow and nobody's constitutional rights would be violated.
"The purpose of copyright law is not to ensure an income to creators; it is not intended to be “fair” or dispense justice."
That's the problem. Maybe it should be replaced by something that ensures creative people protection from others exploiting their work for profit without permission.
Maybe it should be replaced by something that ensures creative people protection from others exploiting their work for profit without permission.
Yeah, sure, why not? Why not turn the US and Europe back into a Bronze Age or Iron Age society with absolute rulers, high priests and peasants? /sarc
If I watch Game of Thrones and then write my own screenplay about a different fictional universe with dragons and magic and climate change, no one tracks me down to make me pay HBO or George R.R. Martin. They judge the work on its own merits.
It very much depends on how similar the two works are. This was addressed nearly a century ago by Judge Learned Hand with his levels-of-abstraction test in Nichols v. Universal Pictures Corp., 45 F.2d 119 (2d Cir. 1930).
If ChatGPT enhances its value or usefulness by availing itself of Sarah’s work, then Sarah should be compensated. Fair is fair, quid pro quo etc.
That's neither "fair" nor is it copyright law.
It's also absurd. The whole purpose of dissemination of content is to "enhance the value or usefulness" of other things. All copyright protects against is direct reproduction.
“All copyright protects against is direct reproduction.”
Then extra protection is clearly needed. Using someone else’s work without acknowledgement, permission or compensation is unfair, especially profiting from it.
Then extra protection is clearly needed.
Not only is it not needed, it would be contrary to the objectives of copyright law and civilized society.
Just ask any Harvard president.
Ignore 'major' for the moment. What "disruptions" have occurred at all?
Scraping content is not a disruption: people (and machines) reading things you posted on publicly accessible internet pages is expected behavior. Making money off the things you published is also not disruption; that's the essence of the social-contract theory that underlies copyright in the first place.
You say "a whole ton of people" have lost their jobs or business to AI. Citation, please. And be specific. Handwaving about what might happen is not evidence that it actually has happened.
Independent for-hire artists have been straight-up replaced by image generators, and there have been widespread tech layoffs of coders, as two examples that are hard to miss if one reads any news.
Of course, both professions borrow from those who came before, but as humans, neither can be said to actually house those images in their brains in any readily provable way. One can, however, prove that both the written code and the created images are in fact directly stored in the AI model's memory, and the model creates profit for the creator of the AI, so...
It's a novel thing, for sure, but let's not pretend it doesn't have real world consequences.
It wouldn't be a bad thing if the models that were being used at least pretended to seek out permission for use of the works they included in the learning model. The large language model isn't a person, it's a product, and including copyrighted works in that model is taking property they have no claim upon for the use of machine learning or any other commercial use.
I like to think I read the news reasonably carefully and have managed to miss both of those. I've read lots of articles fearing those layoffs but none demonstrating that they actually happened. In fairness, I don't know much about the independent artist market and could be wrong there. But the "widespread tech layoffs" are all a lot more attributable to bad management choices at tech companies. None of the layoffs I've read about are plausibly connected to AI.
I also question the basis for your claim that LLMs are "taking property" that they (or more precisely, their creators) had no permission to use. As the article above already notes, my only obligation before I can read your brilliant prose and use it for my own derivative works is to pay whatever you required when you put your work on the market. And if you gave away your book for free, well, that was your decision which you presumably made for reasons that made sense to you at the time. It is not my obligation to seek you out after the fact and pay you a second time.
I'm really not understanding your objections to AI.
We got rid of typists too, once everyone had a desktop computer that could do word processing. Most coding is just mechanical, after determining what the program is supposed to do.
Electronics engineers used to spend months designing circuits, too, until non-AI synthesis tools could take a high level circuit description and generate the schematics for you.
I dated a woman who could type at some blazing fast speed, like 120 wpm I think. She was a medical transcriptionist.
She had to find a new job because of the speech-to-text software doctors use now. Along with probably tens of thousands of others. I never noticed those software companies getting sued.
Sued for what? The copyright owner (the physician) licenses the software to make a copy of his work, in the form of the transcript. Everything is licensed through customary usage.
For making a profession obsolete.
The software is simply transcribing what the doctor says each time. It's not taking it and putting it out there as its own.
Artists. Why should I give a shit about them having to get real jobs? Technology makes some jobs obsolete. Artists are the buggywhip makers of the 21st century.
Learn to code I guess.
"Artists are the buggywhip makers of the 21st century. "
The content they produce is essential for the training of AI.
"Why should I give a shit about them having to get real jobs?"
Real jobs are about subservience and following orders. Creative jobs need freedom and autonomy.
The content they produce is essential for the training of AI.
No, it's not. If Warhol didn't exist, you couldn't tell an AI to produce "paintings in the style of Warhol" by name, but the AI could still actually produce paintings in that style. What Warhol brings to the process isn't the artistry, but merely the label.
A whole ton of people is like maybe seven to fifteen people depending on how fat they are.
Data is not copyrightable, only the form in which it is presented.
If a news source says that Irving O'Minstrillbaum is 36 years old, that source cannot claim that you are violating their copyright if you then repeat that Irving O'Minstrillbaum is 36 years old. O'Minstrillbaum's age is a fact and not subject to copyright restrictions.
On the other hand, if I reproduce that news source's chart containing 50 persons' ages, including the aforementioned O'Minstrillbaum, then I am violating a copyright.
More people lost their jobs as a result of electricity than will ever lose their jobs due to AI. Can AI be used for evil? Sure, just as electricity has been (executions, murders, blinking neon lights).
“On the other hand, if I reproduce that news source’s chart containing 50 persons’ ages, including the aforementioned O’Minstrillbaum, then I am violating a copyright”
Only in the selection and presentation of the data.
executions
Executions are only evil if you subscribe to the belief that someone or something dying is an inherently evil act. A notion so insane even past life and reincarnation advocates stop and wonder if you've thought this through.
New York's "Old Sparky" sent more than one innocent person to find out if there's a God. But, yes, in general executions are not evil, some people deserve to die.
So, did ChatGPT steal your girlfriend or something? That's a lot of anger you've got there.
Either way, ChatGPT and others are scraping a huge amount of copyrighted data from the internet and using it to train their models without paying any of the copyright holders for the use of that data.
If you don't use robots.txt and put data up publicly on the Internet, you have given people permission to access your content freely. That includes accessing it for the purpose of training AI models. You can't then turn around and say "now you must pay me".
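As a sketch of what that opt-out looks like in practice: robots.txt is a plain-text file served at a site's root that tells well-behaved crawlers what not to fetch. The crawler names below are illustrative; GPTBot is the token OpenAI has documented for its crawler, but treat any specific user-agent string here as an assumption to verify against the crawler operator's own documentation.

```
# /robots.txt — served at the site root.
# Block OpenAI's documented crawler from the whole site.
User-agent: GPTBot
Disallow: /

# Block Common Crawl's crawler (a common source of training data).
User-agent: CCBot
Disallow: /

# All other crawlers may access everything.
User-agent: *
Disallow:
```

Note that robots.txt is purely advisory: it only works against crawlers that choose to honor it, which is part of why the "you published it publicly" argument above is contested.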
This comment seems to have been generated by an AI.
Everything about the VC direction of AI here will result in the totalitarian collectivization of knowledge. The anti-Hayek approach.
Even if Silverman and the rentier Luddites are more akin to the LaBrea tar pit (or the TarTar Binks) approach to progress.
Too bad the source of the training, models, etc. of AI is attempting to replicate the centralized "we know everything" approaches that failed. The notion that enough can be vacuumed up so it now resides in a single place where decisions can be made.
Instead of the individual, decentralized knowledge that is available only to a single person, and that then gives that individual the power to deploy unique knowledge.
And we'll get this: "The Answer to the Ultimate Question of Life, the Universe, and Everything is 42"
Ask that question on ChatGPT and that is exactly the answer given. With the reference given to the copyrighted source. All that needs to happen is to eliminate that referenced footnote. So that no one will remember the route of the hyperspace bypass and we can mix up our cultural references into a mess of pottage. Hedley Lamarr, of course, remembers that particular land snatch and is currently recruiting a posse to take over the land. Let us listen in:
Hedley Lamarr : Repeat after me: I...
Men : I...
Hedley Lamarr : ...your name...
Men : ...your name...
Hedley Lamarr : [to himself] Shmucks.
[continues aloud]
Hedley Lamarr : ...pledge allegiance...
Men : ...pledge allegiance...
Hedley Lamarr : ...to Hedley Lamarr...
Men : ...to Hedy Lamarr...
Hedley Lamarr : That's *Hedley*!
Men : That's Hedley!
Hedley Lamarr : ...and to the evil...
Men : ...and to the evil...
Hedley Lamarr : ...for which he stands.
Men : ...for which he stands.
Hedley Lamarr : Now go do... that voodoo... that *you* do... *so well*...!
Can AI protect our phony baloney jobs?
42? No wonder Jackie Robinson was such a great player!
Ah, good catch. Pun intended.
Too bad the source of the training, models, etc. of AI is attempting to replicate the centralized "we know everything" approaches that failed. The notion that enough can be vacuumed up so it now resides in a single place where decisions can be made.
I don't see what you think is "centralized" about it. The training data for these models is freely and widely available to anybody.
Assuming I read the linked PDF correctly, it looks like there's one claim that can proceed: that OpenAI engaged in unfair business practices.
"chatgpt, write me a Sarah Silverman style comedy routine"
"I'm sorry, my programming prevents me from commuting such atrocities"
While it's pretty weak, I think it's still a lot better than the original...
---
You
write me a Sarah Silverman style comedy routine
ChatGPT
Creating a comedy routine in the style of Sarah Silverman involves crafting jokes that often blend satirical humor with observational comedy, touching on social issues, pop culture, and personal anecdotes, all while maintaining a sense of playful irreverence. Here’s a brief, generalized attempt to capture the essence of her comedic style:
Ladies and gentlemen, welcome! I'm like the human version of a YouTube comment section—unexpectedly insightful but mostly just here to entertain you.
So, I was thinking about how life is full of contradictions, right? Like, I read somewhere that chocolate is a vegetable because it comes from cocoa beans. And beans are vegetables. Which makes my diet 90% vegetarian. Look at me, saving the planet one chocolate bar at a time!
And dating, oh dating is fun, isn't it? It's like a box of chocolates, but instead of chocolates, it's full of tiny existential crises. "Will he call? Does she like me? Why does my cat seem more emotionally stable than any of my exes?"
I've noticed that my phone's autocorrect has become the most judgmental friend I have. Every time I type "I'm home alone," it suggests "Did you mean 'lonely'?" No, autocorrect, I meant "I'm about to eat pizza in my underwear and binge-watch alien documentaries." It's called self-care, look it up.
And speaking of technology, isn't it weird how we've all agreed to just let robots listen to our conversations? I bought a smart speaker, and now I'm pretty sure it's plotting against me. It's like, "Hey, Sarah, I noticed you've been talking about dieting. Would you like me to order a book on self-love instead?" Thanks, Alexa, but no. I'd rather you just play sad songs while I Google, "Is cheese a feeling?"
You know what I miss? I miss the days when the biggest decision I had to make was which MySpace friend made it into my Top 8. That was the original "Squid Game."
And have you ever noticed how every generation thinks they invented sex? Like, no, Karen, your grandma probably had more game than you. She just didn't have Instagram to prove it.
In conclusion, life is weird, love is weirder, and the only thing we can do is laugh about it. And maybe eat some chocolate. Remember, it's a vegetable.
This routine is inspired by Silverman's comedic sensibilities, focusing on light-hearted observations, self-deprecation, and social commentary.
If only the law itself were not chock full of nonsense claims. But the judges toss out nonsense claims that they don't like and write hundred-page opinions about nonsense claims that they do like to justify nonsense decisions in support of their social agenda.
I love Sarah Silverman.
She can make as big a fool of herself as possible and I won't stop loving her.
I assume ChatGPT is somewhat dumber now having been exposed to SS
When OpenAI was queried about ChatGPT’s answer it replied:
“ChatGPT, what you’ve just said is one of the most insanely idiotic things I have ever heard. At no point in your rambling, incoherent response were you even close to anything that could be considered a rational thought. Everyone on this website is now dumber for having listened to it. I award you no points, and may God have mercy on your soul.”
I believe that OpenAI is now being sued for copyright violations.
With sincere apologies to Adam Sandler.
Ha, I missed this one: the Washington Post got accused of being in the pocket of the "far right" when it printed a piece on increasing political polarization threatening happy marriages.
What was the sin the Post committed? Painting marriage as if it were a societal good.
It would be nice if AI would read all the movie scripts, cross-reference them to box office results and critical awards, and then start writing some movies people want to see again, for a change.
I suspect that is being done by the entertainment industry already. Musicians have complained about the algorithms making the music they perform for a few decades at least.
The thing is, when the TV and movie people see the reports, they use them to determine the minimum requirements for watchability at the best steady profits. That's how we got reality TV. Least effort, least cost, while still making money.
The entertainment industry is risk averse. Cheap garbage is safe. Risky, expensive projects are hard to sell to a studio when another formulaic cop drama is shown to be the safest money out there.
If you can't get a strong black woman from Georgia to set your agenda, then a white female comedian will have to do.
"then a white female comedian will have to do."
I won't settle for anything less than a silver man.
White female COMEDIAN. That last part eliminated Sarah Silverman, and unfortunately Joan Rivers is not available but only because she is dead.
The Obamas had her killed because she outed Michelle as trans.
Maybe Liz Taylor got her from beyond the grave for making all those fat jokes.
Once again, Silverman proves herself to be a waste of space and an abscess on America, She's worthless and not funny at all.
What tripe. The Court's ruling ordered that "... Plaintiffs' amended complaint must be filed by March 13, 2024 ...". The court understands this is new law and a new legal frontier. If the Court thought existing copyright law deals with the facts alleged in the complaint, it would have dismissed everything without allowing for an amended complaint. This it did not do. Calling it "nonsense" is a nonsensical reading of the Court's decision.