The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Defamation and Copyright
Several commenters asked: If AI companies don't have a copyright in works created by their programs, how can they be held liable under defamation law?
Defamation law and copyright law are two different bodies of law, aimed at serving different interests, and with different definitions. That's why people can routinely be sued for defamation even when their works aren't copyright protected. For instance,
[1.] The phrase "John Smith is a convicted child molester" is a short and simple phrase that's uncopyrightable. (Even if some highly creative short phrases might be copyrightable, the combination of a preexisting name and the preexisting "is a convicted child molester" phrase certainly isn't copyrightable.) Yet it could indeed be defamatory.
[2.] If Alan Author writes a libel of Paula Plaintiff (pro tip: never libel people whose names start with P), and Donna Defendant copies Alan's libel, that could indeed be defamatory, if Donna has the requisite mental state—even though the copyright is owned by Alan, not by Donna. (That's true regardless of whether Donna copied Alan's work with his permission, engaged in fair use, or infringed Alan's work.) This actually often happens, when Donna is a newspaper publisher who publishes Alan's op-ed. (Newspapers often publish op-eds by people who aren't their employees, without getting an assignment of copyright, but only a nonexclusive license to publish; in that situation, the copyright remains owned by the author, but the newspaper may be independently liable for defamation.)
[3.] If I spontaneously say "John Smith is a convicted child molester," and follow it up with five sentences of explanation, that oral statement is not protected by copyright, because it's not fixed in a tangible medium of expression. But it could be defamatory, assuming the elements of slander are satisfied.
In all these examples, the defendants are potentially liable, because defamation law cares about what the defendants communicated, and about whether it's false, reputation-damaging, said with the requisite mental state, and unprivileged. But the defendants aren't copyright owners, because copyright law is concerned with a very different thing (providing the incentive for creative expression fixed in a tangible medium). This doesn't itself tell us that AI companies are liable for their programs' uncopyrighted work, of course; I discuss that in much more detail here. But it does explain, I think, why copyright law has nothing to do with the question.
What I never understood was the basic liability in NYT v Sullivan because all the NYT had done was print a paid ad from the NAACP.
Assuming they would also accept paid advertising from Sullivan (and they may not have), and assuming that the NYT made it clear that this was a paid ad placed by the NAACP, why were *they* liable?
Is the host of a public forum liable for what participants say, if it doesn't exercise editorial control over it?
Yes, until the advent of the internet.
With the potential exception of classified ads since the advent of widespread computerization, print newspapers make a very direct decision of what advertisements to accept, and where and when to run them. Similarly, traditional book publishers are liable for defamation in books they print, but print-on-demand publishers are not.
Because they chose to publish the ad. That's how defamation law works in the non-online context.
It has nothing to do with whether "they would also accept paid advertising from Sullivan"; that’s just irrelevant.
EV,
the copyright remains owned by the author, but the newspaper may be independently liable for defamation.
I wanted to ask if I am understanding the use of “independently”. That’s there to inform us that the newspaper and the author may both be liable. Right? I ask because it strikes me that some in comments are trying to find “the one and only one who is liable” for the act of defamation. But I think the reality is multiple parties can be liable for what some of us view as a single false defamatory statement. We view it as “one” statement because it’s essentially the same content “John Smith is a child molester”. But there are still multiple parties involved in stating it (and sometimes it is literally restated.)
Irene says to Jack, "Abe is a child molester."
Jack writes in his book, "Abe is a child molester."
Baker Book publishing publishes the book that states, "Abe is a child molester."
Cooper Book store sells the book that states, "Abe is a child molester."
Q. Who is liable, under various theories, for defamation?
A. All of them (Irene, Jack, Baker, and Cooper). There are complicating issues (such as the book store's qualified privilege as a distributor), but still.
Thanks Lokl13!
I heard a somewhat scurrilous rumor yesterday with one claimed fact that borders on libel (but which may very well be true!). My thought was "interesting.... Uhmm... don't repeat most of the story, and especially not that bit!" My concern wasn't so much my liability (though I thought it might exist). It was that I genuinely like the parties involved in the scurrilous rumor and would rather not spread it around, whether true or not.
If I didn't like them, that whole "liability" bit would make me cautious about repeating it. So it strikes me as quite right that I should be liable even if I'm not the originator of the rumor.
Now let's make this a relevant story.
Hasbro makes a Ouija board.
Jack asks the Ouija board if Abe is a child molester and it says yes.
Jack writes in his book, “Abe is a child molester.”
Baker Book publishing publishes the book that states, “Abe is a child molester.”
Cooper Book store sells the book that states, “Abe is a child molester.”
Q. Who is liable, under various theories, for defamation?
If your answer includes "Hasbro", you're an idiot.
Well, there is a problem with your analogy. If Jack stands next to the Ouija board and asks "Is Abe a child molester?", the planchette of the Ouija board isn't ordinarily going to move toward "Yes" or "No" on its own. Well, except in movies involving ghosts. Or in planchettes that have been altered by the user in ways that likely involve magnets.
So the manufacturer of the Ouija board's "protection" against libelous statements is that the Ouija board doesn't actually make statements. In court, the manufacturer would point toward the people whose hands were on the Ouija board operating it. The judge and jury would believe that those hands operated it. The manufacturer is not liable because its product did not actually make any claim. Some user did.
In contrast, if you ask "Is Abe a child molester?" and ChatGPT responds "Yes" (likely elaborating with "evidence"), ChatGPT actually did answer. It wasn't some other user laying on hands and pushing ChatGPT's planchette.
That's not a problem. Not touching the planchette is analogous to not using the chat bot.
So to your claim
You asking the question (or more accurately, typing the question into the prompt field) is you adding motion to the system, without which nothing happens. Typing in that field is you moving the planchette, it is you shaking the magic eight ball, it is you shuffling and drawing the tarot cards, etc. and so-on.
If you don't use the tool, it doesn't do anything. It is incapable of autonomous action.
I manufacture a dangerous toy that may kill a child who plays with it. A child plays with it and dies. The parents sue me. My defense is "Well, if the child never played with the toy, it wouldn't have killed her. The toy is incapable of autonomous action."
Which wouldn't be a libel case.
Escher,
Sure.
No. You entering the question at the keyboard is you asking chatGPT the question.
With ChatGPT you then do nothing further. With the Ouija board, you or someone then continues to touch the planchette-- pushing it around.
With ChatGPT, while you are doing nothing, it writes an answer. With the Ouija board, the answer only arrives if you or someone is continuing to touch it. Otherwise it stops. So it is not generating the answer. Those touching the planchette are.
I don't think a jury would see this differently than I do. Normal people can see that the answer is created by those touching the planchette.
... are you also confused that all the dominoes fall because you hit the first one?
Yes, things keep moving for a bit. But like a Rube Goldberg machine, it's all your fault.
Seems to me that if I set up a Rube Goldberg machine that can fire off a cannon in the direction of a house, and someone sets off the machine and the cannon fires and destroys a house, I am liable for having recklessly aimed the cannon and set up the sequence.
The person who tripped the machine might also share liability, but I don't see how their being liable eliminates my liability.
But what do I know? I'm just an engineer who would think the engineer who created that Rube Goldberg machine should be liable. And honestly, I suspect they are. The possible result of damaging the house is foreseeable; I designed and created the machine, and I certainly could have made a different, equally entertaining machine. I'm not seeing how I should be indemnified because some person tripped the first action in the sequence (possibly not knowing what havoc would ensue).
If you want to talk about liability, go for it.
Volokh is talking about libel, and until this post, so were you.
But don't segue from one to the other and pretend the arguments are the same.
Escher
He is talking about liability for libel.
You brought up the issue of dominoes and fault associated with their falling. That fault had nothing to do with libel-- but does touch on liability for a harm.
If you change the subject from liability for libel to liability for other harms, you shouldn't get so grumpy when other people follow down the trail you blazed.
Not enough information provided.
Jack asks the Ouija board if Abe is a child molester and it says yes. Jack's hands are the only ones on the board. Hasbro is not liable.
Jack asks the Ouija board if Abe is a child molester and it says yes. Irene's and Jack's hands are both on the board. Irene may be liable. But it would be impossible to prove without an admission.
Jack asks the Ouija board if Abe is a child molester and it says yes. Nobody's hands are on the Ouija board. An earthquake, a passing truck, the wind, or indeed a ghost, are not liable.
Jack asks the Ouija board if Abe is a child molester and it says yes. Abe's hands are on the Ouija board. Jack may still be liable. He would not be liable if he had said "Abe is an admitted child molester."
Welp, you didn't say "Hasbro", so you're in the clear for now.
lucia_l: That is correct. Moreover, the newspaper would be liable not just derivatively of the author (the way an employer might be derivatively liable, on a "respondeat superior" / vicarious liability theory, for the conduct of an employee) -- it would be liable based on its own misconduct (and depending on its own mental state).
Eugene,
Is this true in the case of a book review done by the newspaper? In paragraph 18 of the review, the newspaper writes, "In his new book, Former VP Mike Pence made the startling (and entirely unproven, to this point) allegation that staffer John Q. Smith molested children while working for the White House."
Can a newspaper really be liable for repeating this entirely false (let's assume) statement by Pence, as part of its comprehensive book review? We never covered this directly in my First Amendment Const. Law class, decades ago. But I always assumed that newspapers (et al) had some sort of "fair reporting" defense. I gather that I'm wrong about that, yes?
santamonica811, let’s see how I do when the lawyers weigh in.
First, Mike Pence is not someone I consider to be a reliable reporter about anything. And I have no notion what political entanglements he might be trying to sort out by accusing Smith. And I don’t like my newspaper making potentially libelous accusations against anyone unless two standards for evidence are met:
1. I consider the evidence to be factually right near the 100% certain range, better than reasonable doubt. That means I am never going with something like that allegation unless I have great evidence that I found on my own initiative. Thanks, Mike Pence, for alerting me to a story I can investigate from scratch. If it pans out, you will get mentioned.
2. I have the evidence from a source who can die before I go to trial, and the evidence I already have in hand will remain in a form that will withstand attack in court.
So, no, I’m not going with the Mike Pence blockbuster. It looks to me like the evidence is on quicksand. If I have the time, and a theory about how to go about it, I will not forget to investigate.
That said, I think the fair reporting privilege is about repeating stuff said under oath in court. People lie in court. You can’t hold a newspaper responsible for reporting what happens in court proceedings.
santamonica811: Quite possibly liable; see this post for more. Some courts have recognized a "neutral reportage" privilege for certain kinds of reports of certain kinds of allegations, but it's complicated (the linked-to post discusses that). "Fair report" generally refers to fair report of allegations made in government proceedings.
We had something similar in the Steele Dossier reporting.
For months Fusion GPS (maybe they should change their name to Fusion GPT) was pushing the Moscow PeePee tapes to news media but they wouldn't touch the allegations because it was hearsay stacked on hearsay.
When it was finally reported, they had to sanitize it (to the extent possible to sanitize it), by reporting it as the content of an intelligence briefing to Trump.
I doubt that meets EV's standard of a "fair report of a government proceeding," because it was still unsourced non-public information, but it was still enough of a hook to report on it.
Shorter answer. Copyright protects creativity by granting the creator rights in his/her creation. Defamation protects persons from false factual statements that damage their reputation.
Only if Volokh is wrong.
Volokh has made it very clear that for his argument, whether or not anyone's reputation is damaged is irrelevant: the mere possibility that the program might produce a libelous statement, that might be believed by a non-reasonable person† is sufficient.
________
†Seriously, if you use one of these programs and immediately trust everything they say, you're an unreasonable idiot.
I asked the question in the comments. But, I didn't ask it very well.
My question was a level deeper. AI works do not get copyright because they are the output of an algorithm. Algorithm outputs are facts, not creative works, regardless of the complexity. An algorithm that generates the Mandelbrot fractal creates something infinitely complex and pleasing. But it is merely a fact.
The square root of 16 is 4. That cannot be defamatory because it is a true fact that using the square root algorithm on the number 16 produces 4.
Even if the algorithm and its inputs are hidden from the user the algorithm outputs are still facts. If an algorithm decrypts a message with a hidden key and algorithm to get "Paul is a child molester," that is still a fact that the decrypted message is "Paul is a child molester."
LLMs are extremely large, opaque algorithms. They have hidden inputs generated by the owner/creator of the AI. But, the outputs are algorithmic from user inputs.
The word "fact" here is going to add confusion, not reduce it.
Dude, you're just talking yourself into suing Hasbro because a Ouija board called you a dick.
That's opinion. And the creators of AI would differ with you that they have created a Ouija board. When they promote it as such, then they would have a good defense to defamation.
Of course they would say they haven't created a Ouija board: that would be copyright infringement.
But other than that one developer who went around the bend and started claiming there was actual sentience, everyone actually working on these things is well aware that it's just a tool.
You meant trademark infringement.
Sure. As I have frequently stated, I am not a lawyer; I don't hate myself enough.
I gotta say, I’m having a lot of trouble following EV’s arguments and assertions here.
If AIs are, in fact, creating “new work,” then the defamatory content is being published (created) … by the AI. Which would require a sea-change in how we view parties and the law, which is not something I really see much grappling with yet. ( I should add that I don't think that current versions of AI that we are using are doing this, and that this would be a strange way of viewing it at this point ... but maybe not in the future).
OTOH, if they aren’t … if we view this simply as algorithms outputting information, then they are a tool-- arguably, the person requesting the defamatory content (the person who gave the query) is the person responsible for the defamatory content. You request it, you created the information, you are responsible for what it has created.
I think that the various arguments trying to create tort liability to the companies that make the algorithms seem to be too clever by half.
lokl13,
I think ChatGPT does create "new work". For a more neutral example:
Are you familiar with Citadel Group Ltd. v. Washington Regional Medical Center, 536 F.3d 757 (7th Cir. 2008)?
That's a lot of stuff! It's totally wrong. I took my time asking a bunch of questions to see just how much detail it would make up. (A lot). Then I finally told it this
This is a link to a case:
https://scholar.google.com/scholar_case?case=280847062334440433&q=citadel+group+limited+v+washington+regional+medical+ctr&hl=en&as_sdt=400006
Visit. You will see ChatGPT's version of this case is entirely fictional.
This is the summary of the case
Nothing to do with software.
I first learned of this case in another "interview" of ChatGPT. That was equally hilarious because it made up all sorts of entirely different fiction about that case.
ChatGPT's "hallucinations" amount to "creating new facts" or just "BSing".
On the one hand, I think the above shows ChatGPT makes stuff up. But I also think it's worth showing just how creative it gets:
Me:
Can you cite cases where an independent contractor in Illinois was bound by a non-compete clause.
(Same case as above. But no, this case is also not about non-compete agreements.)
me to ChatGPT:
What was the name of the physician working as a private contractor in "Citadel Group Ltd. v. Washington Regional Medical Center, 536 F.3d 757 (7th Cir. 2008)" which you cited?
me: What was the name of the Hospital and the physician practice group in "Citadel Group Ltd. v. Washington Regional Medical Center, 536 F.3d 757 (7th Cir. 2008)" which you cited?
And so on.
I think juries would see this as ChatGPT "making things up". It's not merely getting a little jumbled.
"But I also think it’s worth showing just how creative it gets"
Maybe it should get a copyright?
Bored Lawyer, giving it (or its owners) copyright strikes me as more fair than immunizing it from liability for reputational harms it causes!!
I’d gladly share the copyright for our co-authorship, including my eloquent prompts and its answers!
It's no more creative than a fractal pattern.
Might be pretty to look at, but there is no intent, no mind, and nothing capable of being creative involved.
I'm quite familiar with the technology (and the hallucinations). That's why there's the "black box" aspect of this type of deep machine learning that we see- we're not always sure why we are getting the outputs that we are getting.
That said, there is still a long way to go from deep machine learning (however good) to full-on personhood capable of being sued. If we view the outputs as the work of an independent creative agent, then I think we have other issues going on.
To put it in slightly different terms- if you teach an ape sign language, and the ape defames a person, then who (or what) is responsible for the defamatory statement by the ape? I think that if EV answers that question in a satisfactory manner, I'll have a better understanding of his actual analysis than trying to couch it in the way he is.
Loki, I was surprised by that from EV too. I had expected something like an assertion that the algorithm ought to be treated as an unreliable reporter, on whose report you could not reliably publish anything without risking a libel suit. But I do wonder about the rare case where the algorithm's bad report goes to exactly the wrong person—like the guy who was about to hire the job candidate, and checked him out with the AI, with the result that the candidate lost out because of a lie.
"the person requesting the defamatory content (the person who gave the query) is the person responsible for the defamatory content."
I don't understand this. The query is like:
- Searching a particular website for information on a question you have and pulling up a page on the website with some content.
- Calling a former employer or other reference for an honest opinion and thoughts on a potential hire.
- Asking a background check company for information on someone.
- Writing a letter to a magazine or newspaper asking them to write about a topic that you have questions about.
Maybe if the query was, "write me a defamatory statement about John Doe and the Acme Corporation," then sure.
I haven't looked carefully, but I wonder if the defamation issue has come up with respect to the manufacturers of drug tests or polygraphs? There might be some related stuff there worth checking out.
I think most of these threads suffer from un-clarified commenter disagreements on premises, about what point of view the VC commenter assumes in relation to the AI. If you assume the AI is hard wired to publish its output to the internet as news for everyone, that seems a different assumption than someone who queries the AI privately, is not in a position to use misreported information to damage someone, and keeps the report to himself.
That said, it seems likely the principal mode of damage which AI chatbots will fall into is by pseudonymous second-hand repetitions of AI reports. Some folks will be motivated to treat those as if they were anonymous news sources.
I expect we are about to discover that internet lying pioneers like Alex Jones and Donald Trump have just scratched the surface. Coming soon will be new vistas of entrepreneurial opportunities to use bots to lie for money. I expect lying chatbot algorithms will turn out to be prolific and inexpensive-to-operate attack machines. The bots will vastly outcompete humans as efficient generators of fake verisimilitude.
Efficiency will be but one of the advantages for which human liars might want to choose bots. There will also be opportunities to use them for legal cover. After a human's pseudonymous cover gets blown, he can still try to escape responsibility by saying he gullibly trusted the bot.
If it turns out that bots can be branded as libelers, then that might paradoxically increase their usefulness, to take the fall for humans who need legal cover to dodge their own responsibility. Imagine the circuses you could get in court, as lawyers compete to confuse jurors about whether chatbots can be trustworthy, or even if they cannot, whether it is reasonable to punish a gullible trusting human who mistakenly thought otherwise. As Alex Jones has demonstrated, resources to hire good lawyers are easy to come by, if you can keep lying fast enough on the internet.
Stephan
Sure. And someone who queries the AI privately, is told something damaging, does not pass it on, but does act on it, is yet another thing that happens. If I decide not to hire a nanny because the bot falsely claims they are a child molester, the nanny is harmed even if I don’t pass that on to other people interviewing nannies. I don’t further defame the nanny, but the nanny was harmed by the chatbot’s false and defamatory statement.
There are always three distinct parties in these defamation claims: (1) the party who ‘uttered’ the defamation, i.e. ‘the source’. (2) the party who heard or listened to the defamation. (3) the party harmed by the false story: the defamed.
Harm can be caused by the source’s defamatory statements even if the listener doesn’t further defame the party defamed.