Defamation, Responsibility, and Third Parties
A bunch of comments to my Large Libel Models posts suggest that, when users believe (say) ChatGPT-4's fake quotes about others, the true responsibility is on the supposedly gullible users and not on OpenAI. I don't think this is consistent with how libel law sees things, and I want to explain why.
Say that the Daily Tattler, a notoriously unreliable gossip rag, puts out a story about you, saying that "rumor has it that Dr. [you] had been convicted of child molestation ten years ago in Florida, as the Miami Herald reported." This is utterly false, and the result of careless reporting on their part; there was no conviction and no Miami Herald report. Yet some people believe the story, and as a result stop doing business with you. (Say you're a doctor, so your business relies on people's confidence in you.)
Now there are three parties here we can think about.
- There's you, and you're completely innocent.
- There's the Daily Tattler, which published a story that's negligently false.
- And there are the people who stop doing business with you. They too might be viewed negatively: Perhaps they're gullible for believing what the Daily Tattler says. Perhaps they're unfair in not looking things up themselves (maybe checking the Miami Herald's archives), or calling you and asking your side of the story.
But the premise of libel law is that you can sue the Daily Tattler, even though, in a perfect world, the readers would have done better. You can't, after all, sue the readers—it's not a tort for them to avoid you based on their gullibility. And the Daily Tattler is at fault for negligently putting out the false assertion of fact that could deceive the unwise reader. Yes, perhaps people should be educated not to trust gossip rags. But so long as readers do in some measure trust them (at least as to matters where the reader lacks an incentive to do further research), libel law takes that into account.
Now, to be sure, the law doesn't always allow liability for publishers based on every unwise reaction by readers. In particular, the question whether the statement "state[s] or impl[ies] assertions of objective fact" turns on the reaction of a reasonable reader. A statement that a reasonable reader would recognize as parody, for instance, wouldn't be actionable even if some readers might miss the joke.
But when it comes to statements that a reasonable reader would perceive as factual assertions, they are potentially actionable if they are false and reputation-damaging. That the reader might be unwise for trusting the source doesn't get the source off the hook.
So if you sue the Daily Tattler for negligently publishing the false allegation against you, the Tattler can't turn around and say, "It's not our fault! It's the fault of the stupid readers who trusted us, notwithstanding our having specifically labeled this as 'rumor.'" Under well-established libel law, it would lose.
Now maybe there's some public policy reason why OpenAI should be off the hook for ChatGPT-4 communications, because it has warned people that the communications may be inaccurate, when the Daily Tattler isn't off the hook for its communications, despite its warning people that the communications may be inaccurate (since they're just rumor). But standard libel law seems to take a different view.
[* * *]
Here's what Part I.C of my Large Libel Models? Liability for AI Output article has to say about the general legal background here; note, though, that I had posted an earlier version of that chapter last week.
AIs could, of course—and probably should—post disclaimers that stress the risk that their output will contain errors. Bard, for instance, includes under the prompt box, "Bard may display inaccurate or offensive information that doesn't represent Google's views." But such disclaimers don't immunize AI companies against potential libel liability.
To begin with, such disclaimers can't operate as contractual waivers of liability: Even if the AIs' users are seen as waiving their rights to sue based on erroneous information when they expressly or implicitly acknowledge the disclaimers, that can't waive the rights of the third parties who might be libeled.
Nor do the disclaimers keep the statements from being viewed as actionable false statements of fact. Defamation law has long treated false, potentially reputation-damaging assertions about people as actionable even when there's clearly some possibility that the assertions are false. No newspaper can immunize itself from libel lawsuits for a statement that "Our research reveals that John Smith is a child molester" by simply adding "though be warned that this might be inaccurate" (much less by putting a line on the front page, "Warning: We may sometimes publish inaccurate information"). Likewise, if I write "I may be misremembering, but I recall that Mary Johnson had been convicted of embezzlement," that could be libelous despite my "I may be misremembering" disclaimer.
This is reflected in many well-established libel doctrines. For instance, "when a person repeats a slanderous charge, even though identifying the source or indicating it is merely a rumor, this constitutes republication and has the same effect as the original publication of the slander."[1] When speakers identify something as rumor, they are implicitly saying "this may be inaccurate"—but that doesn't get them off the hook.
Indeed, according to the Restatement (Second) of Torts, "the republisher of either a libel or a slander is subject to liability even though he expressly states that he does not believe the statement that he repeats to be true."[2] It's even more clear that a disclaimer that the statement merely may be inaccurate can't prevent liability.
Likewise, say that you present both an accusation and the response to the accusation. By doing that, you're making clear that the accusation "may [be] inaccurate." Yet that doesn't stop you from being liable for repeating the accusation.
To be sure, there are some narrow and specific privileges that defamation law has developed to free people to repeat possibly erroneous content without risk of liability, in particular contexts where such repetition is seen as especially necessary. For instance, some courts recognize the "neutral reportage" privilege, which immunizes "accurate and disinterested" reporting of "serious charges" made by "a responsible, prominent organization" "against a public figure," even when the reporter has serious doubts about the accuracy of the charges.[3] But other courts reject the privilege altogether.[4] And even those that accept it apply it only to narrow situations: Reporting false allegations remains actionable—even though the report makes clear that the allegations may be mistaken—when the allegations relate to matters of private concern, or are made by people or entities who aren't "responsible" and "prominent."[5] It certainly remains actionable when the allegations themselves are erroneously recalled or reported by the speaker.
The privilege is seen as needed precisely because of the general rule that—absent such a privilege—passing on allegations can be libelous even when it's made clear that the allegations may be erroneous. And the privilege is a narrow exception justified by the "fundamental principle" that, "when a responsible, prominent organization . . . makes serious charges against a public figure," the media must be able to engage in "accurate and disinterested reporting of those charges," because the very fact that "they were made" makes them "newsworthy."[6]
Likewise, the narrow rumor privilege allows a person to repeat certain kinds of rumors to particular individuals to whom the person owes a special duty—such as friends and family members—if the rumors deal with conduct that may threaten those individuals. (This stems from what is seen as the special legitimacy of people protecting friends' interests.[7]) This is why, for instance, if Alan tells Betty that he had heard a rumor that Betty's employee Charlie was a thief, Alan is immune from liability.[8] But the privilege exists precisely because, without it, passing along factual allegations to (say) a stranger or to the general public—even with an acknowledgement that they "may [be] inaccurate"—may be actionable.[9]
Now a disclaimer that actually describes something as fiction, or as parody or as a hypothetical (both forms of fiction), may well be effective. Recall that, in libel cases, a "key inquiry is whether the challenged expression, however labeled by defendant, would reasonably appear to state or imply assertions of objective fact."[10] It's not actionable to state something that obviously contains no factual assertion at all—as opposed to just mentioning a factual assertion about which the speaker expresses uncertainty, or even disbelief.[11] But neither ChatGPT nor Bard actually describes itself as producing fiction, since that would be a poor business model for them. Rather, they tout their general reliability, and simply acknowledge the risk of error. That acknowledgment, as the cases discussed above show, doesn't preclude liability.
[1] Ringler Associates Inc. v. Maryland Casualty Co., 80 Cal. App. 4th 1165, 1180 (2000).
[2] Restatement (Second) of Torts § 578 cmt. e; see also Martin v. Wilson Pub. Co., 497 A.2d 322, 327 (R.I. 1985); Hart v. Bennet, 267 Wis. 2d 919, 944 (App. 2003).
[3] Edwards v. National Audubon Soc'y, 556 F.2d 113 (2d Cir. 1977). A few later cases have extended this to certain charges on matters of public concern against private figures. Others have rejected the privilege as to statements about private figures, without opining on its availability as to public figures. See, e.g., Khawar v. Globe Int'l, Inc., 965 P.2d 696, 707 (Cal. 1998); Fogus v. Cap. Cities Media, Inc., 444 N.E.2d 1100, 1102 (App. Ct. Ill. 1982).
[4] Norton v. Glenn, 860 A.2d 48 (Pa. 2004); Dickey v. CBS, Inc., 583 F.2d 1221, 1225–26 (3d Cir. 1978); McCall v. Courier-J. & Louisville Times, 623 S.W.2d 882 (Ky. 1981); Postill v. Booth Newspapers, Inc., 325 N.W.2d 511 (Mich. App. 1982); Hogan v. Herald Co., 84 A.D.2d 470, 446 (N.Y. App. Div. 1982).
[5] A few authorities have applied this privilege to accurate reporting of allegations on matters of public concern generally, but this appears to be a small minority rule. Barry v. Time, Inc., 584 F. Supp. 1110 (N.D. Cal. 1984); Tex. Civ. Prac. & Rem. Code § 73.005.
[6] Edwards, 556 F.2d at 120. Likewise, the fair report privilege allows one to accurately repeat allegations that were made in government proceedings, because of the deeply rooted principle that the public must be able to know what was said in those proceedings, even when those statements damage reputation. But it too is sharply limited to accurate repetition of allegations originally made in government proceedings.
[7] Restatement (Second) of Torts § 602.
[8] Id. cmt. 2. Another classic illustration is a parent warning an adult child about a rumor that the child's prospective spouse or lover is untrustworthy. Id. cmt. 1.
[9] See, e.g., Martin v. Wilson Pub. Co., 497 A.2d 322, 327 (R.I. 1985).
[10] Takieh v. O'Meara, 497 P.3d 1000, 1006 (Ariz. Ct. App. 2021).
[11] See, e.g., Greene v. Paramount Pictures Corp., 813 F. App'x 728, 731–32 (2d Cir. 2020). Even then, a court might allow liability if it concludes that a reasonable person who knows plaintiff would understand that defendant's ostensible fiction is actually meant as a roman à clef that conveys factual statements about plaintiff. The presence of a disclaimer wouldn't be dispositive then. See, e.g., Pierre v. Griffin, No. 20-CV-1173-PB, 2021 WL 4477764, *6 n.10 (D.N.H. Sept. 30, 2021).
Isn't that more or less the point that people are making—that a reasonable reader wouldn't interpret ChatGPT output as a factual assertion? Not just because of the disclaimer (though that's certainly part of it), but rather taking into account the full context of the program?
(I agree this is a more difficult case to make as regards products like Bing that are explicitly marketed as being able to provide factual answers, and probably some of OpenAI's other GPT implementations.)
Off topic, but I'm wondering what chatbots say about each other.
EV: To win the case, you should be able to find a plaintiff whose opinion of someone was genuinely changed. So far, you have only presented examples of someone who set out to find dirt and succeeded in finding dirt.
Do you have a supply of people who would innocently go to an AI to find information about R.R. and then believe whatever the AI says?
Archibald Tuttle: As you might gather, I'm not litigating anything here. I'm writing a law review article about what kinds of lawsuits might be filed, especially once these AI systems become more heavily integrated into various existing applications, such as search engines. If it turns out that nobody actually runs any such queries, or nobody ever believes them, that would be excellent. I'm inclined to think, though, that some people might run them and believe them, and some of their subjects might therefore be damaged. If that happens, then I take it the analysis in my article will be useful. Time will tell!
Seems like the devil will be in showing that OpenAI was negligent or acted with actual malice, whichever standard applies.
Maybe there was a post on that and I missed it.
Coming up soon! Or you can see the material in the full rough draft.
Thanks!
I would be interested in why a company cannot copyright work by AI but the company can be held liable for the AI's statements.
If AI is an algorithm operating on inputs that cannot be creative, how can the output be libel? The output of AI should just be a fact: the AI says X.
I think such an argument should be strong when arguing that there is no actual malice.
Copyright requires creativity. Nothing in defamation law does. Repeating a statement doesn't get you copyright on it, but, as shown in the article, repeating a falsehood often gets you liability for it. "The AI says X" often doesn't shield anyone.
My point isn't that defamation and copyright both require creativity. My point is that both creativity and defamation require something beyond reporting the result of an algorithm.
If an algorithm says that a defendant is likely to skip bail, that is not defamatory. The algorithm could be wrong. But repeating its results is merely repeating a fact about the algorithmic output.
This is more akin to the Babylon Bee than to any sort of Daily Tattler. Almost everything I've seen about the chatbots mentions the hallucinations, not merely their terms of service, and there are lots of examples floating around the net making fun of what they make up. The very fact that it's so easy to get them to make things up suggests it's difficult for people to avoid running into the reality that they can make things up.
While lawyers may write using analogies to past cases, new situations aren't an exact match for the past. It is possible to decide that this is different enough that it's time to handle it differently from those similar but not-quite-the-same cases from the past: a tipping point has been reached.
There is a difference between how courts might rule if they rely only on past case law and precedents, and how perhaps they should rule if they think things through differently.
Perhaps society needs to adapt to new technologies rather than forcing those technologies to be crippled to cater to the lowest common denominator with poor critical thinking skills. In general, regardless of what's done with chatbots, society needs to learn to be more careful to validate the information it receives.
Perhaps chatbots could serve as a good object lesson for people throughout society to teach others to validate information better. Even without publicly available chatbots, there will be a rise in the ability to create persuasive misinformation with AI (Pandora's box is open; it's out there even if the big ones were shut off), spread like fake photos and intentionally invented false news.
Perhaps this is a good tool and time to focus on that. It's a concrete thing we can point to as an object lesson: "even though this may sound factual, as you can see from these other sources it isn't. Beware of the same thing happening on social media with things you think are from other humans; validate things by....".
Perhaps trying to apply a legal framework based on human action to the outputs of a generative AI algorithm isn’t the best way to go. Human beings have a legal duty to exercise a standard of care when writing or speaking about others. Humans want to convey information, but have to balance that with the right that others have in their reputation. Do generative AI algorithms have the same legal duty as humans with regards to conveying information about others? Do the creators of the algorithm have such a duty?
If a person is standing in the town square and starts making false claims about another individual, I have no problem holding them legally responsible, because they should know better, both ethically and legally. If a machine with generative AI and all the data from the internet is in the town square making false claims about another individual, I have trouble holding it to the same degree of responsibility that I would a thinking, reasoning, human being. The machine is doing what it was programmed to do based on the code and input data that it was given. It cannot weigh the information it has to discern truth from fiction, nor can it factor in an ethical or legal obligation to err on the side of caution when making claims that can harm another’s reputation. I doubt it even knows the difference between a claim that is harmful and one that is not.
AI does need a legal framework, but our current legal system is based on the conduct of humans possessed with reason. There are fundamental differences between a human and AI, and a legal framework for AI should reflect those differences.
The aspect of our human-based legal system that immediately comes to mind as a possible analogy is the treatment of minors as opposed to adults. Why are minors (or, to use the historical term, infants) treated differently under the law from adults? Does an 8-year-old have the same legal duty as an adult when it comes to publishing defamatory content? If not, why?
The liability would be imposed (or not) based on the actions of OpenAI, an organization of human beings, in designing and operating ChatGPT, not based on the actions of ChatGPT. It's like products liability claims for self-driving cars: They would be based on the actions of Tesla, the company, not of an individual Tesla, or of any underlying AI programs operating a Tesla.
I'll note that Section 230 arose because of conflicting views of how to deal with net content: the bookstore model, where those who don't review content aren't responsible for it, vs. attempts to hold the "evil" corporations responsible. It may be that something equivalent is needed to provide a more rational response here, rather than an emotional one where people blame the deep-pocketed entity rather than the user.
re: "To begin with, such disclaimers can't operate as contractual waivers of liability: Even if the AIs' users are seen as waiving their rights to sue based on erroneous information when they expressly or implicitly acknowledge the disclaimers, that can't waive the rights of the third parties who might be libeled."
The terms of service, though, do require the users to defend OpenAI from any claimed problems arising out of the use of its service. Presumably that includes any libel claims made against OpenAI (even though, as I've noted before, it is a tool and the user created the content).
Regardless of what courts have decided in the past, the issue seems to be a user who falsely interprets a statement as true without validating it. They are the ones who err in doing so, so it's disturbing that people try to (and might get away with) absolving them of responsibility for that mistake in order to blame an entity with deeper pockets. I suspect that temptation, to find money and an "evil" corporation at fault, is why the existing system might be skewed in ways that sound flawed at times.
The problem is that you simply don't understand what this means, legally. Let's say that I sue OpenAI and you have an obligation to defend and/or indemnify them. That doesn't mean that OpenAI gets out of the case or that I can't collect money from them. It means that after I do so, OpenAI can then demand that you write a check to OpenAI for its legal fees and for the judgment. If you don't happen to have enough money to pay OpenAI those sums, then OpenAI is out of luck.
No, the user did not create the content. An obvious illustration of that is that I can ask the AI to generate code for me in a specific programming language to accomplish a specific task. I couldn't possibly create that content because I don't even know the computer language I specified.
re: "The problem is that you simply don’t understand what this means, legally"
No, the problem is that the public doesn't grasp what it means legally: that they would be blamed and drawn into it and therefore should exercise more caution. The point is that the user is taking responsibility, even if under the current broken legal system OpenAI could still be held responsible.
re: "No, the user did not create the content. An obvious illustration of that is that I can ask the AI to generate code for me in a specific programming language to accomplish a specific task. I couldn’t possibly create that content because I don’t even know the computer language I specified."
Vast numbers of computer programmers have no knowledge of the instruction set of CPUs and couldn't program one that way. They use compilers to take something written in a more human-understandable language like C and translate it into something the computer can execute that they couldn't have created themselves. Yet they call themselves the publisher of that executable program; vast sums of money have been generated on the premise that the author of that high-level language code is the publisher, not the vendor that created the compiler.
This just uses an even higher-level language, natural language, to generate an output, whether a program text or a text in English. It's an inanimate thing, and the only human agent responsible for its creation is the person who uses the tool. This tool just happens to not be very predictable.
And that makes it completely different. A compiler undertakes a mechanical process; it only translates exactly, 'word for word,' (so to speak) my C code into machine language. AI does not translate; it does its own thing. Indeed, if I ask it the exact same thing twice it may give me two different responses.
You "program" the chatbot in English, and English is inherently ambiguous, so it's to be expected that you can't accurately predict the output.
Yes, some tools aren't deterministic. That doesn't somehow prevent them from being tools or lead to them taking on agency the way humans have it. At what point in the progression from a simple tool to a complex one that takes natural-language instruction does the tool magically absolve the human of agency? As people suggest with chatbots: reason step by step. Show the breakpoint and why you claim it happens. When does the tool somehow absolve the human of responsibility?
When does someone who fires a simple gun and is held responsible somehow lose that responsibility if they instead fired an experimental AI gun that aims at the nearest warm body but might miss by 10 feet? The person firing the gun is taking that risk; the gun isn't at fault for having a margin of error and not being perfect.
The gun isn't at fault because it doesn't have volition. The manufacturer of the gun is at fault because it does have volition, and recklessly designed a gun that, when used as intended by a customer, had a significant chance of injuring third parties.
re: "The manufacturer of the gun is at fault because it does have volition, and recklessly designed a gun that, when used as intended by a customer, had a significant chance of injuring third parties"
In this case, the manufacturer would have designed a gun that users wanted for whatever reason. Merely because you don't think users should want that or be allowed to have it doesn't mean others shouldn't be allowed to. Perhaps the currently broken legal system tries to inappropriately do what you describe; that doesn't mean it should. It suggests finding ways that new situations should be framed to avoid preventing humans from taking responsibility for their actions and using the tools they wish, even if others don't like those tools and are free not to use them.
No. The ambiguity of English isn't sufficient to explain it; as I mentioned, you can put in the exact same prompt twice and get two different responses.
I never stated that ambiguity in English explains different outputs in this case; that is due to randomness injected into the choice of follow-up words. My point was that English is inherently not fully predictable as a programming language due to its lack of precision. Humans leave information out of their communications, assuming other humans will fill in the gaps from their background knowledge, and the way they fill it in may vary and may or may not match what the first person intended.
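For anyone curious about the mechanics, here's a minimal toy sketch of that injected randomness: temperature-scaled sampling over a handful of made-up candidate words and scores (plain Python; an illustration of the general technique, not OpenAI's actual code). Run it a few times and the same "prompt" yields different continuations.

import math
import random

def sample_next_word(scores, temperature=1.0):
    # Softmax over the candidate scores; higher temperature flattens the
    # distribution, so identical inputs can produce different picks.
    scaled = [s / temperature for s in scores.values()]
    peak = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - peak) for s in scaled]
    return random.choices(list(scores.keys()), weights=weights, k=1)[0]

# Invented scores for candidate continuations of one fixed prompt.
candidates = {"sunny": 2.0, "rainy": 1.8, "stormy": 1.5}
print([sample_next_word(candidates, temperature=0.9) for _ in range(5)])

At a temperature near zero the highest-scoring word wins almost every time; at higher temperatures the lower-scoring candidates get picked more often, which is the knob these systems use to trade predictability for variety.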
"maybe there's some public policy reason why OpenAI should be off the hook for ChatGPT-4 communications" -- e.g. that letting OpenAI be sued would kill the development of potentially very useful products, because there's no way to develop them while guaranteeing against libel. Newspapers can hire fact-checkers, but what, exactly, should OpenAI do?
Well, perhaps what OpenAI should do is not release its product until the product can be guaranteed against libel.
Alternatively, OpenAI can pay damages when its product harms people.
Your approach is the flawed "precautionary principle" progressives push, 100% safetyism where new products must be guaranteed to cause no harm before being released. Usually most people reading a site like Reason would grasp that, from a societal perspective, this ignores the harm done by *not* releasing a product: the opportunity cost, the "unseen" that's being ignored in favor of the easily "seen".
The reason (aside from during the pandemic) the FDA is too slow to approve new products is that it is punished for "seen" harm while the "unseen" cost of delays that cost lives is ignored.
Regardless of what the current legal framework might be interpreted to say by those who don't grasp the tech, many suggest it's better to use imagination to explore the bigger picture and consider whether there might be a better framework, rather than just sticking to the status quo as if that were guaranteed to be the best possible approach. Try learning from the entrepreneurial mindset of the tech folks you wish to squash.
What about all the "unseen" harm done by keeping a product off the market, the opportunity cost? It's unfortunately hard to predict; for instance, we can't know whether letting AI develop would lead to some new method of creating vaccines that'll save millions of people in a new pandemic in 5 years.
Not sure what lives are being saved or even improved by ChatGPT. The predominant model seems to be getting these chatbots to produce copy and imagery humans used to produce, then getting the humans who used to produce it to revise and edit the chatbot product for less pay. It seems specifically designed to make it more difficult for people to earn a living. Oh, and also the generation of overwhelming quantities of dis- and misinformation. Which has been the main effect of tech for the last, what, twenty years or so? Narrowing the argument over accountability for societal harms to legal wrangling over who, exactly, gets sued for libel facilitates this.
There is of course vast uncertainty in this:
https://www.bloomberg.com/news/articles/2023-03-27/goldman-says-ai-will-spur-us-productivity-jump-global-growth
" Goldman says AI adoption could boost annual world GDP by 7%
The team estimated that “generative AI” could raise US labor productivity by roughly 1.5 percentage points per year over a decade. Such a jump would be dramatic — productivity only expanded 1.3% on average in the decade through 2022, undermining wage growth.
...That would be roughly equivalent to $7 trillion. "
Aspects of the same tech are being used to discover new potential medicines, and it'll aid in the business process of producing and distributing them. Similar tech is involved in other aspects of drug discovery that benefit from the hardware and software infrastructure even if the top-level software is different.
I expect very little of that will go to anyone who isn't a billionaire already.
In the draft paper (I hadn't checked to see if it's still there) was this quote:
"Allowing § 230 immunity for libels output by an AI program would completely cut off any recourse for the libeled person, against anyone"
It occurs to me to wonder: if the AI only told some random user, how does the libeled party know they were libeled? Are they mind readers? It seems the only way they'd know is if the user involved tells others about the statement at issue, and in that case it seems the user involved is the one who should be held liable for making statements they didn't exercise due care to validate before spreading them as "fact".
It would seem problematic for society to allow them to claim as a defense that "but the AI told me!". It's the user in that case who should be held negligent for spreading the information, but the legal system seems to have let the progressive world distort it to try to avoid individuals taking responsibility, finding any excuse possible to instead go after "evil" corporations and blame them for the mistakes of an individual.
In terms of the libel that only existed in that person's mind: if they didn't tell anyone, it's academic and there is no way for the legal system to be involved. The only remedy for this theoretical libel no one knows about seems to be that people should be educated to exercise reasonable due care before believing or acting on a statement from the AI as if it were fact. The user might also have believed some mistaken review on a site from a human. The world is full of flawed information, whether officially "libel" or not, and it's useful to get society to learn to get better at evaluating information.
The quote from the draft and the post above sound like the issue is that the current legal system has evolved to find ways to go overboard so people aren't held responsible for their own flawed thinking if there is any excuse to do so, even if a reasonable person wouldn't make the mistake. It seems the legal world has a self-interested bias in enabling causes of action wherever it can find some excuse to blame another party, preferably a deep-pocketed one.
Unfortunately, from a public-choice consideration of incentives that may be implicit and unconscious, it seems natural that it would evolve in directions that lead to more causes of action, regardless of what's best for the public. Judges and legislators who are attorneys are also going to retain the same biased mindset even if they don't consciously think of the fact that they've been trained to involve the law in everything.
It seems likely that *most* of the feared "libel" people are concerned about will exist only in the minds of the individuals who see the flawed information. If the vast majority of it is "invisible" libel where there is no direct legal-system remedy, it seems, as I noted, that the societal remedy should be to focus on educating people about the fact that AI can be wrong (if there are any who willfully ignore all the information telling them that now and all the examples of it).
If that takes care of most cases, then for the cases where people spread the AI's statement to others, it seems the better societal remedy is to hold the user responsible (as they are), to serve as an object lesson for the public and for the press to spread, so that there is motivation to teach the public not to trust AI.
It seems society would be well served by teaching people to cope better with misinformation by trying to validate things more. This can serve as an object lesson, rather than trying to destroy a new industry because it isn't dumbed down enough for people who refuse to take responsibility for validating information and wish to blame evil tech or corporations, and the attorneys who want to profit from it or gain influence by spreading the idea of going after AI vendors and absolving users of responsibility for their own thinking process.
To illustrate a point I'll make: a Wharton prof, Ethan Mollick, noted earlier today that he'd used AI to create a tool that feeds a prompt to AI at the push of a button, and then noted:
https://twitter.com/emollick/status/1640383167897935874
"Devils Advocate: Give me the argument against this point"
I don't recall the name, but there was an AI company's sidebar that also had a button to get counterarguments against whatever was selected.
AI can allow tools to be created that help people critique information: including the information from another AI (or the same one), even if the original output isn't guaranteed to be accurate.
It's doubtful the tech can prevent hallucinations easily, but tools can be used to help people with the process of searching for sources to validate things themselves.
The existence of the problem of hallucination, despite tools being useful, is a market opportunity for entrepreneurs and a research opportunity for tech folks who like solving problems.
Those tools may, as a side effect, help with the proliferation of flawed information from humans, whether technically libel or not. Perhaps not: but we'll never know if AI startups are squashed, the way net companies feared being squashed if Section 230 hadn't resolved the confusion.
Part of the reason for Section 230 was to enable a new technology to evolve without it being potentially squashed by people who didn't grasp the tech and its potential and tried to use poor analogies from old laws and ways of looking at things.
Unfortunately I suspect too few people in the legal world have an entrepreneurial mindset. I see many posts here from people who seem to have difficulty with the idea of brainstorming and evolving ideas and trying to see the point of an idea in order to build on it, instead falling back on existing status-quo thinking and ignoring outside voices. They don't seem to think the way entrepreneurs or tech folks do, but they should understand the arena they are trying to regulate, including the tech and its limitations.
Entrepreneurial solutions may turn out to be better for the public in the long run than government or legal approaches, just as many libertarians feel private solutions would work better than many things done by government, but government often steps in and pre-empts the rise of something that'd be better in the long run. Shortsighted short-term gain at the expense of longer-term prosperity.
Inventions don't happen instantly, despite the impatience of politicians, and apparently lawyers, who wish to "do something!" and rush to use the particular toolset they have to address the problem, even if it's not the best solution for the public in the long run, merely the "hammer" they have, so they see this as a nail.
Perhaps there is some set of identifiable "libel" harms they focus on now, ignoring the larger harm that might be done by a misguided squashing of a new tech prematurely.
Evolving tools to help play devil's advocate and validate things requires that the field not be squashed prematurely by those who are so caught up in the current state of things that they neglect to consider that it will evolve, even if not in the ways they think.
Yet it seems even folks who choose to have their blog at a place like Reason, given its techno-optimist, dynamist, libertarian bent, which would seem more likely to focus on granting people the ability to take the risk of using AI and take responsibility for doing so, are by default trapped in old mindsets. That doesn't bode well for the future if it helps progressives target big tech with big pockets, and it will likely take out startups instead, even if big tech manages to find a way to continue despite efforts to prevent it.
I see that risk in the writings here. Many posters seem hostile to trying to use their imagination to explore ideas and dismissive of those who are actually aware of and involved in the industry and/or heavy users of the tech: those who are concerned with the real-world issue of what's best for the public, not what a flawed legal system has come up with. I'd suggest there is a lot of "anchoring" and "status quo" bias going on, with people clinging to the old ways of looking at things too strongly rather than stepping back to look at the big picture of whether they should look at this differently.
Say someone kills another person using a gun: should the outcome be blamed on the gun, and the manufacturer of the gun held responsible? There is only one human agent involved, and they are responsible for the use of the tool.
Let's say the gun used new tech that tried to aim itself at the nearest warm target to where it's pointed, but wasn't perfect and might hit anything within 10 feet of the person aimed at. The person was aiming at a deer but hit a person within 10 feet whom they should have known was at risk (just as the user of this should know false statements are possible). Should the gun manufacturer be held responsible for the user of the gun because there was a chance it wouldn't hit the person he aimed at? Isn't the human the agent responsible? Let's say the gun is using AI to aim itself: is it somehow now the AI's responsibility, or the AI vendor's? Or is it still the human who took the risk?
Someone is fighting the copyright office that wouldn't copyright their book with AI images. They explain that they went to lots of effort to create those images: they grasp that they are the author, the publisher, the person responsible for those images. The person who uses a tool should be viewed as the author, publisher (unless they delegate that elsewhere), etc. They are the only human agent involved. The AI isn't a human agent and shouldn't be viewed as having agency or legal responsibility for the content.
Why are you posting the same wrong thing that proves your argument wrong in every thread?
I realized most people don't look at old pages, so I posted on the newer one when I thought of the analogy, since most libertarian-leaning folks object to trying to hold gun manufacturers liable, but some here seem to struggle with the idea of making the analogy to this case. So I figured perhaps an intermediate example bringing in AI might lead them to think things through more carefully rather than relying on unquestioned ways of looking at things.
Why are you yet again handwaving and making an unproductive assertion that something is "wrong" without actually arguing the point? I see no reason to consider your assertion credible without actual argument. In the tech world people tend to learn that it's possible for anyone to be wrong and therefore the argument needs to be subject to scrutiny (as J.S. Mill also noted). In the tech world people are used to exploring ideas, some of which may be wrong but get refined to match reality better.
I think the weak link in your argument is the assumption that the output is an *assertion* by the company which has put up the AI.
For instance, as you have observed if you merely randomly generate words the fact that they sometimes say false things isn't libel because those aren't assertions by the speaker.
You are correct that the value in the AI is that the results are often accurate. But there is a huge difference between selling something because it often generates useful responses and asserting them.
For instance, the value in a Google search is that it generates useful and true information in a high percent of cases. Indeed, that's how Google advertises it. However, that's not the same as asserting the webpages it returns are accurate. Sure, they often are, and that's why Google's search has so much value, but Google still isn't asserting they are true, and neither are the hosters of the AI.
Let me note a very counterintuitive consequence of your argument. Suppose that GPT was very inaccurate. Say, it gave a totally false answer 50% of the time. It seems like your argument would establish that it was still libelous.
Hell, suppose it only gave a correct answer 1% of the time. In some areas that could still be very useful (eg if the police have no idea who a serial killer is a guess that's correct 1% of the time would be very valuable) and the creators might try to sell it based on its ability to give an answer that's correct 1% of the time.
As your argument shows here, the mere fact that a site isn't very reliable isn't enough to make it non-libelous. But that's a problem for your argument, since now even a program that puts together random related strings of words would make the hosters liable for libel. I mean, it can't be that merely making the words slightly more accurate (and correctly telling people how frequently it's correct) makes you liable.
The issue is that even though GPT is valuable because it often says the right thing the authors' attitude to the output is no different than if it produced random strings. They are merely saying, "here's some neat randomly generated text that might be relevant" but aren't actually asserting the truth of those claims.
That's why even a news site that only produces false stories can be liable for libel. They are asserting those claims even if it would be unwise to believe them. This is the reverse. Even though it might be reasonable to believe the claims (like it would be reasonable to believe the first result google returns is accurate) they aren't asserting those claims.
A lot of this debate about the accuracy of these chatbots seems to ignore the capacity for malicious use, which is massive. Talk about being able to flood the zone with bullshit: now it can be fully automated. Everyone's got their own pocket 4chan to throw shit at the walls until something sticks. Never mind libel. Say I'm correct, and in the worst case the pinnacle achievement in the history of communications so far, i.e. the internet, is rendered functionally unusable. Would government regulation to curb such a problem, i.e. regulating the use of chatbots (if that's even possible), be an infringement of the First Amendment?
Ok, I think I've spotted where status-quo thinking led this astray and where the handwaving is, in addition to avoiding looking for what will reduce the level of harm even if it isn't through the law. The starting point should be thinking about principles and what should be, rather than prematurely attempting to use perhaps flawed analogies from existing legal precedents to determine how things might be decided currently. [note: if I had more time, and sleep, this message would be shorter and clearer, sorry..]
The status quo involves content from human agents and is therefore potentially distinguishable. Perhaps the choice is made not to distinguish it, but it seems the logic for how to deal with it should be solid before even attempting to decide whether existing rules should be applied to a new situation.
There is user U and unknown person X about whom some false statement is displayed by an AI.
The claim for any harm of a statement as "libel" is that person U believed that false statement about X. It's that act of believing that is the issue, with the question being who is responsible for it. (Setting aside the issue I raised above of how X magically knows what's in U's head in order to claim to be libeled, and whether that process of involving an outside observer changes things.)
re: “that can’t waive the rights of the third parties who might be libeled.”
The issue isn't that user U waived the rights for X and took the risk for them. The issue is that user U took responsibility for making the decision whether to believe a statement from the AI, acknowledging it was their choice to engage in that act of belief or not. They did so either by closing the popup before they get to ChatGPT acknowledging that it may make false statements, or (more arguably) via the reality that there is general information out there about hallucinations that a reasonable person would be aware of, or (arguably) the terms of service they ideally should consider. The user should be viewed as treating the output as if it could have come from a monkey randomly typing things, with no warranty as to its factualness.
Remember: this is a new class of tool they are using; it is *not* a human publication, so analogies from that world may be flawed precisely because there are other human agents involved in those prior cases. In this case, the immediate human agent involved in the belief is only user U: there are no others around and involved in the choice that makes the "libel".
The issue is that user U acknowledged that the AI may display statements that aren't true and therefore took responsibility for being the party who determines whether those statements are true or not. There is no human from OpenAI inside user U's head when that choice is made: user U is the only one who can control whether the libel of believing the false statement occurs.
Section 230 was based on the rational bookstore analogy, that there is no human from the bookstore, or the net service, that can review all content so holding them responsible for content they didn’t review isn’t fair.
Similarly in this case, there is no human from OpenAI inside the mind of user U: it is solely their human agency involved in deciding to believe something. Is it fair to hold a human who isn't involved in that process responsible for the decision of U? Is it because humans should be a priori viewed as too dense to take on agency? Is it just because OpenAI has deep pockets, so people rationalize that it should be blamed, or because that must be the case or the lawyers would have nothing to do?
As some prompt chatbots to do: "reason step by step."
There seems to be acknowledgement that a user U who types something into a post on Facebook is the human agent responsible for causing that content to be created. The software is just a tool. If a human uses a gun to commit a crime, it's just a tool and the human is the agent responsible.
Yet when the tool is this AI, suddenly it's claimed the human's agency is removed, the tool is no longer a tool, and its maker is claimed to be responsible. Don't handwave. Reason step by step: as tools progress in complexity, exactly when and why does a change in a tool somehow make the toolmaker responsible for what is created rather than the person choosing to wield the tool?
It seems like it's because it's labeled AI and people subconsciously anthropomorphize it and view the AI as an agent, but then consciously realize it isn't, so they try to shift the agency over to the vendor instead. If that isn't the flawed reasoning, then reason step by step and explain when and why the user loses agency and the tool vendor somehow acquires it.
There seems to be acknowledgement that if a user types something into a self-serve website creation tool, they are the publisher of that information. If they use a sophisticated compiler to take their high-level language code and produce an executable program, they are the publisher of that. Now someone has created a programming tool where you specify things in natural language (as many have wanted for a long time), which is inherently ambiguous, so the results will be unpredictable to some degree (which is why that's taken time to evolve, to deal with issues of "framing," implicit knowledge, etc.).
Again: reason step by step. When and why does the user suddenly stop being a publisher, with the vendor of the software becoming the publisher, if the tool being used is an AI chatbot? Compiler companies are the publishers of the compiler, not of the programs that users create with it.
These dividing lines need to be reasoned out if there are claims that some line has been crossed. It's necessary to determine what this is applicable to, not merely the handwaving assertions I keep seeing implying "but we know this is a problem!" where there isn't any reasoning as to why, and as to what class of things would have this problem.
If the issue is whether the user should be absolved of the decision to treat the information as factual because it sounds that way, rather than treating it as if it could be the product of randomly typing monkeys, what exactly is the breakpoint for accuracy and why? Reason step by step. When is the human viewed as being unable to take agency for their choice and viewed instead as the victim of an AI system that tricked them into thinking it might be true? What is the class of things that are a problem, and why?
Again: it confuses things to refer to publications created by other human agents. Reason from this case before trying to confuse the issue by using potentially flawed analogies from case law to try to avoid doing so.
If there are claims of design "negligence," those need to be argued step by step, not merely asserted by handwaving from those who seem to know little about the technology and merely pretend it's a priori the case without providing the reasoning. If there is supposed negligence, the exact reason and details need to be pointed out to determine the class of things that would fall within it.
The specific claimed type of harm here is a new thing that just came into existence, since these chatbots just arrived, so it seems useful to step back and consider what the best way to handle it is for society, rather than merely what the easiest way is to use existing frameworks to deal with it.
There may be some analogy to existing things, but there are differences, because it is a program generating this text rather than a human, so there is still a chance it can be handled as a new thing rather than just stuffed into existing frameworks. It seems appropriate to step back and consider, from a societal perspective, the best way to tackle this harm, weighing the overall impacts on society of different approaches. There are potential downside harms to the public if AI is squashed. I delved into that in a prior post above on the question of how this harm is discovered when it'd require mind reading to do so, and the implications of that issue.
Oh, and I should add, in terms of the issue of user U not being able to take on the risk for unknown person X who is potentially going to be libeled: the "risk" that U is taking on is the risk that they need to evaluate the information they receive, decide if it's valid, and make the choice whether to believe it and hence cause this "libel" to exist in their minds.
They are risking being held accountable as the only human involved in that act of belief, rather than trying to hold an inanimate tool responsible. The fact that it's called "AI" doesn't give these things agency; they aren't truly sentient and can't act as agents. The user U is assuming the risk of being the human agent involved in evaluating the information (even if people seem to have a hard time granting the human agency, as if they aren't mentally competent to do so). Whether or not humans are skilled at evaluating information, they should have the choice to take the risk and be granted the ability to think for themselves. If they err and believe something that is "libel," it was their choice to risk that possibility by using a tool that has risks of being wrong.
People seem to try to pretend the mere existence of a statement output by an AI that is false is a problem. Yet if a monkey randomly typing lots of things on a keyboard happens to output a string of text that contains a false statement, is that "libel"?
What makes it a problem is someone's belief in a text string, not its mere output by some process that isn't sentient and doesn't know whether or not it matches reality or intend it to.
If this is somehow different, how and when does the progression from a monkey to this case magically change to it being different? There are many on this site who just make mindless assertions that something posted is "wrong" without offering justifications. Such assertions should be viewed as useless: with no reasoning backing them up, they are equivalent to the output of a random monkey typing. Maybe they are true, but that requires justification, not the mere existence of the statement, to determine.
I mean, it sort of is a moot point, because most monkeys are judgment proof.
David Behar, the crazy anti-lawyer guy who used to post here until Prof Volokh banned him, had as one of his bizarre notions the idea that defamation liability should attach to the people who believed the defamation rather than the people who issued it. RealityEngineer apparently suffers from the same hallucination (term used ironically).
See above. "You could have chosen not to believe it" does not in any way mitigate the liability of the speaker.
Again, this is wrong. There already was likely no liability for content they didn't review. (See Cubby v. Compuserve.) Section 230 immunized them from liability for content that they did review, as well.
1) "Tool" is not a legal term, and it's not a magic word that resolves these issues.
2) You completely misunderstand the way liability actually works. The manufacturer of a product (or "tool") that, when used, causes injury to third parties is routinely held liable for the damages to those third parties. It depends on the specific facts and circumstances, of course. But there is no general rule that "We just made the tool; someone else chose to use it. Therefore, we're not responsible."
3) Perhaps because you're an engineer, you think there must be a law of conservation at play here. You think that if we hold one responsible, we can't hold the other. But there's no such requirement. We can say that the user and the tool maker are responsible.
Because those who created the AI chatbot, not the user, determined the output. (But again: both the user and the vendor can be liable.)
Look at the Roommates.com case if you need some help with this. Although a website is ordinarily not liable for the content contributed by users, if the website participates in whole or in part in developing that content, it can be liable. In this case, the website, through structured questions with a dropdown selection menu, required users to provide specific information that was unlawful, so even though the users themselves provided that information, Roommates.com was not immune.
https://scholar.google.com/scholar_case?case=7987071093240934335&q=fair+housing+v.+roommates.com&hl=en&as_sdt=6,33
If a particular compiler has a bug in it that causes it, under some circumstances, to not compile code correctly, and harm is caused as a result, I don't know why you think the compiler company couldn't be liable.
re: "If a particular compiler has a bug in it that causes it, under some circumstances, to not compile code correctly, and harm is caused as a result, I don’t know why you think the compiler company couldn’t be liable."
I wasn't implying there was a bug in that case. The problem is that there is no bug in this case: merely something humans want and are willing to use even if it may get some things wrong. There is no bug in these systems; they merely don't have the "100% truthful" feature that some wish they had. So don't use it if you need that feature.
re: " that defamation liability should attach to the people who believed the defamation rather than the people who issued it. RealityEngineer apparently suffers from the same hallucination"
What you seem to struggle to grasp is that there is no human in this case who issued it. The only human involved is the human using the tool, which many here seem to constantly gloss over and attempt to hand-wave away. That makes it logically different.
re: "See above. “You could have chosen not to believe it” does not in any way mitigate the liability of the speaker."
You seem to fail to grasp that you are talking about a system involving human content. The framework being created now can distinguish itself from the framework involving humans and make different choices, even if some of you seem to struggle with the idea of thinking things through from basic principles rather than mindlessly reciting the current status quo as if it were a priori the best and only way to do things.
re: "2) You completely misunderstand the way liability actually works. The manufacturer of a product (or “tool”) that, when used, causes injury to third parties is routinely held liable for the damages to those third parties."
That doesn't always happen. A detailed argument would need to be made as to the claimed reasons it should in this case. Merely hand-waving at it doesn't make the case. Far too often people ignore the reality that the devil is in the details. Software engineers are forced to think details through to instruct a machine that can't reason, unlike those who deal with other humans, who can get away with hand-waving to others who share the same preconceptions.
re: "Because those who created the AI chatbot, not the user, determined the output. (But again: both the user and the vendor can be liable.)"
But the chatbot didn't force the user to negligently believe it. The user signed on to be handed things that are a mix of fact and fiction and therefore should be held responsible for doing so.
The viewpoint being pushed here means that people aren't ever allowed to take responsibility for their own determination of fact or fiction. That's absurd. It creates a no-win situation where only a 100% truthful system can be put out, whether users want that or not, since you wish to deny humans the ability to take responsibility for the contents of their own minds. That's absurd.
Perhaps flawed analogies to other situations exist in the real world, but leading to an absurd result where hundreds of millions of people can't use something they want, because of cases that needn't be applied to this situation, is absurd.
re: "Look at the Roommates.com case if you need some help with this"
What you apparently need help with is grasping that this can be new legal reasoning, distinguished from prior cases because it deals with something new. I don't care if poorly reasoned prior cases lead to absurd, useless results in this arena.
The amount of interest in these things means that it's incredibly unlikely your simplistic worldview of them is going to be the way things play out, even if it takes new law to clarify that this is something different to those who struggle with adapting to it and with allowing people the freedom to make up their own minds.
Use some imagination rather than mindless reliance on existing precedents.