The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Large Libel Models: An AI Company's Noting That Its Output "May [Be] Erroneous" Doesn't Preclude Libel Liability
[An excerpt from my forthcoming article on "Large Libel Models? Liability for AI Outputs."]
AIs could, of course—and probably should—post disclaimers that stress the risk that their output will contain errors. Bard, for instance, includes under the prompt box, "Bard may display inaccurate or offensive information that doesn't represent Google's views." But such disclaimers don't immunize AI companies against potential libel liability.
To begin with, such disclaimers can't operate as contractual waivers of liability: Even if the AIs' users are seen as waiving their rights to sue based on erroneous information when they expressly or implicitly acknowledge the disclaimers, that can't waive the rights of the third parties who might be libeled.
Nor do the disclaimers keep the statements from being viewed as actionable false statements of fact. Defamation law has long treated false, potentially reputation-damaging assertions about people as actionable even when there's clearly some possibility that the assertions are false. No newspaper can immunize itself from libel lawsuits for a statement that "Our research reveals that John Smith is a child molester" by simply adding "though be warned that this might be inaccurate" (much less by putting a line on the front page, "Warning: We may sometimes publish inaccurate information"). Likewise, if I write "I may be misremembering, but I recall that Mary Johnson had been convicted of embezzlement," that could be libelous despite my "I may be misremembering" disclaimer.
This is reflected in many well-established libel doctrines. For instance, "when a person repeats a slanderous charge, even though identifying the source or indicating it is merely a rumor, this constitutes republication and has the same effect as the original publication of the slander."[1] When speakers identify something as rumor, they are implicitly saying "this may be inaccurate"—but that doesn't get them off the hook.
Indeed, according to the Restatement (Second) of Torts, "the republisher of either a libel or a slander is subject to liability even though he expressly states that he does not believe the statement that he repeats to be true."[2] It's even more clear that a disclaimer that the statement merely may be inaccurate can't prevent liability.
Likewise, say that you present both an accusation and the response to the accusation. By doing that, you're making clear that the accusation "may [be] inaccurate." Yet that doesn't stop you from being liable for repeating the accusation.
To be sure, there are some narrow and specific privileges that defamation law has developed to free people to repeat possibly erroneous content without risk of liability, in particular contexts where such repetition is seen as especially necessary. For instance, some courts recognize the "neutral reportage" privilege, which immunizes evenhanded reporting of allegations and responses in certain situations: "[W]hen a responsible, prominent organization … makes serious charges against a public figure, the First Amendment protects the accurate and disinterested reporting of those charges," even when the reporter has serious doubts about the accuracy of the charges.[3] But other courts reject the privilege altogether.[4] And even those that accept it apply it only to narrow situations: Reporting allegations and responses remains actionable—even though the report makes clear that the allegations may be mistaken—when the allegations relate to matters of private concern, or are made by people or entities who aren't "responsible" and "prominent."[5] It certainly remains actionable when the allegations themselves are erroneously recalled or reported by the speaker.
The privilege is seen as needed precisely because of the general rule that—absent such a privilege—passing on allegations can be libelous even when it's made clear that the allegations may be erroneous. And the privilege is a narrow exception justified by the "fundamental principle" that, "when a responsible, prominent organization … makes serious charges against a public figure," the media must be able to engage in "accurate and disinterested reporting of those charges," because they are "newsworthy" just because "they were made."[6]
Likewise, the narrow rumor privilege allows a person to repeat certain kinds of rumors to particular individuals to whom the person owes a special duty—such as friends and family members—if the rumors deal with conduct that may threaten those individuals. (This stems from what is seen as the special legitimacy of people protecting friends' interests.[7]) This is why, for instance, if Alan tells Betty that he had heard a rumor that Betty's employee Charlie was a thief, Alan is immune from liability.[8] But the privilege exists precisely because, without it, passing along factual allegations to (say) a stranger or to the general public—even with an acknowledgement that they "may [be] inaccurate"—may be actionable.[9]
Now a disclaimer that actually describes something as fiction, or as parody or as a hypothetical (both forms of fiction), may well be effective. Recall that, in libel cases, a "key inquiry is whether the challenged expression, however labeled by defendant, would reasonably appear to state or imply assertions of objective fact."[10] A statement that obviously contains no factual assertion at all—as opposed to just mentioning a factual assertion about which the speaker expresses uncertainty, or even disbelief—isn't actionable.[11] But neither ChatGPT nor Bard actually describe themselves as producing fiction, since that would be a poor business model for them. Rather, they tout their general reliability, and simply acknowledge the risk of error. That acknowledgment, as the cases discussed above show, doesn't preclude liability.
[1] Ringler Associates Inc. v. Maryland Casualty Co., 80 Cal. App. 4th 1165, 1180 (2000).
[2] Restatement (Second) of Torts § 578 cmt. e; see also Martin v. Wilson Pub. Co., 497 A.2d 322, 327 (R.I. 1985).
[3] Edwards v. National Audubon Soc'y, 556 F.2d 113 (2d Cir. 1977). A few later cases have extended this to certain charges on matters of public concern against private figures. Others have rejected the privilege as to statements about private figures, without opining on its availability as to public figures. See, e.g., Khawar v. Globe Int'l, Inc., 965 P.2d 696, 707 (Cal. 1998); Fogus v. Cap. Cities Media, Inc., 444 N.E.2d 1100, 1102 (App. Ct. Ill. 1982).
[4] Norton v. Glenn, 860 A.2d 48 (Pa. 2004); Dickey v. CBS, Inc., 583 F.2d 1221, 1225–26 (3d Cir.1978); McCall v. Courier-J. & Louisville Times, 623 S.W.2d 882 (Ky. 1981); Postill v. Booth Newspapers, Inc., 325 N.W.2d 511 (Mich. App. 1982); Hogan v. Herald Co., 84 A.D.2d 470, 446 (N.Y. App. Div. 1982).
[5] A few authorities have applied this privilege to accurate reporting of allegations on matters of public concern generally, but this appears to be a small minority rule. Barry v. Time, Inc., 584 F. Supp. 1110 (N.D. Cal. 1984); Tex. Civ. Code § 73.005.
[6] Edwards, 556 F.2d at 120. Likewise, the fair report privilege allows one to accurately repeat allegations that were made in government proceedings, because of the deeply rooted principle that the public must be able to know what was said in those proceedings, even when those statements damage reputation. But it too is sharply limited to accurate repetition of allegations originally made in government proceedings.
[7] Restatement (Second) of Torts § 602.
[8] Id. cmt. 2. Another classic illustration is a parent warning an adult child about a rumor that the child's prospective spouse or lover is untrustworthy. Id. cmt. 1.
[9] See, e.g., Martin v. Wilson Pub. Co., 497 A.2d 322, 327 (R.I. 1985).
[10] Takieh v. O'Meara, 497 P.3d 1000, 1006 (Ariz. Ct. App. 2021).
[11] See, e.g., Greene v. Paramount Pictures Corp., 813 F. App'x 728, 731–32 (2d Cir. 2020). Even then, a court might allow liability if it concludes that a reasonable person who knows plaintiff would understand that defendant's ostensible fiction is actually meant to be a roman à clef that conveys factual statements about plaintiff. The presence of a disclaimer wouldn't be dispositive then. See, e.g., Pierre v. Griffin, No. 20-CV-1173-PB, 2021 WL 4477764, *6 n.10 (D.N.H. Sept. 30, 2021).
It seems like new technology can occasionally benefit from new ways of framing issues rather than relying on old ways of looking at things.
It seems users should be granted some agency to grasp that they may receive output that isn't accurate, so it's their choice to view potentially problematic output. Then it's their choice whether to pass that on to others, and whether they validate it with other sources.
In the world of internet publishing, Section 230 protects sites from liability for the user-generated content they host. Pragmatically it's an acknowledgement that it's like the offline case of a bookstore that can't realistically be responsible for the content of every publication it sells. It seems by analogy that the creators of AI systems shouldn't be viewed as responsible for every output of the program, which they can't predict.
Obviously there are ways to frame the issue to try to hold the software makers accountable: but doing so seems counterproductive, and like it'll undermine a useful new niche of software for little reason other than lack of imagination and a desire to rely on simplistic analogies as if they necessarily had to be the ones used to look at the issue.
Imagine what would have happened to the internet if Section 230 hadn't been enacted and if people also hadn't acknowledged that it's really just framing the issue using a rational analogy to a bookstore. Perhaps this is more of a leap to a different way of viewing things: but it seems a crucial one to consider making.
Is Microsoft held responsible for all the web page content created using Microsoft Word? Microsoft doesn't put the content into Word documents: even if their software produces the problematic output document that goes on the web for others to view. OpenAI doesn't put in the prompt that causes a problematic thing to be generated, even if their software generated something problematic.
Yes: there obviously is a more predictable correlation between the input to microsoft word and the output, unlike with AI. The issue is still that a user chose to have a program create output. No one is forcing someone to enter text into a chatbot: they do so at their own risk and they choose what to do with that output.
Were companies that sold compilers for the C programming language held responsible for the outputs of programs people created using prompts (programs) they put in? This seems by analogy like trying to hold gun manufacturers liable for the uses to which people put their products.
If you owned a monkey and rented it out to someone who had it randomly typing characters and it accidentally eventually typed coherent content that was problematic, should you be liable for that accidental unintentional result?
With the monkey? No, because no reasonable reader is likely to view the monkey's random output as a factual assertion.
But say I had a device that I publicly touted as providing highly reliable answers -- more effective than what many humans provide -- to various questions (e.g., touting its performance on the SAT, on the bar exams, and more), and I started licensing it for use in various applications that require useful output, not random monkey output. At that point, if the device provides statements that are false and damage a third party's reputation, and indeed includes purported quotes and citations to a purported New York Times article that make it seem extra reliable, can I really defend myself by saying, "Naah, it's just a monkey?"
As to "It seems users should be granted some agency to grasp that they may receive output that isn’t accurate so its their choice to view potentially problematic output. Then its their choice whether to pass that on to others, whether they validate it with other sources," do we say that about newspaper articles? Surely people grasp that they may read newspaper articles that aren't accurate. It's their choice to view them. Then it's their choice whether to pass that on to others, and whether to cross-check the article with other sources. Is that a defense when a third party mentioned in the article sues the publisher for damage to his reputation?
For more, see this post.
OK, the point was clear that a statement that "this might be wrong" leaves liability in place.
Is it relevant or irrelevant in libel law how the statement would appear to a reasonable reader? A reasonable person coming to GPT-4 for the first time after reading that it passed the bar exam might be inclined to believe it, but not after the first day of using it. If I see it produce output that someone was convicted of a felony, my opinion of that person will not go down until and unless I get trustworthy confirmation.
This case (https://law.justia.com/cases/federal/district-courts/new-york/nysdce/1:2019cv11161/527808/39/) is not quite on point, since ChatGPT's output when not asked to write fiction is phrased as factual assertions.
And, is it not the case that someone well known to be a liar can still be sued for defamation?
There could be a long fascinating discussion about how much control the makers and operators of these chatbots have over the output. They could always pull the plug, but short of that no human being understands how LLMs arrive at the output they provide except in uselessly general terms. If the makers/operators cause harm to someone's reputation by failing to add fact checking to the LLM, is that more appropriately considered defamation, or negligence?
Nerdy Fred, how does the output of the LLM reach an audience? Is it distributed by a publisher? If so, it is the publisher's responsibility to prevent distribution of defamatory errors. Sue the publisher.
That makes sense for the case of re-publication. I believe Professor Volokh has pointed out that if only one person sees a defamatory statement it is even then still defamation.
Still not how the law works. (Even without § 230.)
re: " do we say that about newspaper articles? Surely people grasp that they may read newspaper articles that aren’t accurate."
The difference is: newspaper articles have a human agent that created them. In this case: the human agent most directly involved in causing a chatbot to create information is the user that prompted it to do so. The chatbot is a tool: like MS Word, a gun, etc. The human agent using the tool should bear the responsibility for risk of using that tool and any consequences with the output. No AI provider is guaranteeing this information is 100% accurate: even if they try to play up its utility. There is no warranty of accuracy the person can rely on: even if some might misguidedly do so. We can't always protect humans from themselves: they should be allowed to use things that are potentially dangerous, whether drugs or AI systems. I personally think recreational drugs aren't something I wish to risk doing, but others should be free to do so and bear the consequences.
It doesn't seem useful to decide: "humans are too stupid to grasp that the AI might be wrong so we need to ensure AI is 100% accurate before we release it!". It's the "precautionary principle" at work, progressive "safetyism" trying to protect people from themselves for their own good since we think they are too dense to take agency themselves. If we require 100% safety: this will be held back years. It's unclear what sort of "opportunity cost" that would indirectly have on society in terms of lost productivity, inventions, and perhaps new medicines that'll cure people, etc. Yup: there may be some harm done from a not fully "safe" product being out there: but the thing to do is to grant agency and responsibility to the users of this tool. It seems likely, policy-wise, that it's counterproductive to go after the companies creating this tool.
We can’t always protect humans from themselves: they should be allowed to use things that are potentially dangerous, whether drugs or AI systems.
You seem not to grasp that your comment is entirely consistent with a practice to assign to the publisher who distributes the output of the unreliable content generator the danger of responsibility to make whole any third party damaged in consequence of that distribution.
I get that all these discussions will seem incoherent to most internet fans so long as Section 230 distracts their attention from long-customary standards of shared liability between publishers and their contributors. Problem is, the customary standards are based on what actually happens, and Section 230 is delusive.
Bookstores aren't held responsible for the content of every publication they carry. Section 230 is merely codifying the same common sense approach of not expecting an entity to be responsible for all content that it can't possibly screen. Should you now hold phone companies responsible for the conversations held over them? It seems you are driven by a desire to find deep pockets to hold responsible for content rather than the agent that actually chooses to create or knowingly pass along a piece of content they have evaluated.
In the case of AI: any "distribution" or "publishing" would be done by the user of the AI. The AI is a tool someone chooses to use: what they do with the content is their choice. Yes: some tools are dangerous, like guns and some involve risk to the person using them like a motorcycle or other dangerous hobby, or drugs that might be contaminated.
Reality Engineer
Someone owns https://openai.com/blog/chatgpt. If we use a metaphor involving bookstores, publishers or readers, the owner of https://openai.com/blog/chatgpt is not the owner of a "bookstore"; they are a publisher and/or newspaper.
The person who visits the url and asks a question is reading a book or newspaper.
ChatGPT may be a tool. But it's a different sort of tool than MS word. MS word doesn't generate content to fill pages.
It is software that prints out words just as many other types of software do, like search engines. It happens to be run via the web, as some versions of Word are now. OpenAI doesn't directly screen or distribute or endorse the words that ChatGPT produces the way a publisher endorses and distributes words they publish. Just applying the word "publisher" to it doesn't make it so. Yes: it's a complex tool, but you choose to use the tool... or not. You choose to risk what the tool outputs. Or you can avoid it. Unfortunately some people wish to deprive others of the ability to use this tool because they label others as too stupid to be allowed to use it, since they might believe the information is accurate despite no one claiming it is.
RealityEngineer
Of course. But the truth remains the truth when it is uttered. You can't make something be not green by saying "Just applying the word green doesn't make it so".
Yes. It's a tool. And someone used it. But a person who asked a question doesn't force the tool to create a false and libelous answer. Those who developed the tool actually fashioned a tool that formulates its own response.
If a robot (i.e. a tool) was programmed to kill someone who said "Can you fix me an omelette?" and the person asking that didn't know the programming did that, the creator of the robot should have some liability for the robot's actions. And that is true even
(a) though the non-programmer user freely chose to use the tool and was perfectly free to not use the tool
(b) if the programmer or robot developer wrote a warning statement saying the robot's behavior could be unpredictable and unreliable in some way and
(c) the program is complex, so the programmer didn't actually know that request would result in the program going out and killing someone.
No. People just want liability to fall on product designers/manufacturers/suppliers who create products that cause something considered to be "harm".
re: “Of course. But the truth remains the truth when it is uttered. You can’t make something be not green by saying “Just applying the word green doesn’t make it so”.”
You’d need to actually make a case that it’s a publisher, not merely assert it as a priori “truth”. A publisher is normally an entity where humans have the ability to screen content and take responsibility for it: and there is no such thing happening in this case, just as no human staff filters all the content on a social media site.
re: ” But a person who asked a question doesn’t force the tool to create a false and libelous answer. ”
No: they ask the tool to come up with a response. They do so knowing that it may create a false answer. It is their choice to do so: they are the human most proximately responsible for causing the AI to generate its output. They could choose not to. But some folks for some reason wish to let them off the hook and blame the tool for doing what it should be expected to do: provide a potentially useful but not guaranteed to be 100% accurate response.
re: ” was programmed to kill someone when who said “Can you fix me an omelette?” and the person asking that didn’t know the programming did that,”
Nothing in that is relevant. The programmers (unless you have some evidence I haven’t heard of) never intentionally programmed these things to create false results. In your analogy, it would be that the user knew it *might* kill someone if the robot were activated and they did so anyway and then tried to weasel out of taking responsibility for their actions.
re: “No. People just want liability to fall on product designers/manufacturers/ suppliers who create products that cause something considered to be “harm”. ”
Except in the real world tools can cause harm and products can involve risk, like say a motorcycle. People choose to buy motorcycles despite the risks, rather than say demanding a motorcycle somehow guaranteed not to let the rider come to harm at whatever $millions it would cost if such a thing were possible.
Humans can make mistakes also, and AIs will for quite a while. They will also be useful tools that will help create new medicines and accomplish other useful things that may lead to much good being done: unless those obsessed with safetyism and the “precautionary principle” paternalistically decide that humans are too dense to be allowed to take responsibility for running tools that might not be 100% accurate.
I’d suggest those trying to lead the world down that path are the ones that will do harm, whether they intend to or not.
Sure. And if you were defending them you would need to actually make the case that they are neither the publisher, the author, nor responsible for the actual content. You can't just stomp your little foot and insist they are "like" a search engine when they do things search engines do not do.
Well, I'm an engineer, not a lawyer. But I've heard tell that judges sometimes consult something called a "dictionary" to decide what a word means. And the Merriam-Webster says a "publisher" is "one that publishes something" and "to publish" means "
There is nothing about "checking" or "humans" in that definition.
Yes. They know this just the same way someone knowing a plane could crash. And they get on the plane anyway. Their knowing this is possible does not prevent the airline, manufacturer or designer from having liability. You seem to want to create some sort of indemnity that doesn't exist for other tools-- like flying machines, guns, blenders or even hammers.
Engineers who design airplanes don't intentionally design poor fuel injection systems. But lack of intentionality on their part doesn't shield them or their employers from liability if the fuel injector fails mid air causing an airplane to crash.
And if their testing causes them to know the injectors periodically don't work, they are also going to be liable for the crash.
So the programmers' lack of intentionality when designing AI-type tools that cause a harm isn't a defense when applied to other tools.
No. We know that right now the people who have deployed ChatGPT at their web site know it can create utterly false statements that could ruin someone's reputation. Whether they knew this before it was deployed we can't be sure. (Though I think it's safe to say they've known it 'hallucinates' for some time now.) Regardless, they certainly know it now. Yet, knowing this, they continue to make it available to the public, who are not specialists and not spending much of their lives testing.
So the user doesn't know the "tool" could make stuff up. In my analogy: it's the creator-supplier who knows the bot might kill. Not the user.
Sure. But you don't seem to grasp the distinction between a harm that happens owing to the faulty design of the motorcycle and a harm that happens for some other reason. In the case of ChatGPT making up false information-- if ChatGPT creates the information, that is ChatGPT's faulty design.
And humans being able to make mistakes doesn't indemnify them from responsibility if they act recklessly. So there is really no reason why AI (or more correctly its owners) should be indemnified from responsibility for its mistakes that happen due to reckless behavior.
You mean the path of holding engineers and designers responsible for the harms caused by their designs? We went down that path long, long ago. We did so because it's the correct path. Trying to carve out some special rule for one tool (AI) that is different from the one that applied to other tools is silly. The "tool rule" you want doesn't apply to any of the other tools you've tried to introduce to defend the rule-- not motorcycles, not airplanes, not guns, nothing. There's no reason why it ought to apply to AI.
Honestly, it's hard to understand how anyone who includes "engineer" in their handle would think your "tool-rule" should apply to any tool!
re: "And the Meriam-Webster says a “publisher” is “one that publishes something” and “to publish” means"
If you'll note: in the context of Section 230 the issue is that social media companies are *not* considered publishers. A high-level definition isn't what's at play: it's the specific usage of the term in this context. The use of "publisher" in the context being discussed here refers to the entity taking legal liability. Context matters.
re: "Yet, knowing this continue to make it available to the public who are not specialists and not spending much of their lives testing."
Yes: because they grasp that the public shouldn't be treated as incompetent children unable to consent to taking risks. They let the public decide to take the risk knowing it can hallucinate.
re: "So the user doesn’t know the “tool” could make stuff up."
That is something you are making up, since all the coverage of these things includes that reality. You are trying to protect some theoretical minuscule truly dense portion of the public who is likely therefore confused in general about what is accurate or true about the world.
re: "Sure. But you don’t seem to grasp the distinction between a harm that happens owing to the faulty design of the motor cycle and a harm that happens for some other reason. In the case of ChatGPT making up false information– if ChatGPT creates the information, that is ChatGPT’s faulty design."
That isn't remotely true. The fact that something isn't an AGI and can't avoid ever making a false statement isn't "faulty design". It's an acknowledgement that the state of the art doesn't allow the creation of perfectly accurate AIs and won't likely for quite a while, and yet the public is willing to take the risk to use them, if paternalistic authoritarians don't prevent them from getting to make that choice.
re: "So there is really no reason why AI (or more correctly it’s owners) should be indemnified from responsibilities for it’s mistakes that happen due to reckless behavior."
Any reckless behavior is on the part of those who take what the AI chatbots say as 100% accurate despite no claims they are except in the minds of those that wish to hold them to fanciful standards not useful in the real world.
re: "Trying to carve out some special rule for one tool (AI) that is different from the one that applied to other tools is silly. "
Again: it isn't a different path. Competent designers of products aren't held responsible for designs that aren't 100% safe since that standard is impossible in the real world. Yes: the system has erred at times due to emotional pleas to hold them to irrational standards, but that isn't the reasoned approach (even if many ambulance chasers and progressive luddites would like that).
Thoughtful, thank you.
There is the possibility that the very bright people building and running LLMs could find a way to keep delivering the benefits while not producing defamatory output so easily. Lawsuits could provide the incentive to invent a fix.
They already have an incentive to find a fix since of course everyone would prefer fully accurate content: the issue is that it may or may not be a simple thing to come up with. People find utility in their ability to "imagine" and create content that hasn't existed before. However that very ability leads to "hallucination" since it can't tell what is real and what isn't since it isn't reasoning about it. Unfortunately the way these things are constructed inherently leads to the creation of content that may or may not be real: and they don't have the ability to "reason" in the way people think to assess it against reality. Even humans argue over what is real and what is problematic: why should it be easy to create a computer that can do so?
Unfortunately trying to reduce any risk of them producing anything flawed is more likely to lobotomize them to be not as useful, throwing the baby out with the bath water. Maybe there is something in the works the public isn't aware of and it is easier than expected: but unfortunately that can't yet be predicted. Unfortunately as Alan Kay said: "the best way to predict the future is to invent it" and no one has yet invented it and they don't know enough to be sure if/when they can.
RealityEngineer
What's the point of this rhetorical question? The fact that it's not easy to create something doesn't erase liability. It's not "easy" to build functional suspension bridges that span long distances. It wasn't "easy" for the Wright brothers to create their first flying machine. It's not easy to create an artificial heart. Engineers don't normally think the fact that creating new technologies is "not easy" means you get to escape liability if you create, sell and market something that doesn't work!
And if civil, chemical, mechanical, electrical, metallurgical and so on engineers thought they could escape liability because "it's not easy" to develop nice things, lawyers and judges would set us straight. (As well they should!)
re: "What’s the point of this rhetorical question. "
If humans can't determine what is real, why should AI be held to a higher standard as if it were practical to create such a thing?
In the real world no product is 100% safe, so it's unclear why you offer analogies to other products that can fail despite best efforts. Not one of the sorts of products you refer to is perfect.
In the real world: humans wish to take the risk to use products that aren’t 100% foolproof. If I had a disease incurable by any means I’d take an unproven treatment and be rather upset at anyone trying to say “it’s not 100% foolproof so you can’t use it, even if you die without it”.
In terms of whether something is "easy": I suspect more total intellectual effort over the decades has gone into creating what is necessary for these LLMs to do what they do than went into any of the products you refer to before they were first used by the public.
RealityEngineer
Of course they aren't perfect. But the point is: their designers, owners, manufacturers and sellers aren't indemnified for the harms they cause as a result of their imperfections. You seem to think that AI should for some reason be indemnified from liability for the harms caused by its failings. That's utterly unique protection for the makers and designer of a tool.
Sure. And medical care and devices do have special consideration precisely because the person who is using it is certain or likely to die without it. That's a reason.
But no one is going to die because they can't access chatGPT. So there is no reason to give it the sort of latitude we give urgent medical treatment.
Wow! You think airplanes, jet engines, steam engines, pigments, nuclear reactors, vaccines, microprocessors, computers aren't the result of decades of intellectual effort by numerous people? Yeah... humans went from the stone age to the computer age in the blink of an eye. You betcha!
re: " But the point is: their designers, owners, manufacturers and sellers aren’t indemnified for the harms they cause as a result of their imperfections. "
Again: in the real world no one holds products to a standard of 100% infallibility since that isn't possible. These aren't "design flaws": it's a product that doesn't live up to your imaginary standards, which no one claims it meets.
re: "But no one is going to die because they can’t access chatGPT. So there is no reason to give it the sort of latitude we give urgent medical treatment."
Actually: in theory there may be situations where someone might die if for some reason they only have a locally run AI and no access to communications and the advice they get is better than nothing.
In addition: progress in AI for one application often advances it for others like the discovery of new medicines. Shutting down AI in one arena will hold back progress in others.
re: "Wow! You think airplanes, jet engines, steam engines, pigments, nuclear reactors, vaccines, microprocessors, computers aren’t the result of decades of intellectual effort by numerous people?"
The examples you used were things like an artificial heart, etc. I didn't say all products: I was referring to the examples you listed in the prior comment, whose development I've read about in the past, and subjectively weighed that against the massive teams behind these AIs.
RealityEngineer,
Neither I nor anyone here has suggested any tool be held to the standard of 100% infallibility! So your repeating that, each time after I say I do not hold anything to that standard, is pretty silly.
Products are normally held liable for harms they cause due to the way they operate (or fail to operate). You are the one trying to get around this general rule.
Perhaps, in some hypothetical case this could occur. In which case, that defense could be brought up in court. If, in the actual case at hand, use was not required to save a life, the notion that it might have in some other situation is not going to reduce the liability.
In the specific examples of defamation being brought up at law blogs, AI is not going to be able to prevail on that defense because its use was not required to save a life.
No one has proposed shutting down AI. The discussion is about holding it accountable for the harms it does-- just like we hold other tools and their makers accountable for their harms.
There is no "massive team" defense against liable. And for the record, OpenAI, developer of ChatGPT, has 620 employees. Even if every single one is a developer, that not a "massive" number of people for an engineering company.
Who says LLMs have to be 100% accurate? The same people who say aircraft have to be 100% safe?
Prudence and diligence commensurate with the harms is normally enough to defeat a tort claim. The human feedback that makes it say "As an artificial intelligence, I am not required to help with that" might be strengthened or broadened to make the likelihood of defamation nearly zero. That might have other trade-offs (say, refusing to detail what Charles Ponzi did), but it would prevent defamation liability.
re: "to make the likelihood of defamation nearly zero."
In terms of utility: it may be that what you are asking for is the equivalent of asking for an airplane that is 100% safe. Unfortunately: no one knows yet since it hasn't been done. These things don't have the same sort of internal model of the world as humans or engage in the same sort of reasoning to validate their views against reality. The training isn't as simple as it might seem: at least at the moment it appears that way, it may be someone has already invented a way to deal with this, or will in a day or a month: or perhaps it'll take years or decades. It isn't yet known.
In the meantime: do you not allow people to take risks? Should we stop people from climbing mountains, riding motorcycles, doing recreational drugs, etc?
Creative output from humans often involves imagining things that don't exist, like a work of fiction or a business plan for something that hasn't yet been done. It's not a simple task, the way these LLMs are built, to separate what is real from what isn't. Humans sometimes have difficulty learning to do so.
Reality Engineer,
You seem to be under the impression that the fact that airplanes aren't 100% reliable helps you.
Airplane designers, manufacturers and operators don't escape liability on the theory that it's impossible to make a plane that is 100% safe! When airplanes crash (or have other malfunctions), the cause is investigated. The manufacturer can be liable if it was a fault on their part. Or the owner could be liable if the inspection and maintenance was not up to snuff. Or the pilot and crew could be liable and so on.
Payouts and settlements can be large. So yeah: airplanes aren't 100% perfect and problems can arise. And people can be liable for those problems. No real engineer argues "the plane is just a tool". or "people got on the plane knowing they sometimes crash." or "holding airplane designers and manufacturers liable will impede the progress of better designs in aviation".
re: "You seem to be under the impression that the fact that airplane aren’t 100% reliable helps you."
In the real world: there are standards for what is considered an acceptable level of analysis and testing and risk. Engineers aren't held responsible for an infinite amount of risk analysis. Nothing is perfectly safe and no rational person in the real world can expect it to be. Yes: there are design mistakes that people are held liable for because they should have been foreseen by a reasonable competent engineer.
There is no 100% safe airplane in existence. Yes: the system is broken enough that sometimes they go overboard with finding liability: but many people consider that a bad thing. Even those that approve of the current system for the most part acknowledge there are limits and 100% safety isn't viable or possible.
RealityEngineer:
You seem to be overlooking the fact that what the AI guys are calling "hallucinations" are foreseeable by those competent in AI. In fact, AI guys are assuring us they are inevitable and unavoidable!
And even if they were unforeseeable at some time in the past, the possibility the bots create possibly defamatory statements has now been observed. Those hosting the current bot at their current site know it will continue to 'hallucinate' and some of that content could be defamatory.
So those people certainly can't use the "who'd a thunk?" defense for future statements. I mean, come on. I know there was a time engineers did not foresee things like bridges "galloping" (like Galloping Gertie), but once that bridge galloped itself to destruction, engineers couldn't continue making bridges like that with the excuse that "Awww... well... Who'd a thunk?"
(And previous bridges that might be prone to the issue got retrofits!)
re: "You seem to be overlooking the fact that what the AI guys are calling “hallucinations” are forseeable"
Yes they are forseeable, so what? Thats the whole point: the users take that risk also and therefore grasp that the output can be false, therefore it isn't libel and this whole attempt to squash their use falls apart.
So that's what makes them potentially liable. If it weren't foreseeable, it would be a different story.
Again: wrong people to focus on. Users aren't the victims. Those being defamed are. I don't take the risk that ChatGPT will say something false about me when you decide to use the service.
The fact that it hallucinates is a foreseeable risk that users take when they use it. It's unclear then why that is relevant, other than to those who think the public isn't capable of grasping that what they see isn't guaranteed to be a "fact" and therefore not libel.
Agreed. So why do you keep saying it?
I reiterate: I am not a user. So I don't take that risk. You don't get to take risks for other people.
This analogy shows how you keep misunderstanding the issue by focusing on whether the user assumed the risk. But the user isn't the one harmed in the ChatGPT example; the person being defamed is.
I should have said the tech could perhaps be held back decades, not merely years (implying perhaps only a few), if the idea of holding AI software vendors/services responsible for the content AI creates takes off. I may be wrong, but many, like Meta’s head of AI Yann LeCun, consider the LLM approach used now to be inherently prone to hallucination without an easy fix. Perhaps a creative one may arise within days or months rather than decades: but the issue should be debated considering it’s possible that this would hold it back from the market for some unpredictable but potentially long period.
It seems like there will be a natural temptation among progressives to find an excuse to go after deep pockets to hold them responsible for content to help them push for “safe” lobotomized woke AI. I’d suggest that libertarian-leaning people at least step back and be very sure their take on the way to look at this is accurate before potentially helping them derail useful tech. I can understand a temptation to jump in quickly to get the debate started: but some paths might spread ideas that’ll really impact the tech.
I should note that I’m coming from the tech&biz world, I’m not an attorney. I’ve been concerned that regulation may hold back the tech: I hadn’t expected this type of concern would arise that might hold things back even without new regulations.
RealityEngineer, your shortcoming is not your technical background. It is your lack of insight into how publishing activities are organized, conducted, and funded. There is nothing inconsistent with business success or technological prowess in applying a real-world standard of responsibility to publishers. More the opposite, actually.
You seem to have glossed over my reference to bookstores as the analogy rather than publishers. You seem unaware of the vast quantities of information being produced that can't be screened by humans in a centralized fashion. If internet content sites and social media were viewed as liable for all the content out there: that would have strangled the growth of the net.
You seem to have glossed over my reference to bookstores as the analogy rather than publishers.
I did no such thing. I replied at length, and explained why the analogy is a bad one. You responded to that below, so I know you saw it.
I do not know whether there's a fix for hallucinations, but it's interesting that if you "argue" with ChatGPT its output improves. Total speculation, but maybe it will help to give it a component that argues with itself. We humans seem to benefit from that habit.
They do use human input to train these AIs. It's likely one reason that Google this week released their Bard system despite its poor reviews is that they are using the human feedback to train it. They can also have one AI help train another. The problem is that these things don't actually reason about the world in the way people think, and so there are limits to what that sort of training can do: there is a "long tail" of potentially problematic responses.
One speculative thread on the difficulties is here:
https://www.lesswrong.com/posts/D7PumeYTDPfBTp3i7/the-waluigi-effect-mega-post
I don't think it will be that bad. The cost of pursuing a court action and the relatively small damages likely to be provable will turn libel judgments into a cost of doing business. Newspapers live with that cost. There is a slightly bigger risk from injunctions. Having been found liable for calling Epstein a pedo, the AI company can be ordered not to do it again. Compliance may be technically difficult.
Newspapers have humans that at least try to get things right. The problem is these AI systems don't know what is "right" and the way they are created its not easy for them to know. There is a reason that people have found it easy to find problematic content: which suggests the risk of a massive amount of it with an unpredictable level of consequences if they are held responsible for it. I suspect the issue is that no one has started going after them for this since they grasp it can make mistakes and aren't trying to hold the AI creators responsible for it yet: but that'll change if the legal world decides they can go after deep pocketed big tech ventures. The most at risk of course then would be the small startups it'll squash.
That might make a good argument to use when lobbying Congress for a section 230 variant that protects AIs. We are in an arms race, and whoever dominates AI will dominate the world. We dare not let China take that position away from the USA. Therefore, we must protect AI developers from legal obstacles.
Are search engines held responsible for showing problematic content in a search result or leading to such a page? (I guess that's relevant to cases at the Supreme Court now.) They produce output that the search engine creator can't predict in advance since it depends on what's put into them.
I'll add: humans can't always easily agree on what is "true" in general, or what is legally problematic content. It seems a rather high bar to expect an AI to be able to determine that accurately: leading to drastically limiting the content if there is risk when it gets it wrong.
Society will need to adapt to changes that are coming that may overwhelm the old ways of looking at things. Unfortunately in a world where lots of fake content will be produced: it seems like the only content that should be taken as potentially "real", and hence subject to determining potential liability for it, is that which a human actually claims they are vouching for and taking responsibility for. It should be viewed like an opinion: not a claimed "fact" unless someone does so explicitly. Yup: that unfortunately may mean anonymous unclaimed problematic content that people will need to learn how to ignore and not take as "real" unless it's verified elsewhere.
Actually, when search engines engage in shadow banning and suchlike they diminish their claim that they are not responsible for the results.
You mention Section 230, which is unproblematic on its face and in original intent, but has turned into a real mess because of the way in which it has been twisted by the courts.
That said, I am unpersuaded that any reasonable person will think that anything ChatGPT says ought to be believed without further checking, no matter how much its output can be made to look like a human's statements.
While I often disagree with the targets of shadowbanning: it merely means they have chosen to take responsibility for some content, just as a bookstore owner might happen to read a book and decide not to carry it, or respond to customer protests that a book isn't worthy of being carried. That doesn't mean they are suddenly responsible for the rest of the content they pragmatically can't deal with.
"... it merely means they have chosen to take responsibility for some content..."
Nope. They are doing no such thing. Again, Sec 230 has been twisted to relieve Google etc. of any responsibility for exactly that content you imagine they have taken responsibility for.
Yes, that is what they are doing. If they shadowban or whatever, they have been alerted to the existence of some content and chosen to do something about it. That doesn't mean they are somehow magically then supposed to have known about all other content out there. It's a rational standard based in reality, not a twisting of Section 230. The "twisting" is the distorted thinking of those who don't understand the issue well enough to think it through clearly.
No; Sec. 230 was expressly designed to relieve Google etc. of such responsibility. No twisting.
It was designed well before Google existed of course, and it was designed to take a rational approach to what is viable to do and what isn't without undermining the existence of useful services on the web. I've never seen one person who whines about it convey any sort of rational alternative that would have allowed the net to usefully have grown to what it is, or explain how it was different from a bookstore and similar analogies. It seems to be driven by those that mindlessly hate big tech and have never been involved in a tech startup or thought about how one would deal with the issues involved.
Yup: that unfortunately may mean anonymous unclaimed problematic content that people will need to learn how to ignore and not take as “real” unless it's verified elsewhere.
RealityEngineer, give me your real name and address, and a brief biography so I can access your previous places of residence. Without legal constraints on what I am permitted to do, I can use that information to create a publication which will destroy your life. Strangers will shun you, and no one close to you will ever fully trust you again. After you die, your descendants will rue their involuntary taint, and hope others do not discover it.
You have not thought these issues through.
Your example isn't thought through. You are presumably a human and you would be held responsible for your knowing actions.
The point was: there will be a vast amount of content that can't be attributed to some human creator. Just as people created spam filters to try to filter problematic spam, there will be work to create filters for other sorts of problematic information.
I didn't say there wasn't a problem to address: my point was regarding the reality that humans will need to adapt to the reality of problems that aren't that simple to solve. Holding the creators of tools responsible is the same mindset of those who wish to hold the makers of guns responsible for the actions of those who use them.
The point should be to track down the actual human responsible: rather than give in to the temptation to attack an easy target that just happens to be visible but isn't really responsible.
Part of adapting to the rise of new technologies may require humans being aware that unfortunately there will be misinformation out there and that they need to grasp that they shouldn't take information as being accurate without some validation method: like a human vouching for it as being accurate in which case they are held responsible for that claim.
Imagine what would have happened to the internet if Section 230 hadn’t been enacted and if people also hadn’t acknowledged that it's really just framing the issue using a rational analogy to a bookstore.
The analogy to a bookstore is not rational. As a means to create damage to third parties, the activities practiced by a bookstore are not commensurate with the activities practiced by publishers.
The error—as so often in these discussions—stems from misunderstanding of what publishers do. To assemble and curate a world-wide audience, then distribute to that audience by means which users can cross-check and index by a keystroke or two, which in effect creates an indelible record of the damaging publication, is activity which creates thousands of times more potential damage than the activity of selling a book with the same content, or even than selling a typical press run of that book.
And keep in mind, the book’s publisher remains liable for defamation. So the bookstore comparison is misleading and inappropriate. Passage of Section 230 was a legislative blunder precisely because it was based on such widespread misunderstandings.
You seem poorly informed about the relevant issues. Some history:
https://apnews.com/article/us-supreme-court-technology-social-media-business-internet-eb89baf1fa30e245c030992b48a8a0ff
"WHERE DID SECTION 230 COME FROM?
The measure’s history dates back to the 1950s, when bookstore owners were being held liable for selling books containing “obscenity,” which is not protected by the First Amendment. One case eventually made it to the Supreme Court, which held that it created a “chilling effect” to hold someone liable for someone else’s content.
That meant plaintiffs had to prove that bookstore owners knew they were selling obscene books, said Jeff Kosseff, the author of “The Twenty-Six Words That Created the Internet,” a book about Section 230."
The bookstore analogy is precisely where this came from.
RealityEngineer, I do not dispute that the bookstore analogy—and other inapt analogies—were influential in the passage of Section 230. It was, after all, reliance on those inapt analogies which made Section 230 the legislative blunder it has turned out to be.
However, you are mistaken to suppose there is some controlling Supreme Court decision about, "someone else's content," which applies generally to publishers. I have already explained to you the reasoning behind holding publishers jointly liable with contributors for damages to third parties. That is reasoning the Supreme Court has endorsed in case after case, for more than a century. It is reasoning which the Court continues to apply to legacy media today. It is reasoning relied upon in the ongoing Dominion case, which threatens the future of Fox. It is reasoning to which you have not devoted a syllable of your reply to address.
If you want seriously to discuss Section 230, and internet publishing more generally, you cannot just ignore that Supreme Court history. Still less can you insist that publishers have nothing to do with defamation. You would be foolish to insist that if publishers' activities inflict damages on third parties, that is something that the good of the public demands that we just ignore.
If you propose to tell defamation victims, "too bad about your ruined reputation, crippled business, and devastated family life," what do you think will happen? A great many members of the public will suppose you intend to target them likewise, and demand instead that their own welfare be treated alike, and counted integral to the public interest, instead of being excluded from it. What could you say in reply which would not redouble the resistance?
To continue arguing as you are will put you in the posture of an internet utopian, pointlessly demanding that the law deliver outcomes you prefer, based on practical means which no one on earth has ever enjoyed capacity to organize. Demands by internet utopians prove self-defeating when the powers they demand conflict with the only means available to deliver them. And that always happens, because the utopians never bother to educate themselves about the means available to deliver those powers, about how they work, and about what their limits are.
It is not because the rules are bad and ought to be fixed that internet utopians become frustrated. It is because what they demand is impossible to do, and they do not know enough about the means required to understand that. Today, that is you in a nutshell.
re: ", I do not dispute that the bookstore analogy—and other inapt analogies—were influential in the passage of Section 230."
They aren't "inapt": it was the obvious analogy even before Section 230 existed. It seemed to be basically codifying the way most people who grasped the issues thought of things.
You keep pulling in the issue of "publishers": when the whole point is that content providers, social media companies, etc. aren't "publishers" regarding most aspects of user-generated content unless they take on that task. They are more aptly thought of as analogous to physical or wireless hardware networks that transmit information from one source to another regardless of what that content is. For most purposes they are and can be content neutral: just like search engines.
Any ways in which they deviate from that analogy need to be reasoned about separately from the default case, where they aren't similar to publishers but rather to bookstores or network layers.
re: "publishers have nothing to do with defamation"
I haven't said any such thing, since I haven't equated these entities with publishers.
re: "posture of an internet utopian"
The "utopians" are those that assume that they will be able to prevent the creation of false content by regulation or legal liability incentives. Its akin to those who think the war on drugs will eliminate drugs, or that the war on alcohol could have eliminated alcohol if only it were pursued more vigorously.
My point is that just as the world needs to adapt to the reality of alcohol and drugs, for better and worse, it will need to adapt to the reality of anonymous, flawed content. Magically wishing we lived in some utopian reality where it could be wished away isn't going to work: the tech is too far along and too widespread for Pandora's box to be closed.
The world has always had a problem with misinformation; this merely exacerbates it. The world needs to adapt by not taking information seriously as a "fact" unless it's claimed as such by some human (known or anonymous).
re: "because the utopians never bother to educate themselves about the means"
I'm coming from the background of those in the tech business world who need to implement things in the real world. I was around the commercial net as it launched throughout the '90s, even before Section 230, and it's those who think its concepts weren't crucial to that rise who seem not to be well educated on the topic. It's those who think that centralized human filtering of all content would have been a viably useful approach (given the low quality of AI back then) who aren't thinking through how things would be implemented. It's those who don't know much about AI tech who don't grasp the difficulty of taming it.
It's correct. That you don't understand that is because you arrogantly think that having published some rag in Idaho that nobody read makes you an expert on anything.
The libel angle comes from people treating a statistical clockwork as a person.
The libel angle comes from real damages inflicted on third parties who did not operate any part of the mechanism.
Does that mean Robert Longo, the author of the 1986 Psychology Today article, and Justice Anthony Kennedy can be sued for libel by over a million United States citizens? I'm not a lawyer. I'm a registrant who has been labeled a pedophile since the age of 14.
P.S. Wrong year on the article; no idea of the actual year.
March 1986 for the article seems to be correct. Kennedy quoted it, indirectly, in 2002, etc. https://narsol.org/2016/03/how-a-1986-psychology-today-article-continues-to-make-fools-of-supreme-court-justices/ Apparently Kennedy said something like “that up to 80 percent of untreated sexual offenders go on to commit more sexual crimes”.
Longo claims it was accurate at the time, and it was in any event insufficiently specific to libel anyone in particular. As for suing a SCOTUS Justice for negligence in relying on inaccurate assertions… no way.
You cannot libel a large group. If you say frat boys are rapists, that's not actionable. When you narrow the focus down to the members of a single fraternity, then you have Elias v. Rolling Stone, where the false rape accusation was actionable.
Well, are you a pedophile? What did you do that made you a sex offender?
You did catch that "since the age of 14" bit? I think we ought to assume that if he says "labeled," he has sufficiently implied that his sexual contact wasn't with, say, a five-year-old.
https://reason.com/2017/09/14/im-appalled-says-source-of-pseudo-statis/
The article where I found the names
As previously noted, you're going to need to differentiate this from Winter v. G.P. Putnam's Sons.
Yes, that was a product liability case rather than directly a libel case. But at its core it was about negligence, which applies both to libel and to product liability. And the publisher was immunized from liability in that case.
If you can't hold a publisher (Putnam) liable for publishing incorrect information about the potential harm of mushrooms in a book, "The Encyclopedia of Mushrooms," it's going to be difficult to hold a publisher (GPT-Chat's or whatever AI) liable for negligent information published, whether it's libelous or negligent in some other manner. Negligence is still negligence, whether it's libel or something else.
Armchair Lawyer: I don't understand -- you can hold a publisher liable for publishing incorrect information that's libelous, including in some situations (e.g., private concern speech or speech about public figures that causes provable harm) on a negligence theory. There are thousands of cases holding that. Putnam is the case that's the outlier, applicable to negligence. Why would it be a closer analogy to libel cases than libel cases are?
If a publisher reviews the information within the publication, then they may assume a reasonable duty to ensure its accuracy.
But if they don't review the information within the publication, they may not have a duty to independently investigate and verify the accuracy of the text published.
Most publishing houses do review the information, and thus assume some sort of duty. In this case, however, due to how ChatGPT works, the publishers cannot review the information before it is actually published. If they can't actually review it, they cannot have a duty to independently investigate and verify the accuracy of what is published.
In this case, the operators of the LLMs are on notice that their products are prone to producing false output and that it can damage people's reputations. I admit my ignorance of the facts in the Putnam case. Was there any suggestion that the publisher knew the book might be dangerously wrong?
Footnote 9 of Winter expressly reserves that question, as well as noting that libel claims may be subject to other rules: "A stronger argument might be made by a plaintiff alleging libel or fraudulent, intentional, or malicious misrepresentation, but such is not contended in this case."
Thank you.
re: " publisher (GPT-Chat’s or whatever AI) "
The issue is more that they shouldn't be viewed as a "publisher," since chatbots or whatever AI aren't sentient entities capable of being held responsible in the way that publishers are. The misguided analogy seems to be to view the companies that create the AI as a "publisher," rather than merely a vendor of software tools people use at their own risk. They are like the vendors that provide a search engine that may happen to display content from a web page that's defamatory, or a browser that displays defamatory content.
But this isn't a "software tool" like Word or a search engine like google. ChatGPT is creating the content; the user of the tool and google aren't.
I don’t understand how one goes from the text of the First Amendment to deciding that in some narrow circumstances it only protects those deemed to be “responsible” and “prominent.”
That idea seems to go against the principle that people aren't supposed to be awarded titles of nobility.
If those deemed to be "responsible" (by whom?) and "prominent" (to whom?) have more First Amendment rights than others, then why can't those "responsible" and "prominent" people also have their votes in elections given extra weight? After all, voting is no more of a fundamental right than free speech.
I find the “responsible” and “prominent” factors to be especially problematic because they go to the alleged status of the individual speaking, not their actual speech. I could understand saying that “responsible” speech about “prominent” individuals gets more protection, regardless of who says it. But the idea that A may say X, but B will be held liable for saying X seems wrong.
Let's assume, for the sake of argument, that CNN is considered "responsible" and "prominent." Let's assume that Fox News is considered "responsible" and "prominent" as well.
First things first. Many conservatives are going to say that CNN isn’t “responsible” and many liberals are going to say that Fox News isn’t “responsible.” Are constitutional rights really supposed to hinge on something so subjective? And doesn’t the assessment of whether a news organization is viewed as “responsible” depend to some degree on whether you tend to agree with their reporting? I get that “responsible” has a technical legal meaning derived from case law that may limit subjectivity to some degree, but I think we are going in completely the wrong direction because the First Amendment is most needed to protect unpopular speech. And humans have a tendency to view those who engage in unpopular speech as irresponsible.
OK. Anyway, “responsible” and “prominent” Fox News makes serious charges against a public figure.
And Bob, having listened to Fox News, tells Joe about those allegations at the local coffee shop. So, Bob can be sued for repeating what he heard on the news, even though Fox News cannot be sued? Ordinary people who are not “prominent” don’t have an equal right to discuss the news of the day???
I think I have a serious bone to pick with the authors of the Second Restatement on this, as well as with any court that has adopted similar reasoning. I believe that this line of thinking goes against our foundational constitutional principles. Person X isn't supposed to have First Amendment rights that person Y lacks. I think such an interpretation of the law violates not only the 14th Amendment, but the Constitution as it was originally conceived, since it in effect elevates a certain class of citizens to a sort of nobility.
"No taxation without representation" was about the idea that the colonists couldn't just be treated as second-class citizens with no say in government. Here, the idea that only those deemed to be "prominent" have full First Amendment rights turns everyone else into a second-class citizen.
This makes me wonder if the people who pulled this “prominent” factor straight from their ass even thought once about the meaning of our American Revolution, the meaning of our Constitution with its prohibition on granting titles of nobility, and the fundamental values of our country when they made this shit up.
This "prominent" factor cannot be rightly considered law, even if it was a court that pulled this out of their ass. It just isn't legitimate. It doesn't have a foundation in our history or traditions. It is an usurpation of power, not an interpretation of law.
David Welker, you seem to have libel law backward. Public figures get a harsher exposure to potentially defamatory publications, not a more favored one.
Beyond that, you seem hopelessly confused about press freedom, and the liberties it guarantees alike to publishers of all kinds, whether prominent or not.
I am beginning to think what the nation will soon require is a rebirth of the notion of civics education, with at least a full year of classroom instruction devoted to explaining publishing and press freedom to would-be internet fans.
I do not generally agree with your takes on the First Amendment or your idiosyncratic views on publishing.
It looks like I may have misread the OP.
It seems that news organizations can report on allegations from "responsible" and "prominent" sources, not that the news organizations themselves have to be "responsible" and "prominent."
Well, that certainly changes things... Oops.
"For instance, 'when a person repeats a slanderous charge, even though identifying the source or indicating it is merely a rumor, this constitutes republication and has the same effect as the original publication of the slander.'"
So if I clip an article out of a newspaper and snailmail it to you, I’m libeling the person the article is about if it isn’t true?
This seems a bit extreme...
I like large models!
There's a lotta money to be sued out of the geniuses who invent dancing bears like this!
Must be tired of waiting for robot cars.
EV, I think you point to the need for a change in the laws. The whole concept of libel seems to me to include an implied mens rea component, or a negligence component, even if those are not required elements of proof.
The AI cases demonstrate for us that libel should include a mandatory mens rea or negligence component to succeed.
Tuttle, is your argument pro-libel? Does it ask for impunity to commit libel if certain means are followed?
You're begging the question, of course, probably in bad faith. AT is saying that what you insist on claiming is libel is NOT libel, not that libel should be permitted.
Why wouldn't mens rea be shown by continuing to provide the AI after being advised that it defames people? Mens rea doesn't require knowledge that an act constitutes a specific crime (it's a concept from criminal law rather than torts) but rather knowledge that the act is wrongful in some way.
However, libel against private figures explicitly uses a negligence standard rather than constructive knowledge of falsehood or anything else like mens rea. See, for example, item 3) at https://www.law.cornell.edu/wex/defamation.
Re: mens, I was thinking of the argument that the AI maker intended the libel as a malicious act.
Re negligence, what could the AI makers have done differently to avoid the risk of libel? To have negligence, you need a theory of something that should have been done that was not done.
I make a compass that in most circumstances points toward magnetic north. People depend on it for navigation. What about the circumstances where it doesn't point north? Should I be liable for negligence? It seems to me that the plaintiff would need to prove that I could have designed the compass some better way.
It just occurred to me that, naturally, we are focused on US libel law. LLMs are deployed on a worldwide network. What happens when one produces false and damaging output about someone who has access to UK or Australian libel law? Both are dramatically more tilted toward the plaintiff than we are used to.
I suspect something confusing the issue is an implicit "anchoring bias" toward the first entity people naturally try to blame for problematic content. When problematic content that looks human-like is seen, there is going to be an unconscious impulse to blame the entity that created that output. Of course rational people know the programs aren't human or AGI entities that can be held responsible. So they continue in the same direction and look to blame the creators of those programs instead.
It's like progressives who read about "gun crime," a faceless statistic that mentions guns and doesn't focus on the often unknown people who held them, and who instead try to go after the creators of the tool.
The person who wields the tool is the one who chooses to do so and who then deals with the consequences. Yes, the outputs of this tool may not be very predictable to those who wield it at times, but that just means they need to factor that into their choice to do so. They should validate the content against other sources before passing it on to anyone else or viewing it as a "fact" about reality, rather than as something that might be fiction, or as no more a "fact" than the speculative "opinion" about events someone might generate out of thin air without claiming it as a "fact."
RealityEngineer,
Your analogies are just inapt. If a gun or its ammo is badly designed or manufactured, and it backfires and kills the user, the designer or manufacturer can be sued and held liable. That is because the gun or ammo itself is the source of the injury, not the user.
The issue with ChatGPT or any AI being held accountable for libel is: is the AI accountable for what happened? In many cases of "hallucination," it is clearly the AI that created the hallucination.
Did the user use it? Sure. Did they use it voluntarily? Sure. Might they be aware that sometimes something might go wrong? Sure.
But if the injury caused by the tool was due to poor design or manufacture, that liability can fall on the designer or engineer. This isn't news to engineers.
They aren't remotely inapt; you are missing the point. The analogy was to guns that function as designed, and to progressives who wish to go after gun manufacturers for the crimes that the users of their guns commit. They wish to pretend the agency isn't with the users of those tools.
re: "But if the injury caused by the tool was due to poor design or manufacture, that liability can fall on the designer or engineer. This isn’t news to engineers."
It also isn't relevant, since that isn't the issue at hand, except to those who apparently know nothing about AI and think there is "poor design" involved in these things, rather than the reality that what they do is truly remarkable, state-of-the-art engineering.
They are intended to be useful, not perfect. It seems like some people have delusional ideas about what is possible or required for a useful tool. They might consider the reality that there are AIs from two big tech companies with vast resources, in addition to the AIs from others, that suffer from hallucinations. This isn't "poor design"; it's the reality that these are useful tools that merely don't do everything someone would ideally wish them to do, like be 100% accurate. They can do useful good even without being 100% accurate, provided they aren't blocked by people who don't wish to allow others the freedom to decide what tools to use. If you don't wish to use them, fine: but others are OK using tools that have imperfections but are useful.
You fire up ChatGPT and say "Tell me about David Nieporent." ChatGPT falsely says, "David Nieporent tried to overthrow the government on January 6, 2021."
Look! I've been defamed. You weren't the one who defamed me; you didn't say anything false. So what "agency" do you have in this process?
Contrary to that silly claim, anyone reading this page is well aware that they hallucinate and aren't guaranteed to generate facts. You are well aware that you were reading something fictional, and hence not libel. By pretending that people are too dense to grasp that it isn't a "fact," you are depriving them of the agency to say: sure, give me things that may or may not be facts, and I'll decide for myself.
Correct. That's not the way libel works.
That I know that some of the things Fox News (or CNN) reports aren't true does not mean that Fox News (or CNN) is free to report things that aren't true without being subject to liability. "Our viewers would like to decide for themselves whether we're telling the truth" is not a defense.
All of these posts seem to ignore that libel and defamation cases are routinely thrown out. Judges strain to find any reason to dismiss libel and defamation cases except when the judge views the defendant as some variety of the other.
So yeah, theoretically this might be a problem for these generators. Not in actual reality though, unless one of them gets somehow associated with the wrong kind of people.
Why is the allegedly "responsible" nature of the party making the statement of alleged fact necessary for (as opposed to merely a contributory factor to) the "newsworthiness" of the fact that the statement was made?
The fact that the NYT is mistakenly thought in some circles to be "responsible" merely increases the damage from its libels.
From the GPT4 Technical Report
In a comment above, EV said, "But say I had a device that I publicly touted as providing highly reliable answers." Do those words imply 100% reliable answers or only reasonably reliable answers? How does the word "highly" influence the meaning of that sentence? Has OpenAI publicly used the word "highly"? I've been through the entire OpenAI GPT4 Technical Report. All I see is that they say they went to great lengths to provide the best answers they could, and to minimize bad answers, but that some bad answers cannot be avoided with today's technology. I see nothing equivalent to "highly reliable." It would be up to the tort plaintiff to prove that the defendant made such extravagant claims.
To their credit, both ChatGPT and Google Bard are great listeners.
Patient and apologetic too! It will just apologize over and over (and continue to make up stuff!!)
All the analogies to publishers and bookstores and guns and search engines are like feeling an elephant with your eyes closed. All of them capture something relevant but to make the best decisions we have to start by recognizing that LLMs are that great rarity, something that is actually new and different.
AI Law will be the next big thing. It creates all types of issues aside from libel. If AI is used by human resources departments to determine who gets hired, promoted, fired, or gets bonuses, can the company be sued for discrimination? What if AI is used by banks and lenders to determine who gets mortgages and at what rate? What if AI is used for criminal prosecutions and charging? Suppose hospitals and doctors use AI to diagnose and treat patients? Is there medical malpractice if AI gets it wrong? I've always been a believer that technology is great, but the output is only as good as the input. If you put garbage inputs into a computer system, you will get garbage outputs. One only needs to look at computers that have driven planes into the ground because the computer was getting bad data.
Someone critiqued the idea of AI saving lives, regarding the issue of balancing pros vs. cons. This is a Twitter thread about a dog rather than a human, but it illustrates the point of utility despite hallucinations:
https://twitter.com/peakcooper/status/1639716822680236032
"#GPT4 saved my dog's life.
After my dog got diagnosed with a tick-borne disease, the vet started her on the proper treatment, and despite a serious anemia, her condition seemed to be improving relatively well.
After a few days however, things took a turn for the worse 1/"
More importantly, however, the use of AI at its current level of functionality leads to funding to improve it and apply it to more areas. The usage of the AI gives human feedback for the systems to be better trained and improved. The tech that's advanced may also be applicable to other areas like medical research, or the secondary aspects of bringing that research to market in terms of production, logistics, etc. Wharton prof Ethan Mollick has been touting the utility of AI in many areas of business, noting it passes a McKinsey test.
As I posted on the prior page:
https://jamesclear.com/all-models-are-wrong
“In 1976, a British statistician named George Box wrote the famous line, “All models are wrong, some are useful.”
His point was that we should focus more on whether something can be applied to everyday life in a useful manner rather than debating endlessly if an answer is correct in all cases.”
Even an AI that hallucinates can be useful.
Humans already spread much misinformation around the net, like conspiracy theories or rumors that get distorted à la the telephone game, not realizing it's misinformation.
I'd suggest the real societal concern regarding computer-generated false content isn't what's addressed by that draft, but computer amplification of that existing problem. The issue is false content more easily generated, usually intentionally, whether better-written fake news or fake photos whose original creator is unknown, spread by people thinking it's true (or who received it from another human and didn't question whether it's real).
At least when people are using a chatbot, it's a new phenomenon, and from the start there is talk about hallucinations; people using something new can be educated to the reality that what they are shown isn't guaranteed to be fact (even if seemingly some wish to pretend that's impossible to do, that people can't possibly be taught the difference between "fact," "fiction," and "possibly fact or fiction").
In contrast, spreading around rumors and pictures that are unsourced is something people are used to, for better or worse. Rational people grasp the problem with that and the need to validate such things, but for whatever reason much of society doesn't have that habit well enough ingrained and often lets confirmation bias dictate what they trust (partly because they don't have time to question everything, so they just don't bother questioning certain things that are low priority for them to evaluate). The issue is that this requires people to change their habits and learn to adapt to the greater odds that something they see is false if it's not attributed to some reliable source that backs it up.
That seems a more difficult practical concern than a new phenomenon that arises purely when people go to a chatbot, where they can be told it may be fiction: chatbots are often used to generate fiction, which drives that ability home. Yes, those chatbots may aid the others who spread unsourced content, but these are different, even if related, societal problems.
In general, the issue is that society needs to be educated to be more skeptical of content that isn't backed by a reliable source, regardless of whether that content is something passed around the net or something from a chatbot. That skill is going to be crucial, and a chatbot is the easy case. Having chatbots exist provides a learning opportunity for society, encouraging people to better ingrain the lesson that even factual-sounding information can be false. We are going to need people to apply that to the other information they get outside of chatbots.
The reality is there will be lots of misinformation. The lesson is that it should only be viewed as "real" if it is claimed as such by some human source that takes responsibility for claiming they believe it. If no human claims it, it should be taken as possibly fiction, and hence not libel, since it shouldn't be viewed as "fact."