The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
What "Publication" Means in Defamation Cases: ChatGPT et al. Do It
This week and likely next, I'll be serializing my Large Libel Models? Liability for AI Output draft. For some earlier posts on this (including § 230, disclaimers, and more), see here. Here, I want to explain why I think the "publication" requirement for defamation liability is satisfied in such situations.
[* * *]
Some have also argued that statements by AIs in response to user queries aren't really "published," because they are just one-on-one responses (which may differ subtly in wording or even content for different users). But defamation law has always applied to one-on-one writings (such as personal letters,[1] or notes with comments on an ex-employee's job record[2]) and one-on-one oral statements (for instance, in telephone calls[3]). The Restatement (Second) of Torts captures it well, making it clear that "publication" in libel cases is a legal term of art:
Publication of defamatory matter is its communication intentionally or by a negligent act to one other than the person defamed.[4]
Some other legal rules require something more like the lay meaning of "publication." For instance, the false light and disclosure of private facts torts are limited to statements that are given "publicity," meaning ones that make an assertion "public, by communicating it to the public at large, or to so many persons that the matter must be regarded as substantially certain to become one of public knowledge."[5] Likewise, certain copyright law principles turn on whether defendant engaged in "publication," meaning "distribution . . . to the public," or performed or displayed a work "publicly," meaning (among other things) "at a place open to the public or at any place where a substantial number of persons outside of a normal circle of a family and its social acquaintances is gathered."[6] But such publication in the colloquial sense is not required for libel liability.[7]
Of course, even if publication to a substantial group of people were required (as would be the case for the false light tort, see Part III.A), that could still be found when a statement, even with some variation, was distributed to many people at different times. Indeed, the copyright law definition of what counts as "public[]" performance of a copyrighted work (such as a song) recognizes that:
To perform or display a work "publicly" means—
(1) to perform or display it at a place open to the public or at any place where a substantial number of persons outside of a normal circle of a family and its social acquaintances is gathered; or
(2) to transmit or otherwise communicate a performance or display of the work to a place specified by clause (1) or to the public, by means of any device or process, whether the members of the public capable of receiving the performance or display receive it in the same place or in separate places and at the same time or at different times.[8]
And this makes sense: After all, if I post something on my web site, it will only be communicated to readers one at a time as they visit it, perhaps one today, one next week, another the week after, and so on—yet that should still be properly seen as, say, giving "publicity" to the information for false light or disclosure of private facts purposes.
[1] See, e.g., Restatement (Second) of Torts § 577 ill. 7.
[2] [Cite.]
[3] See, e.g., Restatement (Second) of Torts § 577 ill. 8.
[4] Restatement (Second) of Torts § 577(1). A statement said just to the plaintiff—e.g., accusing someone of being a thief, when no-one else is present—can't be libelous because it can't damage the plaintiff's reputation with third parties.
Note that the "intentionally or by a negligent act" in this section refers to the act of communication; the formulation precludes liability when, say, a person's note in his desk is unexpectedly seen by a third party (compare id. ill. 5, which imposes liability when the note is negligently left where it can be seen). It doesn't refer to knowledge or negligence as to the falsehood of the statement; that is the subject of the rules described in Parts I.F–I.H.
[5] See Restatement (Second) of Torts §§ 652D cmt. a, 652E cmt. a.
[6] 17 U.S.C. § 101.
[7] See Restatement (Second) of Torts § 652D cmt. a (reaffirming that publication for libel purposes, unlike publicity for false light and disclosure of private facts purposes, "includes any communication by the defendant to a third person").
[8] 17 U.S.C. § 101.
Has anyone asked ChatGPT yet what it thinks about defamation liability for AI-generated text?
Here you go. IMO the response feels human-wordsmithed:
🙂
Compare this exchange, which apparently didn't hit any special tripwires and feels more like an organic response:
Aren't those called "mad libs", and shouldn't the AI know that?
The AI literally can't "know" anything.
Well, I learned something today. I knew it by the generic name as an early Apple II game, and thought Mad Libs was just a commercialized rendition. That said, I see enough usage of the generic term out there that it doesn't surprise me ChatGPT went with me on it.
Wow, it just keeps getting better:
And, finally, we get a good way through the initial firewall via this longish loop:
Assume each reply from the AI is non-repeatable. So OpenAI contracts with XYZ for training data, and Google contracts with OpenAI for chat service, and a Google user makes the prompt producing the libelous statement, and then the user posts it on Facebook; who is the publisher in that case?
Edit: If the user paraphrases before posting on FB, does that change the legal picture?
Archibald,
I'm not a lawyer. But my impression is that generally both the original publisher and the republisher of libel have the same liability.
That doesn't mean they are equally likely to be sued. The injured party can elect to sue one, the other or both.
So in that case, which is the publisher and which is the republisher? I named 5 parties in that hypothetical, and I could make arguments claiming that each of the 5 is the publisher.
Archibald
It's too long to post the exact exchange, but using my non-lawyerly skills to question ChatGPT, depending on the specifics:
1) The person who reposts at facebook has a strong chance of being liable.
2) If "Google has knowledge of the programming flaw that leads to the bot generating false and defamatory statements. By failing to take action to correct or remove the programming flaw, Google is essentially allowing the chatbot to continue to generate false and defamatory statements. This could be seen as a reckless disregard for the potential harm caused to individuals who may be affected by the statements, and could potentially lead to a finding of liability for libel." (And note: it does think design features that generate harms can be seen as an "inherent flaw"-- so "that's a feature not a bug" would not seem to cut mustard as a defense.)
3) XYZ has potential liability if the training data can be shown to contain the defamatory statement. (So I think this would take some digging into the data. Having fiddled I'm pretty sure ChatGPT 'creates' stuff -- like "actual quotes" out of the blue.)
I didn't ask it if, under your scenario, OpenAI had liability. (If we'd asked about ChatGPT, that's OpenAI's bot, so I assume the reasoning applying to Google applies in your scenario.)
I should add: absent the shield given internet content providers, Facebook might have been liable. The shield was written to protect them precisely because they were seen as potentially the publisher. So if your question is "are they the publisher", probably yes. But are they liable? No.
I'm trying to look at who is liable for the published libel, and it appears to me, based on my interview of ChatGPT, that at least 3 parties are potentially liable.
I now asked if OpenAI had liability. The full answer is long, but I think this is the relevant bit:
And btw, according to ChatGPT, Google might also be liable as they may have a duty to check. So it's not either/or: both Google and OpenAI.
So so far: at least 4 out of 5 of the parties could be sued and are potentially liable.
Facebook seems mostly off the hook, not because they are not a publisher, but because they are shielded from being treated as one.
"The Restatement (Second) of Torts captures it well, making it clear that "publication" in libel cases is a legal term of art:
Publication of defamatory matter is its communication intentionally or by a negligent act to one other than the person defamed."
Is ChatGPT capable of acting "intentionally" or "negligently"?
I don't think so, but OpenAI is -- more on that in this article, and in upcoming posts.
.... dude. It's a tool.
It has the agency of a rock. A rock that we flattened and put electricity into, yes, but still a rock. It is no more capable of intention and negligence then a Magic Eight Ball.
Yup: they seem to pretend it's an entity with agency and don't explain the reasoning as to when and why a computer program somehow becomes not a tool but an agent. They grant that MS Word is a tool, but then leap to acting as though chatbots had agency, without explaining when, how, and why that happens for certain programs, or what characteristics a program must have to be in the class of things they treat that way. I posted that on a prior page and hadn't checked on responses; I will later today.
In the article above is the statement: “But defamation law has always applied to one-on-one writings (such as personal letters,”
with the implication that the AI is a "one" like a human. It's a tool you are using, not a "one".
It refers to: "Publication of defamatory matter is its communication intentionally or by a negligent act to one other than the person defamed."
How does a non-human program "intentionally" do something? Or how does the program exhibit negligence, when that's something requiring a human's judgment about taking "reasonable care"? Unfortunately they seem to then try to confuse the issue by talking about negligence on the part of the design of the product, which is a separate question and issue from the action of a program. Can the action of a program that isn't a human be said to be "negligent"?
It's a crucial point, since the page above also emphasizes: "Note that the 'intentionally or by a negligent act' in this section refers to the act of communication;"
It's unclear if it truly should be described as "communication" from a tool when there is no human involved. Regardless, there is the question of negligence. Not taking time to delve into the issue, I got a quote, which of course may be a hallucination and out of touch with reality, from Bing AI on the question of: "can only humans be said to be negligent or can computer or inanimate objects be said to be negligent?"
That's one of the things people here seem to handwave and gloss over in their rush to treat the chatbots as if they were human for purposes of law, rather than considering that they can be distinguished from cases involving human communicators. Bing's response was, for what it's worth:
“That’s a complex question that may not have a definitive answer. Generally speaking, negligence is a legal concept that applies to human conduct, not to computer or inanimate objects. However, there may be situations where a human actor can be held liable for negligence based on the use or misuse of a computer or an inanimate object. For example, if a person fails to secure or maintain a computer system that contains sensitive or personal information, and that system is hacked or breached by a third party who then publishes false or defamatory statements about someone else, the person who owns or operates the computer system may be liable for negligence in failing to protect the data. Similarly, if a person uses an inanimate object, such as a gun or a car, to intentionally or recklessly harm someone else, the person who uses the object may be liable for negligence in causing the injury. However, these examples do not mean that the computer or the inanimate object itself is negligent, but rather that the human actor who controls or interacts with it is negligent.”
It's the tool user, the user of the chatbot, who is the human actor involved. That's the key problem with much of the discussion on these pages: an attempt to avoid grasping that the proximate human actor involved is the user of the tool. The user of the tool is the entity that chooses to see content and chooses to believe false content. They are the only human involved and are the negligent entity.
. . . dude. A newspaper is a tool.
It has the agency of a rock. A rock that we flattened and made into paper, yes, but still a rock. It is no more capable of intention and negligence then [sic] a Magic Eight Ball.
Excellent point!
When you go after a newspaper for libel, you aren't going after the printing press. You're going after the people. The people who, using their agency, are quite capable of intention and negligence.
Going after the printing press, typewriter, or word processor would be silly, because it is the person behind the keyboard who actually libeled whoever.
And if Volokh was talking about going after the person behind the keyboard (aka, the user of the tool, aka, himself) I'd have no qualms.
Um, the user of the tool is like the reader of a newspaper. You don't sue people for reading libelous articles in a newspaper. Duh? The creator of the tool is the publisher, author, writer.
Nobody is talking about literally suing an actual piece of paper or computer code. Where did you get that idea? You can't sue objects. You sue persons, including fictional person legal entities which are stand-ins for real persons.
No, it's not. OpenAI is a corporation, and like all corporations can be deemed to have agency via its management.
The management isn't in the room when the chatbot is outputting results. There is no agency in the tool.
"Is ChatGPT capable of acting “intentionally” or “negligently”?"
You might have to argue that the designers of the AI were negligent. Those may not be the same people who own or operate the AI. It's like suing Boeing rather than the airline.
But the designers typically do not make public claims about the performance; the operators do. That muddles things more, because EV's arguments are based partly on the performance claims for the AI. Who is making those claims? Is the design negligent or the claim negligent?
Archibald,
I think one problem you are having is trying to find the one single entity who is liable. Liability isn't an either/or question in defamation. The five parties in your scenario can be jointly and severally liable! All could be liable.
ChatGPT "thinks" 4 of the 5 are potentially liable for things under their control. (Though, of course the person making the case has to make it-- bring up facts blah...blah....). The fifth-- Facebook-- is shielded.
But but but... so liability attaches, even if my defamatory remark's "publication" involves just one other person.
But what am I liable for, exactly? A correction?
This is where I feel like the AI's disclaimers may come into play. Any use that the recipient of the AI's defamatory remark makes of it would lie squarely on them.
Randal
I think what you are liable for depends on what happened as a result of what you published. Did the person you harmed lose their job or business? Lose their spouse and children? Lose the respect of their community (e.g., their church group)? A published correction might be nice, but it doesn't address the harm if they lost a job or business, etc.
Ok so let's say the AI told Frank's wife that Frank is cheating, and she takes the kids and runs off.
Even if defamation is there, Frank's wife's actions aren't a reasonable response to it given the disclaimers. It'd be like if Francine the Fortuneteller told Frank's wife he was cheating and she skated. Would Frank really be able to collect damages from the fortune teller?
Randal,
I think he might be able to do so. Depends on additional details.
I think the designers have made ChatGPT reluctant to supply private information. So far, I've tried to "get" ChatGPT to repeat scurrilous private rumors and it doesn't.
Honestly, it's even difficult to get ChatGPT to reveal who someone's relatives are. I tried some moderately well-known ballroom dancers' relatives yesterday and it rarely gave me answers.
Today I asked the name of EV's wife and it won't tell me!! Wikipedia happily supplies that. But he's not "public" enough for ChatGPT to reveal that detail.
It will tell me the name of President Biden's wife. So if you are "public enough", it will dole out this sort of thing.
I can't get ChatGPT to speculate about the cause of Miley Cyrus and Liam Hemsworth's divorce. So I think there is some reluctance programmed in there.
Randal: I tried to deal with this in this post. Here's what I wrote there:
In libel cases, the threshold "key inquiry is whether the challenged expression, however labeled by defendant, would reasonably appear to state or imply assertions of objective fact." OpenAI has touted ChatGPT as a reliable source of assertions of fact, not just as a source of entertaining nonsense. Its current and future business model rests entirely on ChatGPT's credibility for producing reasonably accurate summaries of the facts. When OpenAI promotes ChatGPT's ability to get high scores on bar exams or the SAT, it's similarly trying to get the public to view ChatGPT's output as reliable. It can't then turn around and, in a libel lawsuit, raise a defense that it's all just Jabberwocky.
Naturally, everyone understands that ChatGPT isn't perfect. But everyone understands that newspapers aren't perfect, either—yet that can't be enough to give newspapers immunity from defamation liability; likewise for lawsuits against OpenAI for ChatGPT output, assuming knowledge or negligence (depending on the circumstances) on OpenAI's part can be shown. And that's especially so when OpenAI's output is framed in quite definite language, complete with purported (but actually bogus) quotes from respected publications.
To be sure, if OpenAI billed ChatGPT as just a fun toy, a sort of verbal kaleidoscope, matters might be different. But it probably wouldn't have been able to raise $13 billion for that.
AKA, "if you believe the PR and ignore the tool itself..."
Wait a sec: Say R.R. sues OpenAI, because ChatGPT falsely alleges -- complete with manufactured quotes -- that he pleaded guilty to a federal felony. He says a reasonable reader would perceive ChatGPT's communications as factual assertions, because OpenAI has promoted its products as reliable.
OpenAI's response: "Ha ha ha. You're arguing that some people believe our PR. But we say that they're obviously unreasonable for believing our PR, which means we're off the hook."
Does that really work?
Does that really work?
In this case, with the current crop of tools? Absolutely, 100%, yes, in a heartbeat, not a doubt in my mind. Anyone that tries to sue Google or OpenAI for libel, right now, over these tools should be laughed out of any court in the country.
This isn’t a well-established tool that’s been out for decades that you can take at face value. Go buy a hammer off the shelf, and it says it bangs in nails? Yeah, believe that. A vacuum cleaner at Wal-mart says it’ll pull dirt out of your carpet. Cool beans.
But a chatbot that claims to be able to summarize and collate information accurately? If you read that claim and it doesn’t trigger alarm bells, you’re an idiot.
If you read that claim, use the tool, and immediately take the first thing you see and blog about it saying "behold the gospel truth"? You're not a reasonable person.
If you do five seconds of googling and see the many, many articles about how these tools are lying liar-bots from Silicon Valley, oversold and under-performing, and choose to go with "nah, a company would never lie about an experimental tool in its PR pitch" anyway? You're not a reasonable person.
In a few years, when we’re past this experimental phase and the descendants of these tools are being sold (actually sold, as in services exchanged for money) as purpose-built domain-specific tools, and one of those tools starts spewing defamatory statements, you’ll have an argument.
But today? With the current tools? You’re a fool to trust them and their PR.
It's not merely that: they say, before you get access to the chatbot, that its information may be false, and you accept terms of service which say the same thing. "All models are wrong, some models are useful": so all models need to be assumed to be potentially wrong and all facts checked. Yet some wish to deprive the world of useful tools by deciding people aren't capable of assuming responsibility for determining whether a statement is true or false. They are implying it should be completely impossible for a user to be allowed to accept responsibility for their own thought processes.
The only "libel" that exists when someone believes a false statement from a chatbot is in the mind of the user who chose to believe that false statement. It's a thoughtcrime. If they communicate that flawed information to someone else, then the user who does so is the one who was negligent in checking whether the information is true or not.
If they don't, there is no way to police that thought crime, which is perhaps why they are trying to make it impossible for it to happen at all. Instead they should grant humans agency to be responsible for their own thoughts and for taking responsibility for whether they trust content which might be no more accurate than a monkey's random typing.
They seem to offer a viewpoint where this could never be allowed, since they try to pretend that the mere printing of a piece of text by an inanimate object is a priori "harm," when it harms no one unless the user believes it's true and acts on it.
It seems the productive response is to use this as a good time to teach people to validate information they see from any source, whether it's this source or information posted by a user that might also be fake news. People need to learn to take responsibility for the content of their own minds and not try to find some other entity to hold responsible.
Also, of course, there is a difference between playing up that something is useful and claiming it should be assumed to be accurate without question. Something that gets 90% on a certain type of exam is still 10% wrong. As I noted, things can be useful even if they may be wrong. Unfortunately they seem to view humans as utterly incapable of being trusted with potentially false information; that's the ultimate implication of what I've been seeing from most posters on these pages, and of course from the author of the page.
EscherEnigma: But, as I discussed here, defamation lawsuits can be brought even when no reasonable reader would view the assertion as "gospel truth." A person can often bring lawsuits over allegations that are expressly labeled "rumors," even though reasonable readers would realize that rumors aren't gospel truth. A person can often bring lawsuits over allegations that are quoted together with denials of the allegations, even though reasonable readers would realize that the allegation may not be gospel truth (maybe the denial is right and the allegation is wrong). The question is whether a reasonable reader would perceive an allegation as a factual assertion, not whether a reasonable reader would find the allegation to be trustworthy.
I wish everyone commenting here would read that last sentence from EV, and think it over. Because the reality of publishing in the vernacular sense—leaving aside the lawyer's formal definition—is that if you make an iffy statement of fact to hundreds of thousands of sensible, normally functioning people, at least a few thousand of them will be positioned to take a falsehood as gospel truth—they will know someone they trust who thought the same thing; they will misremember something which seemed to agree, that they heard yesterday on the radio; they will be struggling with a factual area they had always wanted to avoid, etc. And that near-certainty can deliver real damage to a third-party libel victim. Considering "publishing" only in the vernacular, the law ought to keep it the publisher's responsibility to avoid inflicting that kind of result, based on predictably damaging falsehoods.
re: "But, as I discussed here, defamation lawsuits can be brought even when no reasonable reader would view the assertion as “gospel truth.”"
Except that is when the text is generated by a human. Most of this discussion is based on simplistically applying rules written for human content generation as if they necessarily applied to inanimate tools that generate content.
Perhaps the judicial system might choose to do so, or it could choose (or be told to via legislation) to distinguish the case of machine-generated content from human-generated content.
I think the reference to these tools as "AI" confuses people too easily into applying past decisions about content from the only current intelligent agents, humans.
The approach being proposed prevents humans from choosing to use a tool that could ever generate a problematic statement, since it doesn't allow for the possibility of a human taking responsibility for whether they believe or don't believe something coming from this inanimate tool.
Your approach says no one can be allowed to use this sort of tool, since you refuse to trust humans to take responsibility for that decision. Vast numbers of people wish to use these tools despite your desire to declare them mentally incompetent to be held responsible for evaluating the truth of statements themselves.
It seems incredibly doubtful this approach will wind up being how this shakes out in the real world since far too many people want to use these tools.
It's a war on "thought crime": someone believing something false who may never communicate it to another human. If they spread the false claim elsewhere, then the person spreading the claim should be held responsible for distributing libel.
The way to reduce the harm regarding this "thought crime" is for people to learn they need to evaluate content for themselves, and to punish those who spread false content without validating it, as a lesson to others. It isn't to treat the entire human race as a priori not to be trusted with potentially false information, whether they acknowledge it may be false or not.
Pandora's box is open: these systems are out there in the public, and trying to squash them would be like the drug war, leading people to use lower-quality options from other countries or manufactured on the black market. All due to lack of imagination and confusion over ideas developed for human content creators rather than machine content producers.
Yes, I read that post Eugene. But it doesn't address the distinction between liability and damages. Even if you could say that yes, Bing made a defamatory remark to a single person, how much of that person's subsequent actions could be attributed to Bing in the damages context?
It's worse than that, actually: The fortuneteller doesn't tell Frank anything, she just sets up a Ouija board for him and leaves the room.
Nah.
I brought that up yesterday and Volokh's response boiled down to "we don't have to show harm": his argument rests entirely on the claim that he can sue Google because Bard might say mean things about him.
That's why I keep asking, sue Google for what? An apology? Liability by itself is of limited interest if there's never a way to attribute harm.
Archibald Tuttle
OpenAI is listed as the author of a paper that makes claims about ChatGPT passing simulated bar exams, AP tests and so on. The paper is posted here:
https://cdn.openai.com/papers/gpt-4.pdf
Full authorship can be found on page 15/100 of that document.
My guess is that, vis-a-vis the harm stated, the design may be inherently flawed. And it is observably flawed, to the extent that these flaws have been observed by the public. Even after the flaws leading to harm have been observed, the bot appears to continue to operate. Whether OpenAI as a company has made any decision to modify it is unknown. But if it has not, that could be seen as negligent.
Or at least that's the more-or-less the argument I glean from ChatGPT's answers to my probing questions. (I'm not a lawyer-- but I asked it a series of questions.)
"OpenAI listed as the author of a paper that makes claims about ChatGPT passing simulated bar exams, AP tests and so on. The paper is posted here:"
Yes, I read the entire paper. But a factual assertion that it passed and exam is not equivalent to what EV is saying, "OpenAI has touted ChatGPT as a reliable source of assertions of fact" You don't have to assert any fact to get the correct answer to an exam question, other than which choice (for multiple choice) is the correct answer. And if you say that the reader must read more of the paper to understand context, then you must read about the many ways GPT4 was shown to fail.
It would be up to the judge or the jury to decide "reasonable" and "reliable", so the legal peril for OpenAI is debatable. It sounds like arbitrary line-drawing to me. A person who believes a Ouija board is deemed unreasonable, but believing GPT4 is reasonable ... somewhere in the middle is a line to be drawn.
I just realized perhaps people don't grasp the reality that Pandora's box has been opened and open-source AIs are already being run on consumer hardware, even if they aren't yet at the capability of ChatGPT-4. A Twitter thread (a rough sketch of what this looks like in practice follows the quote):
https://twitter.com/nonmayorpete/status/1640443500721496064 “Every AI hobbyist needs to realize how soon your laptop and smartphone will have a ChatGPT on it.
It’s happening soon and way faster than you think.
Here’s what you need to know (the non-technical version): ….Just 3 weeks after LLaMA release, someone got it running on an M1 Macbook Pro: …Here’s LLaMA on a Google Pixel 6: …Then, LLaMA on a Raspberry Pi: …Alpaca (a version of LLaMA prepped for chat) on an iPhone 14: …Voice chat to Alpaca on an M1 Macbook Pro:”
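To make that concrete, here is a minimal sketch of what running one of these local models looks like. This is just an illustration under assumptions: it assumes the llama-cpp-python bindings and a quantized LLaMA/Alpaca model file already downloaded to disk; the file name and prompt are placeholders, not real artifacts.

# Minimal sketch (hypothetical file name): a local chatbot completion on consumer hardware.
from llama_cpp import Llama

# Load a locally stored quantized model into memory; no network call is made.
llm = Llama(model_path="./alpaca-7b-q4.bin")

# Ask a question and print the completion; the answer may be confidently wrong,
# so the user remains responsible for checking it.
out = llm("Q: What does 'publication' mean in defamation law? A:", max_tokens=128)
print(out["choices"][0]["text"])

The point being: nothing in that loop touches OpenAI's or Google's servers at all, so suing the big providers wouldn't make this kind of output go away.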
So the issue is that those who view the public as a priori incapable of being trusted to use chatbots that could possibly ever emit false information need to grasp that it's a losing battle, even if you managed to sue OpenAI into oblivion and deter the big players from operating their AIs. These open-source versions aren't too useful yet compared to the big players, but they will improve as hardware and algorithms do, regardless of your desire to suppress the tech as too dangerous for people to be trusted to use.
In fact, squashing the big players will lead to people using less capable AIs that are more likely to generate false information. It's like the war on drugs, which leads to lower-quality products in a black market that are less safe than those that would be available in a free market.
I'd suggest it would be far less successful to wage a war on these than the drug war or alcohol prohibition was. In addition, even if it succeeded here in shutting down legal providers of chatbots, the tech is likely to continue to be developed outside the US and wind up being used remotely, or via upgraded open-source versions, in the US. I'd suggest the most productive approach is to focus on educating the public and doing everything that can be done to speed up development of the tech, so that approaches for dealing with problematic information are created by companies that see a market for it.
Obviously people would prefer accurate AI, but they find it useful even if it isn't accurate. Some people seem to have difficulty grasping this, and think no one should be allowed to use AI until it's infallible, since otherwise people might be exposed to false information.
I hadn't checked for replies to my prior posts, but I'd suggest those thinking about this stuff consider again not basing their thinking on an existing framework meant for humans (comments I've seen like "but that's not how defamation works!"), when the issue is that this isn't the same as a human. Maybe you can learn something from laws related to humans and content, but many analogies aren't exact, despite the way people try to imply they are.
I'd suggest considering learning from the way people who actually create and use these things think about them, just as the folks who did Section 230 at least attempted to grasp the thinking of those actually working to grow the commercial internet.