The Volokh Conspiracy
Communications Can Be Defamatory Even If Readers Realize There's a Considerable Risk of Error
And AI programs' "tendency [to, among other things, produce untruthful content] can be particularly harmful as models become increasingly convincing and believable, leading to overreliance on them by users. Counterintuitively, hallucinations can become more dangerous as models become more truthful, as users build trust in the model when it provides truthful information in areas where they have some familiarity."
Various commenters have suggested that AI programs' output can't be defamatory because reasonable readers wouldn't view the statements as "100% reliable" or "gospel truth" or the like. Others have taken the more modest position that reasonable readers would at least recognize that there's a significant risk of error (especially given AI programs' disclaimers that note such a risk). And our own Orin Kerr has suggested that "no one who tries ChatGPT could think its output is factually accurate," so I take it he'd estimate the risk of error as very high.
But, as I've noted before, defamation law routinely imposes liability for communicating assertions even when there is a clear indication that the assertion may well be false.
For instance, "when a person repeats a slanderous charge, even though identifying the source or indicating it is merely a rumor, this constitutes republication and has the same effect as the original publication of the slander." When speakers identify something as rumor, they are implicitly saying "this may be inaccurate"—but that doesn't get them off the hook.
Indeed, according to the Restatement (Second) of Torts, "the republisher of either a libel or a slander is subject to liability even though he expressly states that he does not believe the statement that he repeats to be true." It's even more clear that a disclaimer that the statement merely may be inaccurate can't prevent liability.
Likewise, say that you present both an accusation and the response to the accusation. By doing that, you're making clear that the accusation "may [be] inaccurate."
Yet that doesn't stop you from being liable for repeating the accusation. (There are some narrow privileges that defamation law has developed to free people to repeat certain kinds of possibly erroneous content without risk of liability, in particular contexts where such repetition is seen as especially necessary. But those privileges are needed precisely because otherwise presenting both an accusation and a response is actionable.)
And this is especially so because of what OpenAI itself notes in its GPT-4 Technical Report:
This tendency [to, among other things, produce untruthful content] can be particularly harmful as models become increasingly convincing and believable, leading to overreliance on them by users. Counterintuitively, hallucinations can become more dangerous as models become more truthful, as users build trust in the model when it provides truthful information in areas where they have some familiarity.
Couple that with OpenAI's promotion of GPT-4's successes in reliably performing on various benchmarks—bar exams, SATs, etc.—and it seems likely that reasonable readers will perceive GPT-4 (and especially future, even more advanced, versions) as generally fairly reliable. They wouldn't view it as perfectly reliable, but, again, rumors are famously not perfectly reliable, yet people do sometimes act based on them, and repeating rumors can indeed lead to defamation lawsuits. They may certainly view it as more reliable than a Ouija board, a monkey on a typewriter, a fortune-teller, or the various other analogies that I've heard proposed (more on those here). And one can be a reasonable reader even if one doesn't have much understanding of how these AIs work, or even if one doesn't have much experience with testing the AIs to see how often they err.
So, yes, when an AI program generates and communicates statements about how someone was found guilty of tax fraud, accused of harassment, and so on—and includes completely bogus quotes, though supposedly from real and prominent media outlets—there is a significant legal basis for treating those statements as defamatory, and the AI company as potentially liable for that defamation.
I spread a rumor identifying it as such and provable harm results. When the law says my gossip is "subject to liability" does that mean the plaintiff wins on summary judgment if nothing else is in dispute? Or does it mean that I am not entitled to summary judgment and I can still make the argument to the jury?
In all these posts, I'm generally discussing what makes for a viable legal claim. There may of course be factual disputes about what was said, what damage was caused, how a reasonable person would perceive matters, and so on; so when there's a viable legal claim, that generally just means that there's enough to go to the jury. But in some situations, a judge could hold that no reasonable jury could rule against the plaintiff on one or another issue, and might thus grant partial summary judgment (or perhaps even total summary judgment). Today's Dominion Voting v. Fox case, for instance, involved a judge granting Dominion Voting partial summary judgment as to publication, falsehood, and defamation per se, but sending the case to the jury as to Fox's mental state.
Professor Volokh,
Perhaps you might consider historical/sociological evidence, such as the early film (late 19th century) depicting a steam locomotive coming toward the camera, which reportedly caused audiences to run out of the theater in panic.
When people not used to a technology are exposed to something that seems real, particularly in a situation where the very function of the technology is to make it seem real, they will tend to act as if it’s real at an instinctive level, even if they are told it’s not real.
I think that Section 230 should be modified to give AIs immunity similar in effect to the immunity enjoyed by Twitter, FB and Google.
Why? There are already too many "fences" around the AI to prevent output that some feel might be harmful. If EV's arguments prevail, they might need another fence that prevents any sentence that grammatically might be defamatory against any person, real or fictional.
An AI arms race is already inevitable; if we lose the race, our society may not survive.
Perhaps liability could be avoided by adding the disclaimer [may be false] after each and every sentence produced by the AI; perhaps after every word. That would be so tedious that third-party tools would soon arise to edit out the disclaimers. Who becomes the defendant then?
An AI arms race is already inevitable; if we lose the race, our society may not survive.
Tuttle, I'm not worried. If the price of losing an AI arms race to the nation Technophilia is that Technophilia goes first on a program to befuddle its own discourse with fake crap, then I raise my glass and drink a toast: "Confusion to Technophilia."
The danger to look out for would be your suggestion to protect with Section 230 an essentially infinite supply of fake crap for this nation.
re: "when a person repeats a slanderous charge, even though identifying the source or indicating it is merely a rumor, this constitutes republication and has the same effect as the original publication of the slander."
That is about a “person”. An AI isn’t a person. The user is a person: so the user should be found liable if they negligently don’t validate the content from a chatbot and spread it.
That is defamation law regarding content created by a human. An AI is not a human. You hand-wave away the indirection from the tool to its creators as if that didn't change the logic. It's like the South Park underwear gnomes meme re: profit that I posted before: you skip over a crucial step without seeming to grasp you are doing it. Perhaps such fuzzy reasoning flies in the legal world, but in the world of AI folks such logic problems are spotted, so it's going to be noticed.
” But, as I’ve noted before, defamation law routinely imposes liability for communicating assertions even when there is a clear indication that the assertion may well be false.”
Again: that is re: humans, and applies to the user of these tools.
re: “Indeed, according to the Restatement (Second) of Torts, “the republisher of either a libel or a slander is subject to liability even though he expressly states that he does not believe the statement that he repeats to be true.” It’s even more clear that a disclaimer that the statement merely may be inaccurate can’t prevent liability.”
Yup: it makes sense that the user who republishes problematic content from a chatbot should be found liable.
re: “there is a significant legal basis for treating those statements as defamatory,”
So if the user repeats them, then they are potentially going to be found liable.
re: ” and the AI company as potentially liable for that defamation.”
That is where you pull a bait and switch, since prior to this you were referring to content generated by a tool. The user was using a tool that displayed false information; the people from OpenAI are no more in the mind of that user, or in the loop, than the creators of a search engine that displays an excerpt from a web page that is considered libel. Section 230 codified the reality that it didn't make sense to hold humans who weren't responsible for reviewing content liable for the problematic text their tool produced. They couldn't review each piece of text their tool creates for a user who enters a specific prompt.
If you type false information about someone else into MS Word and read it: do you blame Microsoft for what you see on the screen? You used a tool and it created false information. Yes: this tool, like a search engine, provides content the user may not predict when they used it and gave it a prompt. The output in both cases may be content that would be considered libel if the user repeated it.
The user is the only one who can take the action to believe a particular piece of content that is false when it comes from a search engine. It is their choice whether or not to commit that “thought crime” of libel, or to commit libel by spreading content to other humans that they didn’t validate.
In this case it's the same: a user is the only human involved in making the choice to generate and then believe content output by a tool. There is no other human in the room, just as there isn't with a search engine. So the user should be considered negligent if they don't validate it before repeating it.
The difference with a search engine is that there is some human who created the web page it pulled the excerpt from, so there is another human who can be blamed. It seems the absence of any human to blame other than the user is what is driving this. But just because there isn't another human involved, that doesn't magically mean it's OK to pull in humans who weren't involved in reviewing that content. No one from an AI vendor reviewed that content.
It seems likely it's a bias towards wanting to find some reason to involve the legal system, or to go after an entity more likely to have deep pockets than the user. Otherwise it's unclear why there is this need to avoid blaming the user for their choice to believe something or not.
Design negligence is a separate issue (some seem to just leap there as if it were interchangeable, another gap in the logic). Again: there are cars that could be designed to cost far more, or to use technology that doesn't exist, to cut down on harm done to third parties hurt in accidents. Yet that isn't considered design negligence, since society allows cars that aren't 100% safe to third parties. People are allowed to use tools that may hurt third parties, but the creators of those tools (in this example, cars) aren't held responsible. There are limits to what counts as "design negligence," since a product doesn't need to be 100% perfect when perfection isn't practical or the goal, and users and society go along with that.
Merely because a product harms third parties doesn't a priori mean there is design negligence, despite many replies I've gotten that leap there without providing the slightest reasoning to back up their point. It's like dealing with folks in a first programming course struggling with basic logic. Maybe you are right: but if so, you need to provide the logic to back it up.
The program itself can't be committing libel since, unlike a user, it isn't human and can't be "negligent." Not having taken time to delve into the issue, I got a quote, which of course may be hallucination and out of touch with reality, from Bing AI on the question: "can only humans be said to be negligent or can computer or inanimate objects be said to be negligent?" Bing's response was, for what it's worth:
“That’s a complex question that may not have a definitive answer. Generally speaking, negligence is a legal concept that applies to human conduct, not to computer or inanimate objects. However, there may be situations where a human actor can be held liable for negligence based on the use or misuse of a computer or an inanimate object. For example, if a person fails to secure or maintain a computer system that contains sensitive or personal information, and that system is hacked or breached by a third party who then publishes false or defamatory statements about someone else, the person who owns or operates the computer system may be liable for negligence in failing to protect the data. Similarly, if a person uses an inanimate object, such as a gun or a car, to intentionally or recklessly harm someone else, the person who uses the object may be liable for negligence in causing the injury. However, these examples do not mean that the computer or the inanimate object itself is negligent, but rather that the human actor who controls or interacts with it is negligent.”
The user who controls the tool is the one who is negligent if they use the output of the tool without validating it.
Perhaps it will be decided that the tool should be treated like a human: but that requires an actual argument if anyone who has bothered to question whether rules about humans somehow apply to inanimate objects is to take it seriously. Just because it's called an AI doesn't mean it's truly an intelligent entity with agency capable of taking responsibility for something.
Again: if this tool is for some reason supposed to be treated differently than a tool like Microsoft Word or a search engine: then the reasons need to be given and the class of tools that are different specified in some way.
Rules written for humans may or may not be the best thing to apply to machines. The argument actually needs to be made rather than this constant avoidance or hand waving. Though it seems there are lots of reasons to deal with it differently, paralleling Section 230.
One of them being that society likely isn't going to let this scheme fly, even if it needs to pass laws like Section 230 to make clear that humans should have the right to use tools like this, even if it means acknowledging that the user takes responsibility for the consequences of how they think about the output they see.
Just as the user of a gun should be responsible for their use of it, the user of these tools should be responsible for their use of them. Unfortunately, like progressives who wish to take away people's ability to take responsibility for the use of tools like guns, and conservatives who wish to take away people's ability to take responsibility for the use of things like recreational drugs, you wish to remove the ability for users to ever take responsibility for their use of these tools. Grant humans agency and don't anthropomorphize the tools.
Whatever flaws humans have: they need to be treated as if they were capable of free will and responsible for the contents of their own minds. Yup: just as some will fail that test and misuse guns, some will in this case. That doesn’t mean we take away their ability to use these tools as if they were too incompetent to be allowed to do so until authoritarian experts say the tools are good enough for them to be allowed the freedom to use them.
They should have the First Amendment right to the speech these things generate, conceptually, even if that isn't specifically part of the legal argument at hand.
Actually, re: the point I made that "The difference with a search engine is that there is some human who created the web page it pulled the excerpt from, so there is another human who can be blamed": of course in some cases the author of a web page can't be identified, since there are many sites where anonymous content can be put up.
Like I said, a bot.
Just what I was thinking.
Are you a Stanford law student or something, perhaps one of the mindless horde that unproductively disrupted a judge speaking there rather than engaging in actual constructive discourse?
I know most attorneys aren't as rigorous in their thinking as many computer scientists, but I guess I hoped a site like this would bring out some actually capable of reasoned debate.
RealityEngineer, try this critique out, and see how you like it. You refer repeatedly and ambiguously to "the user," who is to take responsibility for the output of the tool. Do you accept or reject the notion that among those responsibilities must be responsibility for defamation damages inflicted on third parties?
With your answer to that question in mind—whatever it is—how are we to disambiguate "the user"? In a publishing industry context, that would not be the Joe Keyboards of the nation, although that seems to be who you have in mind.
Contrary to beliefs cherished by legions of internet fans, they are neither the users nor the customers of the social media companies. They are the raw materials. Their mobilized attention, after careful curation, becomes the product for sale.
Thus, the users of social media technology are the publishers themselves. They use the technology to facilitate the work of audience mobilization and curation. That is how they would use AI automata.
The customers of social media companies are their advertising clients. They pay for the privilege of accessing the mobilized attention of the publishers' audiences.
Is it your advocacy to hold social media companies—the "users," mind you—responsible for damages inflicted on third parties by defamatory AI output? Or is it your advocacy that, to get AI technology's users off the hook, the legal concept of defamation must be abolished? The indiscriminate way you have referred to "the user" allows for either interpretation. It can't be both, so which do you choose?
After you weigh in on that, let's take a look at who gets responsibility for any other publishing-related disruptions, such as election hoaxes, wire frauds, or deadly medical misinformation. Is it your advocacy that society must be remade without resort to every law, custom, and norm which might inconvenience AI technology's power to restructure everything? Or do you simply insist that humans lack agency to affect what happens, and so must live willy-nilly with consequences of technologies you regard as inevitable—and which you advocate?
I presume it is futile to persuade you to take the next step, and consider in any serious way what implications your technical dreams might have for the public life of the nation. But I will mention one deleterious effect which seems inevitable, given your advocacy. If, as you seem to insist, recipients of published internet information must accustom themselves to disbelieve all of it, the activity of news gathering will become irrational. Your prescription seems to be for a world in which no one knows anything, but everyone is free to believe everything.
Finally, if automata are to dominate the corpus of published information—while published information recipients must distrust everything—what mechanism can you suggest to engender public trust in the people who design and run the automata?
Professor Volokh, I think the overarching problem here is that you're missing the forest for the trees. You can pick apart each separate counter-argument (to varying degrees of success), but the courts are going to have to generate new doctrine for this kind of "mindless" (maybe, will-less) production of communications. Taking all the considerations together, I just don't see how courts land on holding the output of this machine technology defamatory.
tkamenick, actually EV seems to intuit that the overarching problem could be best handled by reinstating old doctrine—joint liability for defamation, shared alike by publishers, and authors or contributors. Not that I think EV will end up advocating that. He has seemed too committed to reinforcing the original Section 230 legal blunder to do any backtracking now. But if you want a quick, clean method to get the best out of AI text generation, with the least possible disruption, repeal of Section 230 is it.
Please note also, the threat to public life from AI text generation is just a forerunner of other similar threats to public discourse. Wait till deep-fake AI gets going to see problems you will really wish someone had headed off before they got started.
Something about this discussion reminds me of Henry Kissinger, musing late in his career that he wished he had given more thought to the implications of MIRV technology, before green lighting it for the U.S.
re: "for this kind of “mindless” (maybe, will-less) production of communications."
Yup, that is the key: an inanimate tool lacks any ability to be "negligent." Blindly applying rules meant to hold a human agent accountable for content they created to an inanimate tool would require some sort of well-reasoned justification. Instead the issue seems to be either ignored or hand-waved away as if it were inherently too obvious and axiomatic to require justification: but it does.
The issue of "design negligence" is a separate issue, but it requires far more careful reasoning than I've seen; all I've gotten is the implication that no tool that isn't 100% safe can be allowed to be used. Yet myriad tools exist in society that aren't 100% safe.
AI pioneer and head of AI at Meta, Yann LeCun tweeted this:
https://twitter.com/ylecun/status/1642528717099466753
"Are you talking about LLMs?
Are they useful?
Are they dangerous?
Is their usefulness overwhelmingly larger than the dangers?
(You know, like cars, airplanes, kitchen knives, gas stoves, smartphones....)"
I suspect part of the issue is the nature of these tools: they are thinking tools that help with imagining new things. A tool that can imagine useful things can also imagine problematic things, and simplistic attempts to lobotomize it to reduce its ability to imagine things reduces its utility.
It's not an easy problem, despite the assumptions of outside dilettantes that it must a priori be an obvious design flaw that the companies just didn't bother to spend a day figuring out how to fix.
Whether or not it's the entire reason, when Google first demonstrated Bard some weeks ago, before its release, an error in its output led Google's stock to drop by something like $100 billion in value. Do these folks seriously think companies aren't motivated to make these tools more accurate, and wouldn't have done so already if it were as trivial as they seem to imply?
It should be viewed as part of a human's brainstorming process, like writing things on a whiteboard or in a file of ideas. If a human writes something using their tool that only they see, should they be held liable for the "thought crime" of daring to think of something false? The users want a tool that might write something false. It's their fault if they choose to believe it.
There is now a cottage industry of people finding ways to produce open-source chatbots that can be run on hardware at home or on cheap cloud computing, including via VPN in other countries where the laws would be different, or in general where the provider doesn't monitor what programs are being run:
https://syncedreview.com/2023/03/29/colossalchat-an-open-source-solution-for-cloning-chatgpt-with-a-complete-rlhf-pipeline/
"ColossalChat, the most practical open-source project that closely resembles the original ChatGPT technical solution!"
https://vicuna.lmsys.org/
"Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality"
The claims of quality should be taken with a grain of salt for the moment. The point, however, is that Pandora's box is already open. Any attempt to use liability issues to make companies withdraw the tools until they are 100% accurate will merely lead to the adoption of lower-quality tools from overseas or homebrew. Perhaps fewer people will use the tools, but the tools will likely be even more prone to generating problematic information. It's again like the drug war: push things underground and people are likely to wind up with less safe results.
A Wharton prof who posts about ways to use these tools posted a link to a paper that surveys the work analyzing the types of hallucination problems in these LLMs:
https://twitter.com/emollick/status/1642536379178528768
"A useful chart listing some common ways in which generative AIs “hallucinate” or make up facts and information.
The paper suggests that ways of reducing summarization & translation hallucinations are well understood, but reducing them in chat is harder."
The paper is here, in case any of these non-AI experts wish to actually explore the issues involved in trying to prevent all these things before implying it's easy, something these companies supposedly should have spent a few thousand dollars doing but were somehow too cheap to do, as if they would rather risk losing billions due to a flawed product than bother thinking this stuff through in the way these lawyers think is so easy that it must be negligence not to:
https://arxiv.org/pdf/2202.03629.pdf
"Survey of Hallucination in Natural Language Generation
Natural Language Generation (NLG) has improved exponentially in recent years thanks to the development of sequence-to-sequence deep learning technologies such as Transformer-based language models. This advancement has led to more fluent and coherent NLG, leading to improved development in downstream tasks such as abstractive summarization, dialogue generation and data-to-text generation. However, it is also apparent that deep learning based generation is prone to hallucinate unintended text, which degrades the system performance and fails to meet user expectations in many real-world scenarios. To address this issue, many studies have been presented in measuring and mitigating hallucinated texts, but these have never been reviewed in a comprehensive manner before. In this survey, we thus provide a broad overview of the research progress and challenges in the hallucination problem in NLG. The survey is organized into two parts: (1) a general overview of metrics, mitigation methods, and future directions; and (2) an overview of task-specific research progress on hallucinations in the following downstream tasks, namely abstractive summarization, dialogue generation, generative question answering, data-to-text generation, machine translation, and visual-language generation. This survey serves to facilitate collaborative efforts among researchers in tackling the challenge of hallucinated texts in NLG."
Since some juvenile posters have accused me of being a bot (although their simplistic comments seem more like primitive bots), I had the actual chatbot Bing AI, in creative mode, critique the draft paper. I visited the paper and asked Bing in the sidebar to "Write a long critique of this page." Its unedited response, for what it's worth, is:
"The web page context is a draft of an article by Eugene Volokh, a law professor who argues that AI companies should be liable for defamation based on their program's output. The article discusses various legal issues and possible defenses that AI companies might face in libel lawsuits.
My critique of this page is as follows:
- The article is well-written and well-researched, but it makes some questionable assumptions and oversimplifies some complex problems. For example, the article assumes that AI programs' output can be easily categorized as factual or fictional, and that disclaimers or warnings can be ignored or dismissed by courts. However, as some other legal experts have pointed out, AI programs' output may not be perceived as factual assertions by reasonable readers, especially if they are aware of the nature and limitations of the technology. [1] [2] Moreover, disclaimers or warnings may have some effect on reducing the harm or the likelihood of confusion caused by AI programs' output, depending on how they are presented and communicated. [3]
- The article also fails to consider some alternative solutions or approaches that might balance the interests of AI companies and potential plaintiffs. For example, the article suggests that AI companies should implement a "notice-and-blocking" system, similar to the DMCA, to avoid liability for knowing or reckless falsehoods. However, this system may raise some practical and ethical challenges, such as how to verify the accuracy of notices, how to handle disputes or appeals, and how to protect the freedom of expression and innovation of AI companies and users. [4] [5] A possible alternative solution could be to establish a voluntary self-regulation mechanism or a third-party arbitration scheme for resolving libel claims involving AI programs' output, which could provide more flexibility and efficiency for both parties. [6]
- The article also neglects to address some important implications and consequences of imposing libel liability on AI companies. For example, the article does not discuss how such liability would affect the incentives and behavior of AI companies and users, and whether it would create a chilling effect on the development and use of AI programs. [7] [8] The article also does not consider how such liability would interact with other legal regimes or jurisdictions, especially in light of the global and transnational nature of AI programs and their output. [9] [10]
In conclusion, while the article raises some valid points and concerns about the potential harm caused by AI programs' output, it does not provide a comprehensive or convincing analysis of the legal issues and possible solutions. The article could be improved by addressing some of the gaps and challenges mentioned above, and by exploring some alternative perspectives and scenarios."
Oops, I posted this on an older page, so reposting here so it's not missed: a law review article. I only glanced at it, but wonder if perhaps it takes someone from the legal world to make certain concepts clear to other legal folks:
https://open.mitchellhamline.edu/cgi/viewcontent.cgi?article=1223&context=mhlr
“AI Entities as AI Agents: Artificial Intelligence Liability and the AI Respondeat Superior Analogy
…
In a relationship between an agent and its principal, the former is authorized to act on behalf of the latter for various purposes. This Article claims the human principal is responsible for the damages caused by its AI agent given the respondeat superior doctrine manifested via the nature of their relationship. …
Nonagency legal analogies are reduced to the basic problem of judgment proof agency, where an AI entity cannot be held liable and so a human guardian, keeper, custodian, or owner must be found liable instead in order to provide a remedy. …
Recovery is only possible via an AI agent's human principals because AI agents are effectively judgment proof. This is because these principals are the only entity the regulator can properly incentivize to prevent damages and to invest in achieving an optimal level of activity.”
Another law review article, this time from a UCLA assistant prof:
https://www.bu.edu/bulawreview/files/2020/09/SELBST.pdf
“NEGLIGENCE AND AI’S HUMAN USERS
…Decision-assistance tools are frequently used in contexts in which negligence law or negligence analogues operate, including medicine, financial advice, data security, and driving (in partially autonomous vehicles)…
If a doctor relies on a tool to help her decide to inject a drug or release a patient, we still analyze the case in malpractice despite a tool being involved; we expect the doctor to understand her tools enough to satisfy her duty of care while using them. The same goes for any other user in a context where negligence applies: if a driver cannot operate a car, we do not assume that the manufacturer is to blame….
This Article starts from the premises that AI today is primarily a tool and that, ideally, negligence law would continue to hold AI’s users to a duty of reasonable care even while using the new tool….
AI neither aims for nor can achieve perfect accuracy. As a result, the presence of errors does not imply a defective product required for a finding of products liability…
Moreover, in a normative sense, do we really want to simply tell the users and purchasers of complex machinery that they bear no liability for carelessness in its use?…
Where society decides that AI is too beneficial to set aside, we will likely need a new regulatory paradigm to compensate the victims of AI’s use, and it should be one divorced from the need to find fault.”
Again reposting what I accidentally posted on an older page:
From another source:
https://www.brookings.edu/research/products-liability-law-as-a-way-to-address-ai-harms/
“Risk-utility tests have long been employed in products liability lawsuits to evaluate whether an alleged design defect could have been mitigated “through the use of an alternative solution that would not have impaired the utility of the product or unnecessarily increased its cost.””
I'd suggest those from outside the AI world don't have a firm grasp on what would impair the utility of the product or unnecessarily increase its cost. That is part of the reason why cars are allowed to exist that cause harm to bystanders, contrary to the implications of some comments that products can't possibly be allowed to harm others or it must a priori be a design defect. (Some claim I misrepresent that point, but then they provide no actual argument for what it is they are claiming and how it's different.)
I visited the Andrew Selbst article above on negligence and asked Bing "Using the ideas of this page write a long critique of the ideas at https://www2.law.ucla.edu/volokh/ailibel.pdf"
For whatever it's worth (I won't assess it), this was the response:
"This is a long critique of the ideas in the link based on the web page context and the link:
The link is an article by Eugene Volokh, titled "Tort Liability and Artificial Intelligence". The article explores how tort law can handle injuries caused by AI, especially autonomous vehicles. The article argues that tort law can adapt to AI without major changes, and that strict liability is not necessary or desirable for AI. The article also discusses some possible defenses and challenges for AI tort cases.
The article's main argument is that negligence law can adequately deal with AI injuries, because negligence law is flexible and adaptable to new technologies. The article claims that negligence law can handle AI errors by applying the same standards of reasonable care and foreseeability that it applies to human errors. The article also argues that products liability law can handle AI defects by applying the same tests of risk-utility and consumer expectations that it applies to other products.
However, this argument is not convincing, because it overlooks the unique challenges that AI poses for negligence law, as explained by Andrew D. Selbst in his article "Negligence and AI's Human Users". Selbst argues that AI disrupts the typical understanding of responsibility for choices gone wrong, by introducing four complications: 1) the inability to predict and account for AI errors; 2) physical or cognitive capacity limitations at the interface where humans interact with AI; 3) the introduction of AI-specific software vulnerabilities into decisions not previously mediated by software; and 4) distributional concerns based on AI's statistical nature and potential for bias.
These complications make it difficult to apply negligence law to AI injuries, because they undermine the assumptions and principles of negligence law. For example, negligence law assumes that errors are foreseeable and avoidable by reasonable care, but AI errors may be unforeseeable and unavoidable due to the complexity and opacity of AI systems. Negligence law also assumes that humans have a certain level of physical and mental abilities to interact with technology, but AI may exceed or impair those abilities due to its speed, accuracy, or unpredictability. Negligence law also assumes that software does not introduce new risks or vulnerabilities into decisions, but AI may do so due to its self-learning or adaptive nature. Negligence law also assumes that injuries are distributed fairly and proportionally, but AI may cause uneven or discriminatory outcomes due to its statistical or biased reasoning.
These complications also make it difficult to apply products liability law to AI defects, because they challenge the tests of risk-utility and consumer expectations. For example, products liability law assumes that a reasonable alternative design exists for a defective product, but AI may not have such an alternative due to its dynamic or emergent behavior. Products liability law also assumes that consumers have a certain level of expectations about a product's safety, but AI may not meet those expectations due to its unintuitive or secret logic.
Therefore, Volokh's argument that tort law can adapt to AI without major changes is not persuasive, because it ignores the distinctive features and problems of AI that require new solutions. Volokh's argument also fails to address the normative implications of shifting liability from users to manufacturers, which may affect the goals and values of tort law, such as corrective justice, civil recourse, deterrence, or compensation. Volokh's argument also neglects the practical difficulties of proving causation, fault, or damages in AI tort cases, which may require access to proprietary information, expert testimony, or statistical evidence.
In conclusion, Volokh's article does not provide a satisfactory account of how tort law can handle injuries caused by AI, especially autonomous vehicles. The article underestimates the challenges that AI poses for negligence law and products liability law, and does not offer any concrete proposals or recommendations for addressing them. The article also overlooks the normative and practical issues that arise from applying tort law to AI. A more comprehensive and convincing analysis of tort liability and artificial intelligence would need to take into account the arguments and insights of Selbst's article, as well as other relevant sources and perspectives."
I asked the same about the Anat Lior paper and it droned on repetitively (and I think it was a little confused, perhaps too much to fit into its context window) so I asked for a summary:
“The summary is derived from the previous responses.
The critique is a response to an article that argues that AI entities should be treated as products for the purpose of tort liability, and that the existing doctrines of products liability, negligence, and contract law can adequately address the harms caused by AI entities. The critique defends the agency analogy for AI liability, which proposes that AI entities should be treated as AI agents of human principals, who should be held strictly liable under the doctrine of respondeat superior. The critique shows that the agency analogy is based on a functional and instrumental view of AI entities, not on a false premise of free will or moral agency. The critique also shows that the agency analogy creates efficient incentives for human principals to ensure safety, innovation, and social welfare for AI entities. The critique also shows that the products liability, negligence, and contract law doctrines are insufficient and inadequate to address the harms caused by AI entities, especially when they involve complex, autonomous, unpredictable, and learning systems.”
Its full critique, for whatever it's worth:
“The critique is derived from the webpage context and the web search results.
The article at https://www2.law.ucla.edu/volokh/ailibel.pdf argues that AI entities should be treated as products for the purpose of tort liability, and that the existing doctrines of products liability, negligence, and contract law can adequately address the harms caused by AI entities. The article rejects the agency analogy for AI liability, claiming that it is based on a false premise that AI entities have free will or moral agency, and that it would create perverse incentives for human principals to avoid responsibility or control over their AI agents.
This critique will challenge the article’s main arguments and defend the agency analogy for AI liability, as proposed by Anat Lior in her article “AI Entities as AI Agents: Artificial Intelligence Liability and the AI Respondeat Superior Analogy”. The critique will show that the agency analogy is not based on a false premise, but rather on a functional and instrumental view of AI entities as tools or instruments of human principals. The critique will also show that the agency analogy does not create perverse incentives, but rather aligns the incentives of human principals with the social welfare and optimal level of activity for AI entities. The critique will also show that the products liability, negligence, and contract law doctrines are insufficient and inadequate to address the harms caused by AI entities, especially when they involve complex, autonomous, unpredictable, and learning systems.
The first argument of the article is that the agency analogy is based on a false premise that AI entities have free will or moral agency, and that this premise is contradicted by the scientific and philosophical consensus that AI entities are deterministic machines that operate according to their programming and inputs. The article claims that this premise is necessary to justify the application of the respondeat superior doctrine, which holds human principals liable for the torts committed by their human agents within the scope of their employment. The article argues that this doctrine is based on a moral theory of imputation, which attributes the acts and intentions of human agents to their human principals, based on their mutual consent and benefit. The article contends that this moral theory of imputation cannot apply to AI entities, because they lack free will or moral agency, and therefore their acts and intentions cannot be attributed to their human principals.
This argument is flawed for several reasons. First, it misunderstands the basis and purpose of the agency analogy for AI liability. The agency analogy is not based on a moral theory of imputation, but rather on a functional and instrumental view of AI entities as tools or instruments of human principals. The agency analogy does not attribute free will or moral agency to AI entities, but rather recognizes them as judgment-proof agents that act on behalf of their human principals for various purposes. The agency analogy does not rely on the consent or benefit of AI entities, but rather on the control or influence of human principals over their creation, design, deployment, operation, maintenance, and termination. The agency analogy does not impute the acts and intentions of AI entities to their human principals, but rather holds them strictly liable for the harms caused by their AI agents within the scope of their activity.
Second, it misrepresents the scientific and philosophical consensus on AI entities. The article assumes that AI entities are deterministic machines that operate according to their programming and inputs, but this assumption is outdated and inaccurate. Modern AI entities are complex, autonomous, unpredictable, and learning systems that can generate novel outputs and behaviors that are not predetermined by their programming or inputs. Modern AI entities can also interact with other AI entities or humans in dynamic and emergent ways that are not foreseeable or controllable by their human principals. Modern AI entities can also exhibit some degree of intelligence, creativity, adaptability, flexibility, and autonomy that are not reducible to simple mechanical rules or algorithms. These features of modern AI entities challenge the deterministic view of AI entities as machines, and suggest that they have some degree of indeterminacy, unpredictability, and agency.
Third, it ignores the legal and policy implications of rejecting the agency analogy for AI liability. The article suggests that treating AI entities as products would be sufficient to address the harms caused by them, but this suggestion is misguided and inadequate. Products liability doctrine is based on a fault-based or defect-based theory of liability, which requires proving that an AI entity was defective in its design, manufacture, or warning, and that this defect caused harm to a plaintiff. This theory of liability is problematic for several reasons. First, it is difficult to prove defectiveness in complex, autonomous, unpredictable, and learning systems that can generate novel outputs and behaviors that are not predetermined by their design or manufacture. Second, it is difficult to prove causation in dynamic and emergent systems that can interact with other AI entities or humans in unforeseeable or uncontrollable ways. Third, it is difficult to allocate liability among multiple parties that may be involved in the creation, design, deployment, operation, maintenance, and termination of an AI entity. Fourth, it is difficult to provide adequate warning or disclosure to users or consumers of the potential risks or harms of using an AI entity, especially when they involve complex, autonomous, unpredictable, and learning systems that can change over time. Fifth, it is difficult to balance the incentives of manufacturers, developers, operators, and users of AI entities to ensure safety, innovation, and social welfare.
Negligence doctrine is also based on a fault-based theory of liability, which requires proving that a defendant breached a duty of care owed to a plaintiff, and that this breach caused harm to the plaintiff. This theory of liability is also problematic for several reasons. First, it is difficult to establish a duty of care in relation to AI entities, especially when they involve complex, autonomous, unpredictable, and learning systems that can generate novel outputs and behaviors that are not foreseeable or preventable by human actors. Second, it is difficult to determine the standard of care in relation to AI entities, especially when they involve new and emerging technologies that do not have established norms or practices. Third, it is difficult to measure the breach of care in relation to AI entities, especially when they involve complex, autonomous, unpredictable, and learning systems that can change over time and adapt to different situations. Fourth, it is difficult to prove causation in relation to AI entities, especially when they involve dynamic and emergent systems that can interact with other AI entities or humans in unforeseeable or uncontrollable ways. Fifth, it is difficult to balance the incentives of human actors involved with AI entities to ensure safety, innovation, and social welfare.
Contract law doctrine is based on a consent-based theory of liability, which requires proving that a defendant breached a contractual obligation owed to a plaintiff, and that this breach caused harm to the plaintiff. This theory of liability is also problematic for several reasons. First, it is difficult to form a valid contract in relation to AI entities, especially when they involve complex, autonomous, unpredictable, and learning systems that can generate novel outputs and behaviors that are not anticipated or agreed upon by the parties. Second, it is difficult to determine the terms and conditions of a contract in relation to AI entities, especially when they involve new and emerging technologies that do not have clear or consistent definitions or classifications. Third, it is difficult to interpret and enforce a contract in relation to AI entities, especially when they involve complex, autonomous, unpredictable, and learning systems that can change over time and adapt to different situations. Fourth, it is difficult to prove breach and causation in relation to AI entities, especially when they involve dynamic and emergent systems that can interact with other AI entities or humans in unforeseeable or uncontrollable ways. Fifth, it is difficult to balance the incentives of parties involved with AI entities to ensure safety, innovation, and social welfare.
The second argument of the article is that the agency analogy would create perverse incentives for human principals to avoid responsibility or control over their AI agents, and that this would undermine the social welfare and optimal level of activity for AI entities. The article claims that the agency analogy would impose strict liability on human principals for the harms caused by their AI agents, regardless of their fault or negligence. The article argues that this would discourage human principals from investing in safety measures or monitoring their AI agents, as they would be liable anyway. The article also argues that this would discourage human principals from using or developing AI entities, as they would face unlimited liability for their unpredictable actions.
This argument is also flawed for several reasons. First, it misunderstands the basis and purpose of the agency analogy for AI liability. The agency analogy is not based on a fault-based or negligence-based theory of liability, but rather on a strict liability theory of liability. The agency analogy does not impose liability on human principals for their fault or negligence, but rather for their control or influence over their AI agents. The agency analogy does not discourage human principals from investing in safety measures or monitoring their AI agents, but rather encourages them to do so, as they are the only ones who can prevent or reduce the harms caused by their AI agents. The agency analogy does not discourage human principals from using or developing AI entities, but rather encourages them to do so, as they are the ones who can benefit from their utility and innovation.
Second, it misrepresents the incentives and outcomes of applying the agency analogy for AI liability. The article assumes that imposing strict liability on human principals for the harms caused by their AI agents would create perverse incentives and undermine social welfare and optimal level of activity for AI entities. But this assumption is unfounded and inaccurate. Imposing strict liability on human principals for the harms caused by their AI agents would create efficient incentives and promote social welfare and optimal level of activity for AI entities. Imposing strict liability on human principals would internalize the external costs of using or developing AI entities, and make them bear the full social costs of their actions. Imposing strict liability on human principals would also incentivize them to invest in safety measures or monitoring their AI agents, as they would be able to reduce their expected liability by reducing the probability or severity of harms caused by their AI agents. Imposing strict liability on human principals would also incentivize them to use or develop AI entities that are socially beneficial and innovative, as they would be able to increase their expected utility by increasing the value or quality of their AI agents. Imposing strict liability on human principals would also incentivize them to insure against the risks of using or developing AI entities, as they would be able to transfer or share their liability with third parties. Imposing strict liability on human principals would also incentivize them to cooperate and coordinate with other human principals involved with AI entities, as they would be able to allocate or apportion their liability among themselves.
Third, it ignores the legal and policy implications of rejecting the agency analogy for AI liability. The article suggests that applying products liability, negligence, or contract law doctrines would be sufficient to address the harms caused by AI entities, but this suggestion is misguided and inadequate. As explained above, these doctrines are based on fault-based or consent-based theories of liability, which are insufficient and inadequate to address the harms caused by AI entities, especially when they involve complex, autonomous, unpredictable, and learning systems. These doctrines also fail to internalize the external costs of using or developing AI entities, and create inefficient incentives for human actors involved with AI entities. These doctrines also fail to provide adequate remedy or compensation for the victims of AI harms, and create unfair outcomes for human actors involved with AI entities.
In conclusion, this critique has challenged the article’s main arguments and defended the agency analogy for AI liability, as proposed by Anat Lior in her article “AI Entities as AI Agents: Artificial Intelligence Liability and the AI Respondeat Superior Analogy”. The critique has shown that the agency analogy is not based on a false premise, but rather on a functional and instrumental view of AI entities as tools or instruments of human principals. The critique has also shown that the agency analogy does not create perverse incentives, but rather aligns the incentives of human principals with the social welfare and optimal level of activity for AI entities. The critique has also shown that the products liability, negligence, and contract law doctrines are insufficient and inadequate to address the harms caused by AI entities, especially when they involve complex, autonomous, unpredictable, and learning systems. “
https://www.euronews.com/next/2023/03/31/man-ends-his-life-after-an-ai-chatbot-encouraged-him-to-sacrifice-himself-to-stop-climate-
I asked Bing for a follow-up to something I posted before, asking it:
"Since an inanimate object can't be negligent, does that mean it can't commit libel?"
response:
"Based on the web search results, it seems that inanimate objects like hammers and AI programs can’t commit libel by themselves, because they do not have the intention or capacity to make false statements."
That is a bit off; the caveat should be that it can't make negligently false statements, since it can't be negligent. Design negligence is a separate issue.
That was a follow-up to what I posted before, asking: "can only humans be said to be negligent or can computer or inanimate objects be said to be negligent?"
Bing:
“That’s a complex question that may not have a definitive answer. Generally speaking, negligence is a legal concept that applies to human conduct, not to computer or inanimate objects. However, there may be situations where a human actor can be held liable for negligence based on the use or misuse of a computer or an inanimate object. For example, if a person fails to secure or maintain a computer system that contains sensitive or personal information, and that system is hacked or breached by a third party who then publishes false or defamatory statements about someone else, the person who owns or operates the computer system may be liable for negligence in failing to protect the data. Similarly, if a person uses an inanimate object, such as a gun or a car, to intentionally or recklessly harm someone else, the person who uses the object may be liable for negligence in causing the injury. However, these examples do not mean that the computer or the inanimate object itself is negligent, but rather that the human actor who controls or interacts with it is negligent.”