The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Other Torts (Besides Libel) and Liability for AI Companies
Last week and this, I've been serializing my Large Libel Models? Liability for AI Output draft. For some earlier posts on this (including § 230, disclaimers, publication, and more), see here; one particularly significant point is at Communications Can Be Defamatory Even If Readers Realize There's a Considerable Risk of Error. Today, I close with some thoughts on how my analysis, which has focused on libel, might be generalizable to other torts.
[* * *]
[A.] False Light
Generally speaking, false light tort claims should likely be treated the same way as defamation claims. To be sure, the distinctive feature of the false light tort is that it provides for a remedy when false statements about a person are not defamatory, but are merely distressing to that person (in a way the reasonable person test would recognize). Perhaps that sort of harm can't justify a chilling effect on AI companies, even if harm to reputation can. Indeed, this may be part of the reason why not all states recognize the false light tort.
Nonetheless, if platforms are already required to deal with false material—especially outright spurious quotes—through a notice-and-blocking procedure, or through a mandatory quote-checking mechanism, then adapting this to false light claims should likely produce little extra chilling effect on AIs' valuable design features.
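To make the comparison concrete, here is a minimal, purely illustrative sketch in Python of the kind of notice-and-blocking mechanism the analysis contemplates. Everything in it (the class name NoticeRegistry, the substring matching, the refusal message) is my own assumption about one cheap way such a filter could work, not a description of any actual product.

# Hypothetical notice-and-blocking filter: statements that are the subject of a
# notice are recorded, and any later output repeating them is withheld.
class NoticeRegistry:
    def __init__(self):
        self.blocked_statements = set()

    def record_notice(self, statement: str) -> None:
        """Record a challenged statement (e.g., a spurious quote) after receiving notice."""
        self.blocked_statements.add(statement.lower())

    def filter_output(self, model_output: str) -> str:
        """Return the output unless it repeats a blocked statement."""
        lowered = model_output.lower()
        for statement in self.blocked_statements:
            if statement in lowered:
                return "[Output withheld: it repeated material that is the subject of a notice.]"
        return model_output

# Usage: after a complaint is received, the operator records the challenged quote,
# and later responses containing it are withheld rather than distributed.
registry = NoticeRegistry()
registry.record_notice('"X admitted to embezzling funds" -- Reuters')
print(registry.filter_output('One report said, "X admitted to embezzling funds" -- Reuters.'))

The point of the sketch is only that, once notice is received, suppressing a specific challenged statement is technically cheap; whether notice should be the legal trigger is the question the analysis addresses.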
[B.] Disclosure of Private Facts
An LLM is unlikely to produce information that constitutes tortious disclosure of private facts. Private information about people covered by the tort—for instance, about sexual or medical details that had not been made public—is unlikely to appear in the LLM's training data, which is largely based on publicly available sources. And if the LLM's algorithms come up with false information, then that's not disclosure of private facts.
Nonetheless, it's possible that an LLM's algorithm will accidentally produce accurate factual claims about a person's private life. ChatGPT appears to include code that prevents it from reporting on the most common forms of private information, such as sexual or medical history, even when that information has been publicized and is thus not tortious; but not all LLMs will include such constraints.
In principle, a notice-and-blocking remedy should be available here as well. And because the disclosure of private facts tort generally requires intentional conduct, negligence liability should likely be foreclosed.
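For what it's worth, the kind of constraint described above could be approximated with a simple category screen that declines to discuss certain classes of personal information about named individuals. The keyword lists, function name, and refusal policy below are illustrative assumptions only; real systems presumably use more sophisticated classifiers.

# Hypothetical category screen for private-facts risk: decline to answer when a
# request about a person touches categories such as sexual or medical history.
SENSITIVE_CATEGORIES = {
    "medical": ["diagnosis", "medical history", "illness"],
    "sexual": ["sexual history", "affair", "orientation"],
}

def screen_request(prompt: str):
    """Return a refusal message if the prompt asks about a sensitive category, else None."""
    lowered = prompt.lower()
    for category, keywords in SENSITIVE_CATEGORIES.items():
        if any(keyword in lowered for keyword in keywords):
            return f"I can't share {category} details about private individuals."
    return None

print(screen_request("Tell me about John Doe's medical history."))   # refusal message
print(screen_request("Summarize John Doe's published research."))    # None: allowed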
[C.] False Statements That Are Likely to Lead to Injury
What if an LLM outputs information that people are likely to misuse in ways that harm persons or property—for instance, inaccurate medical information?[1]
Current law is unclear about when falsehoods are actionable on this theory. The Ninth Circuit rejected a products liability and negligence claim against the publisher of a mushroom encyclopedia that allegedly "contained erroneous and misleading information concerning the identification of the most deadly species of mushrooms,"[2] partly for First Amendment reasons.[3] But there is little other caselaw on the subject. And the Ninth Circuit decision left open the possibility of liability in a case alleging "fraudulent, intentional, or malicious misrepresentation."[4]
Here too the model discussed for libel may make sense. If there is liability for knowingly false statements that are likely to lead to injury, an AI company might be liable when it receives actual notice that its program is producing false factual information, but refuses to block that information. Again, imagine that the program is producing what purports to be an actual quote from a reputable medical source, but is actually made up by the algorithm. Such information may seem especially credible, which may make it especially dangerous; and it should be relatively easy for the AI company to add code that blocks the distribution of this spurious quote once it has received notice about the quote.
Likewise, if there is liability on a negligent design theory (for instance, for negligently failing to add code that checks quotes and blocks the distribution of made-up ones), that theory might sensibly apply to all such spurious quotes, not just defamatory ones.
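A quote-checking design of the sort such a negligence theory points to could, at its simplest, decline to attribute quotation marks to a named source unless the quoted passage can be found in a corpus of that source's actual text. The sketch below is purely illustrative; the corpus, the regular expression, and the function names are my assumptions about one possible implementation, not a claim about how any existing system works.

import re

# Hypothetical quote-verification step: before distributing output, check that any
# passage presented inside quotation marks appears verbatim in a known source document.
SOURCE_CORPUS = {
    "reuters-2023-report": "The company disclosed the settlement in a filing on Tuesday.",
}

def extract_quotes(text: str):
    """Pull out passages the output presents as direct quotations."""
    return re.findall(r'"([^"]+)"', text)

def quotes_verified(output: str) -> bool:
    """True only if every quoted passage appears verbatim in some known source."""
    for quote in extract_quotes(output):
        if not any(quote in doc for doc in SOURCE_CORPUS.values()):
            return False
    return True

draft = 'As Reuters put it, "The company disclosed the settlement in a filing on Tuesday."'
fabricated = 'As Reuters put it, "The CEO admitted the charges were true."'
print(quotes_verified(draft))       # True: the quote matches a known source
print(quotes_verified(fabricated))  # False: flag or block before distribution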
[D.] Accurate Statements That Are Likely to Facilitate Crime by Some Readers
Sometimes an AI program might communicate accurate information that some readers can use for criminal purposes. This might include information about how one can build bombs, pick locks, bypass copyright protection measures, and the like. And it might include information identifying particular people who have done things that may make them targets of retaliation by some readers.
Whether such "crime-facilitating" speech is constitutionally protected from criminal and civil liability is a difficult and unresolved question, which I tried to deal with in a separate article.[5] But, again, if there ends up being liability for knowingly distributing some such speech (possible) or negligently distributing it (unlikely, for reasons I discuss in that article), the analysis given above should apply there.
If, however, legal liability is limited to purposeful distribution of crime-facilitating speech, as some laws and proposals provide,[6] then the company would be immune from such liability, unless the employees responsible for the software were actually deliberately seeking to promote such crimes through the use of their software.
[1] See Jane Bambauer, Negligence Liability and Autonomous Speech Systems: Some Thoughts About Duty, 3 J. Free Speech L. __ (2023).
[2] Winter v. G.P. Putnam's Sons, 938 F.2d 1033, 1034 (9th Cir. 1991).
[3] Id. at 1037.
[4] Id. at 1037 n.9.
[5] Eugene Volokh, Crime-Facilitating Speech, 57 Stan. L. Rev. 1095 (2005).
[6] See id. at 1182–85.
Would willful blindness substitute for actual intent to distribute actionable material?
Was the Trump "indictment" AI generated?
It actually would make more sense. A thoughtful piece:
https://reason.com/2023/04/04/trumps-new-york-indictment-turns-one-hush-payment-into-34-felonies/
What about THEFT?
If they fabricate Reuter's quotes, they have stolen it's brand.
Every word of that is gibberish, and the punctuation isn't any better.
A better OP to raise this question would be hard to find: Why must prevention of public harms arising from non-defamation publications depend on a mechanism so inflexible, so unresponsive, and so remote to access as legal claims?
When the subject matter is publishing, and the remedy is law, there will always be a frustrating mismatch built in. Many if not most harms from malignant publishing must be prevented before they happen, or not prevented at all; too often no adequate post-hoc recompense is possible. Broken eggs cannot be unscrambled.
What can the law do about that? Not much, if we insist, as we ought to do, that prior constraints on publishing by government should almost never be allowed. All should acknowledge it is too dangerous to put power to decide what is truth, and what is not truth, into government hands.
Nevertheless, there remains a flexible, effective, and government-free remedy for problems like election hoaxes, public health lies, scurrilous-but-true allegations stemming from neighbor disputes, Nigerian prince scams, quack cancer remedies, allegations of pizzeria basement child exploitation, and a host of other publications which may not be defamatory, but which do cause damage to public life. That remedy is private editing prior to publication.
As liability conundrums mount in response to AI publishing challenges, focus on private editing ought to increase. Private editing prior to publication offers superior capacity to accomplish both needed goals—public protection from gratuitous damage, and private protection to enable AI companies to pursue development without being sued out of business. No other suggestion so far offered makes as much sense, or can accomplish so much at so little added cost.
I know, you internet utopians are thinking, "Oh, the humanity!" Private editing = end of the world, right? Domination by elites!
Why do you think that?
I can think of only one sensible answer, and it implicates bad motives. There is an obviously widespread constituency (and of course a changeable one) in favor of opportunistic lying. If lies can wrest political power from opponents, why shouldn't that be counted an expressive freedom advantage to be claimed as a right to lie?
But what sensible person thinks that understanding justifies putting the power of publishing at its service? What public purpose can be advanced by handing to opportunistic liars an enormous force multiplier like publishing? No purpose can be served that way which cannot be better served in other ways.
Private editing practiced near-universally would erect an imposing barrier to publishing purposeful lies to mobilize political factions. Private editing tends (but only tends) to encourage public policy founded on truth, or at least on attempts to match government efforts to practical realities eventually accounted for and responded to politically.
The great blessing of private editing as a remedy—in addition to its matchless power to crowd government out of the censorship business—is that private editing practiced widely enough becomes diffuse, and only marginally effective. Absent over-concentrated and giantistic media, private editing lacks any capacity to clamp down on content. And that is an invaluable safeguard.
Diffuse, various, and mutually competitive private media respond naturally to furnish outlet to every species of opinion which can command even a sliver of marketability. If some heterodox opinion arises and spreads a bit, private media to serve it promptly spring up, and then thrive in proportion to the popularity of that kind of advocacy. If regional differences define different classes of opinion, properly diverse private media respond flexibly within regions.
If private media are not giantistic, only tiny financial resources are needed to set up as a publisher. Do that, and you become your own editor, and achieve unfettered self-expression—with the added enticement that folks who do it successfully can become wealthy as a result. Who knows, even public honors and high office might follow.
To achieve financial advancement, today's economy offers few opportunities as promising for people of little means as internet publishing. That already-existing opportunity would be notably multiplied if giantistic media had not been encouraged legally to dominate the commercial space.
Which points to another general observation, one that gets almost no attention in present debates over internet policy. Free published expression does not come free. It incurs expense.
When folks sit at keyboards, and type without charge whatever they hope to see published worldwide, they have been gifted that freedom by others, upon whom they become dependent. The old rule continues in force that press freedom is really available only to someone who owns a press.
A less-noted but more important corollary is that even folks who own presses must somehow use publishing activities to raise money sufficient to defray publishing costs. Those include the cost to recruit an audience, the cost to curate the audience by expressive offerings the desired audience prefers, the cost to mobilize the attention of the audience as a product, the cost necessary to sell that product to would-be advertisers, the cost to collect money earned by those efforts, the cost to use that money to pay contributors, the cost to use that money to pay to distribute the expressive content, the cost to keep the lights on, and to make a profit.
Any would-be publisher who cannot get over that series of institutional bars is destined either to vanish as a publisher, or to fall under control of some other party—whether a government interest or a private one—which will limit the publisher's freedom to publish at pleasure. Thus, press freedom as a national institution depends on public policy to encourage and protect private publishing business models. And those models cannot protect press freedom unless they enable publishing-related activities to pay their own bills.
Before it was supplanted by Section 230, the practice of private editing prior to publication did a better job of protecting press freedom, and created more diverse opportunities to do so, than today's internet giantism affords. There is no reason not to go back to that prior practice. Recognition of novel AI-related challenges to publishing practice only serves to make the case more obvious. Those ambitious to advance AI technology would be wise to implement private editing prior to publication on their own, during whatever interval it takes to perfect their technology.
The degree of anthropomorphism in these posts is high. EV seems to say that the creators of the AI must be held to the same standards as people who make utterances. Some problems are:
1) Did the AI know that the person injured was real and not fictional? A plaintiff would have to provide the entire conversation to show the context.
2) Is there a reasonable way for the creators to alter the AI to avoid the harm? EV’s arguments seem to make no distinction between a human speaker and the AI. The AI’s abilities are impressive but nowhere near as broad as those of a human. Who would have the burden of proof regarding the AI creator’s abilities to mitigate harm? EDIT: EV's earlier assertion that the AI could have fact checked itself is unsubstantiated.
3) The whole concept of reasonableness cannot be applied to a machine, and it does not follow that the AI creator's reasonableness is linearly reflected in the machine's output.
You have a tiger by the tail here, EV, in applying laws written for humans to a machine. To project the reasonableness back to the AI creators is shaky. And now that AIs can write code, we can have an AI that was created by other AIs. It would make a great law school project to think through how a new branch of the law as applied to machines should work.
The day may come in your lifetime when biology creates an intelligent animal, and then the law professors can debate how laws apply to animals, or to the lab scientists who created the gene modification for the animals.
Perhaps look back to classical SF novels. If an alien visits Earth, what laws apply to it?
Tuttle, perhaps becoming accustomed to publishing under Section 230 has impaired commenters' ability to understand EV. He seems not to have posited that AI output will enjoy Section 230 protection.
On that basis, either the AI development company will be the publisher of its output, some other operator of the AI will be the publisher of its output, some re-publisher of AI output will publish it, or the output will not be published. In all cases except the last, someone will be liable for damages if defamation happens, and given the conceded unreliability of the AI, reckless disregard and actual malice will be reasonable presumptions.
All the who-shot-John about the inability of a machine to form an intent has no place to land in that scenario. The putative publishers will be real people (or corporations) all the way down. Which is to say, real people without ability to assert any state of mind in their own defense, because the state-of-mind question was resolved by the reckless decision to publish the AI output.
I’m not certain why you think it should matter that the machine is an intermediary. Dog owners are liable for the harm done by their canines. Managers are responsible for their subordinates. Manufacturers of defective products are liable for the harms created by those. Removing AI from legal liability because “it’s a machine” and “but it’s really fancy and technologically advanced” upends a lot of theories of liability.
Some states follow the "one bite" rule for dogs. The first bite is not actionable unless there was reason to believe the dog was unusually dangerous. This rule corresponds roughly to a notice and takedown model in the online world.
Not really. The AI machine is analogous to the dog, not the defamatory article that was invented out of whole cloth.
I just happened to run into an article on copyright law that notes:
https://www.wipo.int/wipo_magazine/en/2017/05/article_0003.html “Take the case of Microsoft Word. Microsoft developed the Word computer program but clearly does not own every piece of work produced using that software. The copyright lies with the user, i.e. the author who used the program to create his or her work.”
While it's true that the copyright office in the US so far has said it won't grant copyrights to the output of AI to its user, it also didn't assign them to the vendor of the tool.
The entity responsible for the content in Microsoft Word and Adobe Photoshop, etc., is the user. It's not clear what characteristics of a tool magically change that. The claim that this is a different kind of tool, one that magically absolves the user of responsibility and attributes it to the vendor (not merely a claim of "design negligence" on the vendor's part), needs to be argued precisely and in detail, not ignored or hand-waved away.
The most crucial elements of the case needing to be made are glossed over. I've gone through in detail in other comments making the analogy to search engine output that the user can't predict. Differences from such a Section 230 case need to be specified in detail, as to how the publisher of the software is suddenly viewed as the publisher of the output of the software when it isn't for Word or a search engine. Maybe a case can be made, but all the hand-waving and bait-and-switch so far hasn't done it.
I suspect many AI folks who are at all interested in these issues will spot the same flaws within a few minutes' skim of the paper, even if lawyers seem to anthropomorphize and not spot all the subtle implications taken for granted regarding agency, or the bait and switch between vendor and software.
Again: search engines simply pass along other people's content. They don't create their own content. Word doesn't create its own content. AI programs do create content.
The issue is agency. Humans are "creators" capable of agency.
Section 230 codified the reality that the only entity with agency involved in posting something to the net is the user. There aren't humans from the social media companies or search engines in the room, entities with agency, involved. AI seems to be confusing people into thinking it somehow has agency and can be held responsible (but then the sleight of hand where they then are forced to confront that they aren't humans who can be sued and they plug in the AI vendor, bait-and-switch).
Those who claim differently need to actually make an in depth argument instead of mere assertions.
RealityEngineer, ask yourself, "By what means does the output of the AI text generator get to any person who can read it?"
The relevance is that whoever owns or operates that means is highly likely to be adjudicated the publisher of the output, and thus liable for defamation if the content proves false and defamatory. The person in question could be the owner of the AI machinery, or it could be an online publisher such as a social media platform, or it could be another company in the publishing business. It could even be a software engineer who arranged for automatic publication of AI output without any other intermediaries.
What will not happen, absent passage of some analogue to Section 230 for AI, is a judicial consensus that conduct by identifiable people, however construed, which foreseeably results in publication of defamatory machine output, will get legal impunity.
I predict that if advocacy to extend Section 230 type protection to AI output gets congressional attention, passing it will be a heavy lift. Legislators will want practical demonstrations that such technology is not only defamation proof, but also hoax proof, politically inert, and generally incapable of inflicting a dauntingly broad range of public evils.
When Section 230 came up for consideration, a notable constituency to suppress strict libel enforcement was already in place. Where and how could a similarly large political bloc be mobilized on behalf of AI technology online?
In case these are still viewed, here is a GMU economist on a prior example where he suggests that shifting responsibility to the users of a product was the better option:
https://marginalrevolution.com/marginalrevolution/2013/02/aviation-liability-law-and-moral-hazard.html
" Aviation, Liability Law, and Moral Hazard
by Alex Tabarrok February 19, 2013
By 1994 the threat of lawsuits had driven the general aviation industry into the ground. Cessna and Beech ceased production in the 1990s and the other major player, Piper, went bankrupt. The problem was caused by liability law and the long-tail. Cessna, Beech, and Piper had been producing planes since 1927, 1932, and 1927, respectively, and airplanes last a long time. Thousands of aircraft built in the 1930s and 1940s are actively flown today and the average age of the general aviation fleet (small non-commercial aircraft) is more than 24 years. Liability law also grew stronger in the 1980s and 1990s so aircraft manufacturers found themselves being sued for aircraft that they had produced decades earlier. Essentially, the manufacturers found that they could be sued for any aircraft that they had ever produced.
...My latest paper (with Eric Helland) just appeared in the JLE. We use the exemption at age 18 to estimate the impact of tort liability on accidents as well as on a wide variety of behaviors and safety investments by pilots and owners. Our estimates show that the end of manufacturers’ liability for aircraft was associated with a significant (on the order of 13.6 percent) reduction in the probability of an accident.
....GARA thus appears to be a win-win because it revitalized the industry and increased safety. The latter came, in a sense, at the expense of the pilots and owners who now bore a greater liability burden but they were the least cost avoiders of accidents. Moreover, the pilots and owners of small aircraft were big supporters of GARA thus suggesting strongly that prior to GARA liability law for aircraft had been inefficient and destructive."