The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
The First Stab Theory
Last week and this, I've been serializing my Large Libel Models? Liability for AI Output draft. For some earlier posts on this (including § 230, disclaimers, publication, and more), see here; one particularly significant point is at Communications Can Be Defamatory Even If Readers Realize There's a Considerable Risk of Error. Today, I turn to another argument against liability.
[* * *]
Perhaps the strongest argument for immunizing AI companies from libel liability is that AI program output can be a valuable first stab at the user's research. The AI provides a helpful tentative analysis—whether about a particular person, or about a broader subject—that can help guide the user in investigating further. AI companies shouldn't be chilled from providing such valuable initial information by the risk of liability.
This in some measure mirrors certain privileges, for instance the privilege for reports to law enforcement. If I think my neighbor is beating his children, I should be encouraged to convey my suspicions to the police, without fear of liability. Perhaps I might be carelessly misperceiving things. I might not have looked further into the matter to decide whether my initial perception was correct. I might well be wrong.
But that's fine, because it's the job of the police to investigate further. Better that they have as much input as possible (perhaps short of deliberate lies[1]), even if some of the input is mistaken, and mistaken in a way that could put an innocent neighbor in the uncomfortable position of being questioned by the police or even erroneously arrested. Indeed, some courts have recognized absolute privileges (precluding lawsuits even for deliberate lies) for complaints to certain kinds of quasi-judicial bodies.[2]
Just as the police should have maximum input into their investigations, the argument would go, so each of us should be able to gather maximum information for our own investigations of whatever interests us—not just through tools such as Google, which always point to real sources (and are immunized by § 230 from liability for falsehoods in those sources), but also through AI programs that might sometimes generate fictitious quotes and cite fictitious sources. Then we would have the responsibility to investigate the matter further before acting on it, for instance by refusing to do business with someone because of what we've found.
I appreciate the force of this argument, especially since AI programs do seem to be potentially very helpful sources of information, despite their errors. It would be a real loss if their functionality had to be sharply reduced in order to prevent libel. And while negligence liability would in theory balance the loss against the gain, in practice it may be so unpredictable that many AI companies might err on the side of overconstraining their programs.
Nonetheless, I think that we have to acknowledge that, practically speaking, many users will view AI programs' output as the final step in some inquiries, not the first stab. That is especially likely if there's little at stake for the users in the decision (and thus little reason for them to research further), even if a great deal is at stake for the people defamed by an AI program's output.
Say, for instance, that an AI program becomes integrated into search results, and a search for Dr. Jane Schmane (to use a somewhat less common name than the usual Joe Schmoe) routinely yields a false quote reporting on her supposedly having been found guilty of malpractice. A typical prospective patient probably won't follow up to see if the quote is real, because there are lots of doctors the prospective patient could turn to instead. But the aggregate of these decisions could ruin Schmane's practice, based on users' understandable perception that, when an AI produces a quote, it's actually a real quote.
The absolute privileges I describe above also generally apply only under special circumstances that include "safeguards to prevent abuse of the privilege,"[3] such as
- means for punishing knowingly false statements (even if through criminal punishment for perjury or similar crimes rather than civil liability),[4]
- means for "strik[ing] from the record" statements that prove to be "improper," so that the false statements aren't further redistributed,[5] and
- the presence of an adversarial hearing at which the accused (or, better yet, the accused's counsel) could rebut the allegations.[6]
And qualified privileges are limited to situations where the statement is said to one of a particular set of narrow audiences for narrowly defined reasons;[7] if the statement is communicated to a different audience or for a different reason, it loses its immunity for being an "abuse of privilege."[8]
Of course, AI programs' output to their users lacks any such safeguards. There is no risk of any punishment (criminal or administrative) when AI companies repeatedly produce fake quotes; civil liability is the only legal constraint on that. What the AI says cannot be struck from the reader's mind—indeed, readers may well carelessly forward it to others. There is of course no opportunity for the target to rebut the false statement. And the AI program can, over time, distribute the statement to a potentially wide audience, even if just one query at a time.
So, again, I appreciate how AI programs can help users research various matters, including various people. In principle, if users generally viewed them only as a first stab at the end result, and wouldn't precipitously act on the initial response, it might make sense to create a new immunity for such programs. But in practice, I expect that users will often make certain decisions—such as a decision not to consider a prospective service provider or employment applicant—based just on the initial query. Indeed, the more reliable AI programs get (much as GPT-4 has been reported to be much more reliable than GPT-3.5), the more users are likely to do that. I therefore expect that AI programs' errors—such as fake reputation-damaging quotes—would seriously damage many people's reputations. And I'm therefore skeptical that it would be wise to categorically immunize AI companies from liability for such harms, if the harms could have been avoided using reasonable alternative designs.
[1] See, e.g., Cal. Civ. Code § 47(b)(5).
[2] See, e.g., Twelker v. Shannon & Wilson, Inc., 564 P.2d 1131 (Wash. 1977).
[3] Story v. Shelter Bay Co., 760 P.2d 368, 371 (Wash. Ct. App. 1988); Arroyo v. Rosen, 628 A.2d 1074, 1078 (Md. Ct. Spec. App. 1994).
[4] See, e.g., id.; Imperial v. Drapeau, 716 A.2d 244, 249 (Md. 1998); Restatement (Second) of Torts § 588 cmt. a.
[5] Story, 760 P.2d at 371.
[6] Imperial, 716 A.2d at 249; Arroyo, 648 A.2d at 1078.
[7] See, e.g., Restatement (Second) of Torts § 594 (1977) (protection of speaker's interest); id. § 595 (protection of the listener's or a third party's interest); id. § 602 (protection of the listener's interest, when the speaker and the listener have a special relationship).
[8] Restatement (Second) of Torts § 603 (1977) ("One who upon an occasion giving rise to a conditional privilege publishes defamatory matter concerning another abuses the privilege if he does not act for the purpose of protecting the interest for the protection of which the privilege is given."); id. § 604 ("One who, upon an occasion giving rise to a conditional privilege for the publication of defamatory matter to a particular person or persons, knowingly publishes the matter to a person to whom its publication is not otherwise privileged, abuses the privilege . . . .").
"This in some measure mirrors certain privileges, for instance the privilege for reports to law enforcement."
With the obvious exception of swatting, where "a person makes a false report to the police to make them startle, arrest, or even harm an unsuspecting victim," which can have deadly consequences.
"One high profile case involved a man in Los Angeles, Tyler Barriss, calling the cops on a man in Kansas, Andrew Finch. Barriss told police that Finch had killed a family member and held two others hostage — when the police arrived at Finch's house, they shot and killed him. Barriss was later arrested and sentenced to 20 years in prison for the crime."
My understanding is that there is not a uniform rule on civil liability for false reports, with some states making the report absolutely privileged and others allowing exceptions for knowingly false reports like swatting.
Luckily there's a federal law too:
The federal government and many states have laws that make swatting a crime. Under federal law, depending on the facts, a swatter can be charged with several possible offenses, including:
- "false information and hoaxes" if they call in something like a fake bomb, arson threat, or similar large-scale emergency to elicit a police response
- stalking if the swatting could cause someone significant emotional distress or if it places them in fear of serious injury or death
- internet threats for threatening to injure someone; and
- wire fraud for relaying false information that causes a waste of law enforcement's resources.
Somehow, I thought this would be about the OJ case (maybe OJ didn't "Stab First"??) or maybe even the Manson case (Charlie really didn't stab anyone and was maybe 120lbs soaking wet, OK, could still stab you, but not exactly the most intimidating guy around)
If AI is to be a research aid, it can provide references instead of information.
On the research angle, Microsoft is promoting AI-powered Bing as the first real innovation in search engines in twenty years. If PageRank was the last big change, that was closer to 30 years ago.
I have asked ChatGPT, in effect, "show your work." I wonder if there's room for improvement in its current state, which has been caught creating a well-formed citation to a medical journal article that never existed.
I can, perhaps because I don't know how hard it is, imagine an LLM that responds to "citation needed" by crafting an elaborate search engine query on Google Scholar and returning valid DOI links.
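For what it's worth, here's a rough sketch of how that check might work. Since Google Scholar has no public API, it swaps in the Crossref REST API, and the example citation is made up; this is just one possible design, not how any current chatbot behaves.

```python
# Rough sketch of the "citation needed" step imagined above. Google Scholar has no
# public API, so the Crossref REST API (api.crossref.org) stands in: look up the
# model-supplied citation and return the closest real record, which the user (or a
# wrapper around the LLM) can then compare against the claimed article.
import requests

def lookup_citation(citation_text):
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation_text, "rows": 1},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    if not items:
        return None
    top = items[0]
    doi = top.get("DOI")
    # If the returned title/DOI don't resemble the citation, it was likely fabricated.
    return {"doi": doi, "title": (top.get("title") or [""])[0],
            "link": f"https://doi.org/{doi}"}

# Hypothetical chatbot-supplied citation to check:
print(lookup_citation("Doe J., 'Chatbots and Fabricated Medical Citations', J. Med. 2021"))
```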
Why does this argument apply to AI more than it does for ordinary search?
E.g., suppose I search the name of some individual on Google. And for some reason, the algorithm displays as the top result a story that defames him in its headline. Say it says he was a wife-beater when he wasn't.
And yet, Section 230 and perhaps the First Amendment presumably protect Google, correct? What is different about AI?
It's why there aren't many "Adolf Hitlers" in the phone book (or Jerry Sandusky's)
There actually is a sportscaster named Gerry Sandusky who gets confused all the time with the bad one.
Used to pass gas for a surgeon named Charles Manson, who, ironically, was responsible for more deaths than the famous Charles Manson
Why would a surgeon need you to fart for him?
Esper, never had a colonoscopy?
try getting Surgery without a Gas Passer (Hint, bring a bullet or a hammer)
Mengele: "or Jerry Sandusky’s"
Sandusky's what?
Not too many Mengeles, either.
The 1A does not protect Google in that context (except subject to the ordinary limitations of Sullivan and its progeny). § 230 does, of course. What’s different about AI is that Google is passing along someone else’s defamatory story, while AI is manufacturing the defamatory story out of whole cloth.
EDIT: To be clear, AI could be passing along someone else's story and then it would be immune; this entire series on the VC is about the scenario in which AI does invent new defamatory claims.
In theory, the damaged person can sue the actual publisher
Or put differently, if we don’t make the AI company pay, then de facto, the plaintiff absorbs those damages. There are no other candidates involved
"Just as the police should have maximum input into their investigations"
The police do not and should not -- the Fourth Amendment exists to rather explicitly ensure that they do *not*.
A useful analogy would seem to be things like drugs and alcohol, where use of them can potentially cause harm. Libertarians argue that the damage done by outlawing them is greater than the harm of their legal use, and hold users responsible for any harm done when using those “tools” for recreation (or whatever). Holding users responsible deters drunk driving, and holding users responsible for spreading flawed content created by these tools will lead to awareness of their flaws and of the need to pay attention to them.
Libertarians grasp re: drugs (as with alcohol) that Pandora's box is open and people will get access to these tools, despite authoritarian drug warriors who don't grasp how hard it is to shut down black markets and who engage in wishful thinking.
In the world of LLMs, Pandora's box is open: there are leaked models being adapted to run on home or cloud hardware, and China will create or steal LLMs and offer them. If regulators try to ban use of them (as they wish to ban TikTok), word of what VPNs are will spread faster than word of what ChatGPT was, if that's what it takes to get users access to what they want. It seems, as with drugs, that if liability leads AI vendors to pull US offerings off the market, the more likely result will be less safe AI being used.
If they are shielded from liability and users are held responsible for mistakes with their use (as users of alcohol are responsible for car accidents, not the makers of the alcohol or the automobile), the free market will work to create better alternatives. By contrast, this liability approach will squash innovation, since the solutions require more R&D than is practical for startups.
It seems doubtful this approach will wind up happening in the real world. It ignores law review articles I've posted and hand waves away the core issues at hand regarding agency and how these tools are different from other software tools and somehow “agents” as is presumed when applying legal ideas applicable to humans to these programs (rather than legal ideas applicable to inanimate tools).
The rapid rise of these tools, and projections by Goldman Sachs of a potential $7 trillion rise in world GDP over a decade (even if their estimate is off), will inspire protections to be created for them, even if poorly reasoned legal arguments from dilettantes confuse people into thinking AI vendors should be held responsible, implying design negligence when vast resources have been spent by different vendors and they all have the same issues. Yes, some will be confused by hand waving from dilettantes into thinking there may be alternatives, but if so, legislation will likely protect them. Hatred of big tech and grandstanding might lead to regulations, but user demand will prevent the tools from being squashed entirely (or again: they'd be used in home-brew versions or via VPN to China or wherever).
Except of course it'd likely be regulatory capture, which protects big players and squashes small ones that can't afford it. So pushing this flawed path may lead to regulations that slow innovation toward more accurate tools.
Look up dram shop laws.
re: "dram shop laws"
Er, so what? It's not clear what point you think that makes.
There is a long history of holding retailers responsible for third-party harms where that purveyor should have known of the likelihood, on top of holding manufacturers responsible for faulty products.
It's not clear to you, because, despite expounding for hours on this topic, you don't know anything at all about law. Dram shop laws involve holding sellers responsible, along with users, for users' misuse of alcohol.
Yes: minors. It's not clear how the existence of one example somehow implies it's necessarily the case that it's what will or should happen here. It's you folks who seem unable to make any actual argument rather than an unjustified assertion. That was the point of saying "so what": but unfortunately the concept apparently went over your head, as it has done repeatedly.
No, not minors. (I mean, yes, minors, sure, but adults also.)
Also, you should avoid saying, "It's not clear" when you mean "I'm not smart enough to understand."
re: "Also, you should avoid saying, “It’s not clear” when you mean “I’m not smart enough to understand.”
No: the point is that you didn't make any actual argument, and I see no reason to assume there is one without it. I can speculate as to the possible flawed argument you might have intended, but it's not clear which flawed argument you were trying to imply. You still don't make one. The burden of proof in all of this is on those who claim there is a problem the legal system needs to deal with. Mere hand waving and implications of certainty don't cut it.
Put information in computer. Computer put information back out. Amazing.
“these tools are different from other software tools and somehow “agents” as is presumed when applying legal ideas applicable to humans to these programs (rather than legal ideas applicable to inanimate tools).”
No, these tools are not fundamentally different from other software tools, and they should not be legally treated like humans in this or that sense. Not sure which way you are arguing here.
re: "No, these tools are not fundamentally different from other software tools,"
In that case they are like Photoshop and MS Word and social media sites, where the user is solely responsible for the content they create using those tools. You neglect to notice that people are arguing they are somehow different.
The user is not solely responsible for the output of an LLM. They didn't provide the input used to train it, they didn't specify the architecture of the model, and they are not the ones executing it.
Your eagerness to excuse the LLM creators and operators makes you post ridiculously wrong ideas. Please stop.
Maybe core LLM software, before any "input" and without any access to the internet or any outside information, could be likened to Photoshop or Word. A program in such a state, if asked about any person, place, or thing, would initially know nothing and presumably respond, "I don't know."
Otherwise, ChatGPT and the like as they currently exist are more comparable to a search engine (not the same, but more of an apt comparison than Photoshop). If they wanted to avoid liability, maybe these chatbots could act more like a search engine, providing an external source for everything that it says, and phrasing answers as "according to X and Y sources ..."
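For illustration, a minimal sketch of that "according to X and Y sources" approach, where `search_index` and `llm_complete` are hypothetical stand-ins for whatever retrieval backend and model API a vendor might actually use; this is one possible design, not a description of how existing chatbots work:

```python
# Minimal sketch of the "cite your sources" pattern described above: retrieve real
# documents first, then ask the model to answer only from them and to attribute
# each claim. `search_index` and `llm_complete` are hypothetical stand-ins for a
# retrieval backend and a model completion API.
def grounded_answer(question, search_index, llm_complete):
    sources = search_index(question, top_k=3)   # e.g. returns [(url, excerpt), ...]
    context = "\n\n".join(
        f"[{i + 1}] {url}\n{excerpt}" for i, (url, excerpt) in enumerate(sources)
    )
    prompt = (
        "Answer the question using ONLY the numbered sources below. "
        "Attribute every factual claim, e.g. 'According to [1] ...'. "
        "If the sources do not answer the question, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return llm_complete(prompt)
```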
re: "If they wanted to avoid liability, maybe these chatbots could act more like a search engine, providing an external source for everything that it says, and phrasing answers as “according to X and Y sources …”"
That isn't the way this tech works. Again, the user is the only human involved in using this tool.
Again, you're wrong. You don't understand how the tech works.
The odds are I have more of an understanding of the tech than anyone I've seen posting here, aside from one other on my side who mentioned training models like this (though even the top folks creating it don't, in some senses, know how it does what it does); I've worked in the field in the past. However, I needn't demonstrate that, since the burden of proof is on those claiming there is a problem the legal system needs to deal with to actually make an argument. Those who actually know how the tech works of necessity grasp basic logic and can spot those who hand wave and avoid doing so.
Without liability there's less incentive to make them "safe" in the first place, though.
Maybe people missed this:
https://www.npr.org/2023/02/09/1155650909/google-chatbot-error-bard-shares “Google shares drop $100 billion after its new AI chatbot makes a mistake”
These companies have incentive already to produce more accurate tools, if nothing else to compete with each other. There is no need for some sort of misguided attempt to provide incentive via this legal approach. The missing incentive is for users to learn not to believe everything they read, so holding them liable will help impart that lesson. Again: it's a useful one to learn in general, people fall for lots of junk. I guess the issue is that there is self-interest bias in attorneys hoping to find a way to go after deep pockets and yet pretend they are doing good for the world by doing so, whether the world of hundreds of millions of users (and growing) thinks so or not.
Authoritarians often think they are forcing people to do something for their own good and rationalize it that way. Who cares if this AI, even flawed as it is, can help educate third-world people and its development should be sped up to improve their lives, or help improve the healthcare industry in this country to save lives. It's apparently too important to push the idea that users can't possibly be held responsible for their own actions with the tools they use, that elitists need to take those tools away since they define humans as a priori incapable of even being allowed to choose to take full responsibility for their beliefs; thought crime needs to be prevented, apparently. Yet it's not important enough to read other law review articles that already addressed relevant issues (given there aren't many on the topic), or to learn about the tech in depth before pretending to have a useful critique of its design, or to provide a detailed argument as to how these tools are a priori somehow different from other tools when trying to apply theories other than design negligence.
A preview of "we're gonna destroy section 230 unless you censor harassment, oh, and start with the harassing tweets of our political opponents right before an election!"
"Sir, yes sir!"
There is no need for some sort of misguided attempt to provide incentive via this legal approach.
RealityEngineer, maybe no need for incentive. But there is a pressing need for protection against the damage done while Google or anyone else experiments on the public. Protection will not prove hard to provide.
Do not claim license to publish lies under Section 230. Rely on human judgment. Hire trained editors. During whatever interval prototyping requires, edit prior to publication whatever output the AI serves up. Human editors will prove surprisingly efficient, because their only task will be to identify whether the possibility of libel exists, not whether the output is the truth. For shorter outputs, that will be the work of a few seconds each.
Of course that will not rocket the stock price back to outlandish heights. More the opposite. But that’s a good thing. It reinforces the incentive you say already exists.
When the AI no longer needs human supervision, then it will have earned the high stock price—which of course was previously based on a public misconception that the technology would work reliably, or at least be treated as if it did.
I am concerned with lawyers seeking to funnel money into their pockets. Like qualified immunity, it is presented as if they are getting away with it, when it’s just blocking access to government deep pockets in civil suits.
There may or may not be real issues, but we need to keep an eye on motivations. “It works in synergy with justice!” Riiiiight.
Hundred billion dollar (e’en trillion) corporations doing the real and rare work of advancing tech, and a half-parasite class finds a target rich new thang to attach to.
Self-driving cars are slower to adopt. Keep in mind that if they are better than humans right now, we should replace human drivers with AI and save lives by the thousands. Sadly, AI will make mistakes, and lawyers will sue, hurting the cause and delaying rollout, never mind that the common man they bleat they care about continues dying at higher rates because the lawyer's dad didn't use a condom.
So, yes. Keep an eye out.
>When the AI no longer needs human supervision
There is reason to suspect that it's an "If" rather than a "When".
My friend, after a couple of weeks of reading your streams of unvarnished hucksterism laced with even more unvarnished belittling of your audience (including in this very post!), for you to write something like this is a really amazing exercise in projection.
I don't know if you're making yourself feel better grinding out these screeds, but the last thing you're going to do with that sort of approach is change anyone's mind so it's not really clear why you persist. I'm certainly not going to expend any more of my time trying to engage in a dialogue with someone who is clearly engaged in a monologue.
If whatever stock shares/options you may hold are taking a beating due to the increasing public awareness of the serious and foundational limitations of this technology, that's a real shame. But again, you're not going to reverse that dynamic by castigating people for not having the sense to understand what's really best for them and trying to bully/shame/cow them into shutting up with their pesky concerns so you and your buddies can get back to your "we're saving the WORLD, yo" ivory tower unbridled by any concerns about the knock-on effects of your shiny toys.
Peace out.
re: quoting my statement using an analogy to "authoritarians": the reason for that is that this is a libertarian site. People are familiar with the thinking patterns of those who fall into authoritarian traps, and it seemed making the analogy might cause people to step back and consider whether they were falling into the same mindset.
re: "unvarnished hucksterism laced with even more unvarnished belittling of your audience"
There is no "hucksterism" and I own 0 stock in any relevant entity at the moment. Vast numbers of people say they are useful, even if some who post here (don't recall which ones have taken this attitude) try to imply they are of little use since they aren't accurate.
I've been accused of being a bot and get all sorts of replies that neglect to deign to even bother to make an argument and yet imply they are authorities that should be accepted without question. I belittled folks that continued juvenile attacks and seemingly neglected to bother learning more about the tools they are attacking, which perhaps you might label "hucksterism" due to the implication that they are useful.
>These companies have incentive already to produce more accurate tools, if nothing else to compete with each other.
These companies have incentive to produce accurate tools only to the point where the cost of increasing the accuracy doesn’t exceed the benefits in the market of having a more accurate product. If they aren’t liable, the harm to third parties won’t be priced in to this calculation; harm of $X to third parties does not mean that the product will be worth $X less in the market.
Notice that your space telescope example didn't nontrivially harm a third party. The kind of errors in question here do.
Major advancements that drive humanity forward, like AI or self-driving cars, are of astounding benefit to humanity, well beyond their market value.
If this analysis is our yardstick, lawyers suing and slowing that should be jailed because they cause deaths by slowing advancement.
Not all external factors are priced into products. The indirect benefits from the use of AI were estimated recently by Goldman Sachs to potentially add $7 trillion to world GDP over the next decade. Regardless of whether that is accurate or not: whatever huge benefit isn't going into the pockets of the AI companies. There are benefits as well as harms that aren't being priced in. Unfortunately that is sometimes the case in the real world.
The point is the providers have incentive to produce better products. In contrast, users don't have incentive to fact-check content under a legal scheme where the AI vendors are held solely responsible for the content rather than the users.
If users find that approach costly, that indirectly adds incentives for the AI vendors to do things better.
As a side benefit to society, teaching them to fact-check in general would be useful.
re: " If they aren’t liable, the harm to third parties won’t be priced in to this calculation;"
Once again: vast numbers of car accidents are allowed to happen each year, since manufacturers are allowed to sell cars that aren't perfectly safe to bystanders, to avoid having cars cost $1 billion apiece, or waiting decades for the tech to make them perfectly safe to bystanders, or whatever other unrealistic option there is. The harm isn't priced into their calculation, since they aren't held negligent for designs that aren't fully safe when fixing that wouldn't be realistic: it would cost too much or deprive users of functionality.
The world makes tradeoffs for products.
"Not all external factors are priced into products."
And yet some are, and imperfect laws are still applied. You might not like how liability or defamation law works, but if you want to change either, spouting denialist theories on a "libertarian site" is hardly the way to change it. If you think automobiles should have a different regulatory environment, you can lobby the federal government to change those rules. If you think defamation law should be rewritten, understand how existing law works and propose a cohesive modification to or replacement for it. In either case, you must first understand the current rules.
re: "If you think automobiles should have a different regulatory environment, you can lobby the federal government to change those rules. "
I don't think they should. It's you folks who are implying that somehow AI is guaranteed to be held liable, when cars aren't. I see no evidence that any of you folks making assertions without arguments have much if any grasp of liability law, or if you do, you seemingly lack the ability to make actual complete arguments rather than mere hand waving assertions that take for granted vast amounts of assumptions that aren't warranted.
For those used to the sort of full logical arguments required for software: it's like dealing with the South Park underpants gnomes, who are oblivious to the missing step in their argument and don't grasp that they need to provide it.
It's those claiming these companies are liable who need to provide a full, complete logical argument; those who dispute it merely need to point at the existence of the holes. But apparently the holes are too subtle for you folks to grasp.
Is someone funding this stuff, or does this guy genuinely believe this shit?
Chatbot query: Has RealityEngineer proved himself a helpful spokesman to advance public acceptance of an AI publishing industry?
I was referring to the original post, not to anything RealityEngineer has contributed.
The comment is probably equally applicable to RealityEngineer's output, too, though.
So... everyone has to click a TOS or EULA before using AI. But how does that protect the person being libeled? That person didn't agree to the terms.
Does this transfer full liability to the user of the AI? Will that person really be guilty of libel if they didn't know the AI was wrong?
In 1974, the inventors of the Altair 8800 advertised the first home computer kit based on the new Intel processor, showing a picture of a prototype they had come up with. Orders piled in. They used the money to figure out how to actually manufacture the thing in quantity, source suppliers, build a factory, and figure out a production line.
Do legislatures have no right to forbid this? If you want to sell something to the general public, as opposed to experts, you need to have worked out basic kinks and created something you have some confidence actually works.
Legislatures would be entitled to impose testing requirements on software, just as they do on airplanes, drugs, and more. Remember thalidomide? Remember the 737 Max? Companies don’t necessarily get off with a “sorry, it was just a first stab.”