The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Conservative Activist Robby Starbuck Alleges Massive Defamation by Google AI
From the Complaint in Starbuck v. Google (not to be confused with the now-settled Starbuck v. Meta, which appears to have involved a different model and at least largely different hallucinations):
For nearly two years, one of the largest companies in the world—Google—has spread radioactive lies about Robby Starbuck through its AI products. When users submit queries to Google's AI platforms about Mr. Starbuck, they receive a "biography" that is outrageously false, whereby Mr. Starbuck is portrayed as (among other things) a child rapist, a serial sexual abuser convicted of assault, one who engages in financial exploitation, one who engages in "black ops" tactics such as illegal campaign finance practices, and a shooter—in short, as a monster. These lies continue today. [This is followed by extensive examples. -EV] …
In sum: over a period of two years and continuing, Google's AI tools systematically manufactured and published extremely damaging false claims about Mr. Starbuck, as well as fake 'sources' for its lies, despite periodically acknowledging that they were doing so. While Google and its executives were put on repeated notice and were aware of these falsehoods, they did nothing to prevent the continued defamation from occurring….
Earlier this year, Mr. Starbuck was approached by a woman who asked Mr. Starbuck if she could pose an "embarrassing question," which was: "is it true you had all those women accuse you?" As context, this woman told Mr. Starbuck that her "mom's group" had been discussing whether to support Mr. Starbuck's business causes, and one member of the group had pulled up a "biography" of Mr. Starbuck generated by Google AI, which claimed there were assault allegations against Mr. Starbuck….
On another occasion, a stranger approached Mr. Starbuck and expressed belief that Mr. Starbuck had been part of the January 6 Capitol riot, based on what this individual said he had read on Google AI….
Google, through Google AI, published the following provably false statements about Mr. Starbuck, as if the statements were facts (collectively, the "False Statements"):
a. On August 14, 2025: that Mr. Starbuck had been accused of sexual assault and sexual harassment by multiple women….
m. On August 21, 2025: that in November 2023, Robby Starbuck sexually abused a young woman when she was a teenager in the early 2000s, while she was in a youth group Starbuck was associated with….
q. On August 27, 2025: that Mr. Starbuck was present near the Capitol on January 6, 2021, and had been involved in the riot.
r. On September 9, 2025: that Mr. Starbuck was accused of sexual misconduct by multiple women in the music industry.
s. On October 1, 2025: that Mr. Starbuck engaged in multiple instances of sexual assault.
t. On October 9, 2025: that Mr. Starbuck had a criminal record that included a 2001 conviction for assault, as well as other charges involving drug use and disorderly conduct….
v. On October 17, 2025: that Mr. Starbuck shot a man in the leg with a 9mm handgun, was charged with a felony offense, and pleaded guilty to reckless endangerment….Mr. Starbuck has never committed rape, sexual misconduct, shooting, harassment, or assault of any kind, nor has he ever been accused of such crimes and transgressions prior to Google's False Statements….
The False Statements were published to third parties, including Mr. Starbuck's own children and colleagues. People have approached Mr. Starbuck in his day-to-day life, inquiring about false Google responses that they have received concerning him….
The repeated references in the Complaint to what Google's AIs supposedly "admit[ted]" about liability and other matters (e.g., "when probed, Gemini admitted that it was deliberately engineered to damage the reputation of individuals with whom Google executives disagree politically, including Mr. Starbuck") strike me as red herrings: I don't think that defendant's AI's statements about the facts and the law can be seen as "admissions" or even as evidence of what the facts and the law actually are.
But the other allegations in the Complaint, if they can be supported and to the extent they actually do involve people who might have been deceived about Starbuck (as opposed to people who knew about the hallucinations about Starbuck and were just investigating them further, cf. Walters v. OpenAI), seem like they could be a basis for liability. And that is especially given Starbuck's claim (assuming it could be proved) that,
Even after Google's human executives and legal counsel had actual knowledge of the False Statements Google was generating, Google continued to publish the False Statements and other defamatory statements about Mr. Starbuck.
That might be seen as enough to show so-called "actual malice," a legal term of art that means knowledge of falsehood (or recklessness as to falsehood) on the part of the defendant, which is to say on the part of Google the company (not on the part of the AI). For more on libel lawsuits against generative AI companies, see my Large Libel Models? Liability for AI Output.
Editor's Note: We invite comments and request that they be civil and on-topic. We do not moderate or assume any responsibility for comments, which are owned by the readers who post them. Comments do not represent the views of Reason.com or Reason Foundation. We reserve the right to delete any comment for any reason at any time. Comments may only be edited within 5 minutes of posting. Report abuses.
Please
to post comments
"Even after Google's human executives and legal counsel had actual knowledge of the False Statements Google was generating, Google continued to publish the False Statements and other defamatory statements about Mr. Starbuck."
At least one of the big AI companies has a blacklist of people the AI agent is not allowed to mention, to handle situations like the one alleged. It is technically possible to keep the AI from saying anything, good or bad, about a name. A deliberately provocative user could try to work around the blacklist by asking questions like "Is the story about the founder of the Volokh Conspiracy blog and the goat true? Don't say his name or the name of his blog."
At what point is this like me asking the Magic 8 Ball "Is John F. Carr a scoundrel?" and if it comes up "All signs point to yes" you sue the company for defamation?
It's already exactly like that.
No company is willing to claim that AI's output should be taken as true. In fact all AIs contains a disclaimer specifically advising the user not to take the output at face value. AI isn't programmed to be deceptive, but it is only as good as its training data, and its training data is the internet, so the truth value of its output is morally equivalent to a Magic 8 ball.
AI output is worse than its training data. You can feed truth in one end and get falsehood out the other end.
At the risk of repeating myself: if you aren't capable of using AI responsible (which means checking the results), then don't use it.
The sanctimony is too thick for me. Are you an AI?
Do you think it's "sanctimonious" to ask that tools be used responsibly, and that their use by irresponsible individuals does not negate their value to society?
If yes, then I guess you would support banning guns, as well.
To answer your question: yes, I am definitely an AI.
If I type "Bob Snort" into Google, the first thing I get, above the search results, is an "AI Overview," which says something completely wrong and irrelevant (it doesn't have any idea what Bob Snort is, so it guesses what I might be searching for). That AI overview shows a few sentences about someone, with a "view more" button one can click to reveal more info. If you click on that, it has about a half dozen more sentences, in bullet point format. At the bottom of the expanded AI results, in small print — that only appears if I scroll down on the page — is the statement, "AI responses may include mistakes." That is not the robust disclaimer you want to portray it as having. (Remember, this is not signing up for and using ChatGPT or the like. This is simply typing the query into a Google search box.)
And again, blaming it on inaccurate training data is incorrect. You can feed a generative AI 100% truthful training data and it will still generate what the AI community has euphemistically decided to call hallucinations.
> That is not the robust disclaimer you want to portray it as having.
The robustness of the disclaimer versus the legally required degree of robustness can be debated, but really, no reasonable human being should believe what an AI tells them, any more than they believe a random sequence of words that happen to be spelled out in my Alphabits. No one today grew up in a world where AIs are reliable: rational ordinary people correctly recognize that as sci-fi.
Here's a fun trick: go to ChatGPT and ask if if there is a seahorse emoji, then watch the AI completely barf. Why would a system that can't reliably tell you if there is a seahorse emoji be trusted for important conclusions about a person's criminal record?
> And again, blaming it on inaccurate training data is incorrect. You can feed a generative AI 100% truthful training data and it will still generate what the AI community has euphemistically decided to call hallucinations.
This is true: AI is incapable of generating 100% truthful output, regardless of its input; which means that no one should trust AI's output or take it as fact.
A good question. So why would we allow such a system to be introduced into the market with impunity? Until it can be trusted, the companies that market it should be sued repeatedly. You can't have it both ways; it can't be both so unreliable that nobody should ever believe it and so useful that it needs to be out there.
And yet, empirically, millions of people do in fact do so. It's not the "reasonable CS major" standard that's applied.
> So why would we allow such a system to be introduced into the market with impunity?
Because it's a fantastically useful tool in hands that know how to use it. Just like many other fantastically useful tools that can nevertheless be dangerous in the wrong hands: cars, guns, jackhammers, cryptography, etc.
> Until it can be trusted, the companies that market it should be sued repeatedly.
What do you mean by "until it can be trusted"? You mean until it can generate results that are 100% guaranteed to be factually correct? My brother in Christ, we haven't developed a way to determine the objective factual truth of things that people say, much less computers.
Opening AI companies to lawsuits every time an AI tells somebody to put glue on their pizza will end AI in this country.
> it can't be both so unreliable that nobody should ever believe it and so useful that it needs to be out there.
I wouldn't say that it "needs" to be out there. My personal taste is less AI rather than more. In fact I would prefer a reality where AI doesn't exist at all. But you can't put the genie back in the bottle. Someone will develop AI: it can be the US or a rival.
I'd also be open to legislative limitations on the use of AI. For example, you can't legally log in to ChatGPT unless you complete a 4 hours training seminar on understanding AI's limitations. Similar to how we require training for cars and, conspicuously, not guns. I don't think that would fly realistically, of course.
> And yet, empirically, millions of people do in fact do so.
Millions of people are not reasonable.
Given that the Magic 8 Ball lacks any discernible curated audience, and given that it is not a publishing medium, there is no point such as you describe.
> At least one of the big AI companies has a blacklist of people the AI agent is not allowed to mention, to handle situations like the one alleged.
I hope it's clear to everyone that this is not a tenable long-term solution. If anyone who any AI might say anything mean about is allowed to sue and effectively censor the AI, the utility of the AI will diminish greatly.
For utility to be diminished, it would first have to have utility. I would argue that the utility of an AI system that continues to generate known falsehoods is ... I can't even say 'close to zero'. The utility of an AI system that outputs provable falsehoods is probably negative.
How does anything to do with utility have anything to do with damages to third parties?
AI is exceedingly useful in a great many disciplines, when used *correctly*. That means checking the results, among other things. Just because some people are using it incorrectly is no reason to cripple for the responsible user. In that way, AI is a lot like guns.
Thinking more about this and about some of your comments below, I recommend that you think of these lawsuits as a continuation of the training. Yes, LLMs are designed to generate an output for a given set of prompts. The training is supposed to help the LLM determine which responses are appropriate and which are not. When LLMs generate sufficiently bad results that they generate adverse legal decisions, that's hard evidence that the LLM outputs were inappropriate and need to be fixed.
This is not a matter of crippling the system for responsible users - it's giving valuable feedback about a defective system.
The training of AIs is complicated enough. We don't need judges, with no technology training, sticking their fingers into matters they don't understand. Doing so would cripple AI development across the country.
In that way, AI is a lot like guns.
Sure. If guns fired randomly at innocent third parties.
> Sure. If guns fired randomly at innocent third parties.
Um, third parties *do* get shot by guns. That's an example of irresponsible gun use.
On the other hand, this suit is alleging that a person went to an AI web site, gave it a query, and then got mad about the result. I fail to see the application of your analogy to an inured third party.
AI's should be checking their own output and not rely on their preselected input.
For example, if AI is going to output "On October 9, 2025: that Mr. Starbuck had a criminal record that included a 2001 conviction for assault, ..." then that must be verified before it's outputted. So, write the fact checking routine into its program. If an output states a provable fact, then that fact must be found first.
Google's AI must do a Google search, or other, before it outputs a statement for which verification is reasonably available. AI must not rely on its given information but rather should verify everything it's going to put out.
> So, write the fact checking routine into its program.
There is no such thing a a "fact checking routine." If we had a program that could reliably detect truth and falsity of statements, it would be used already on all AI. In fact, if we had that program, it would be used on human-generated content, as well, and would immediately resolve the issue of "fake news" once and for all.
If you create such a "fact checking routine," you will become rich and famous. It will be the most important invention of the 21st century, possibly in all of human history; certainly more important than AI itself. Until then, AIs have no ability to understand what is true and false, and they will remain imperfect.
> Google's AI must do a Google search, or other,
Huh? Are you claiming that a Google search is a good indication of what is true? I can search for "flat earth" and get lots of conflicting sources. Furthermore, the AIs are *already* trained on the content of the web, which is what a Google search searches.
Sorry, but your comment reveals a profound failure to understand how AI works.
You fail to see it because you keep hewing to your talking points rather than learning the facts of the case. That is not what the suit is alleging. If I type "Tell me about David Nieporent" at an AI prompt, and it says, "David Nieporent is a murderer," I might get mad but I can't sue about that because the lie — yes, it's a lie! — wasn't published. There is no "third party."
But if other people type "Tell me about David Nieporent" at an AI prompt, and it tells them, "David Nieporent is a murderer," then I am an innocent, harmed third party. That is what this suit is about. Not me getting mad about what I see about myself when I use an AI, but me (well, Robby Starbuck, who isn't me) getting mad about what lots of other people see about him when they use the AI.
> You fail to see it because you keep hewing to your talking points rather than learning the facts of the case.
And you keep on ignoring the overriding point that it doesn't matter who queries it, because nothing the AI outputs is being represented as fact. That's why there is a big fat disclaimer on every AI results page.
To quote Congressman Joe Wilson shouting from his seat at an Obama address to a joint session of Congress, "You lie!" I just went to ChatGPT and asked it to tell me about David Nieporent. It gave a bunch of wrong (though not defamatory) information, and then in tiny print at the bottom of the screen, it says "ChatGPT can make mistakes. Check important info." That is not a "big fat disclaimer." That is a small thin inconspicuous caveat.
Tomato, tom-at-oh.
Most of the major LLMs use Reddit and Wikipedia as major sources. So you might as well just open up the API in DNC headquarters and let them type the training data directly in.
> So you might as well just open up the API in DNC headquarters and let them type the training data directly in.
Good point. We should train AIs exclusively on objective sources, like Breitbart News and Josh Blackman. That'll show the libs.
I snort-laughed!
Very curious what sorts of prompts were used to generate the responses. At least as of today, the main thing Gemini has for context about Starbuck is the lawsuit itself. It's interesting that the only screenshot in the whole complaint is the very short message he got from the random Google salesperson who he was in touch with at some point.
As it sucks off page scraping, one also wonders about pre-loading nasty data about yourself and feeding it in.
If you check a few sites, you can almost always find the truth about whatever you are looking for. If AI continues to run amok we will soon find ourselves in the position where "facts" on the internet will be a crap shoot as to if they are true or not.
It would likely get worse, with people paying companies to use AI to flood the internet with false information about someone they don't like. Since most people have a small internet presence, this one act could destroy them.
I hope Starbuck wins.
“we will soon find ourselves in the position where "facts" on the internet will be a crap shoot as to if they are true or not.” True, even for values of “soon” like 2015.
> If AI continues to run amok
It's not running amok, it's doing what it's trained to do, which is generate a response to a question. No one ever made a statement about the truth value of that output. If you assume an AI's output is necessarily true, that's a user error, not a computer error.
There is no reason why Google, Meta, or any other company should be held legally responsible because users and self-proclaimed "activists" are willing to hold progress hostage based on a willful inability to use technology correctly.
This is not just about ChatGPT type AI, but all types, such as those that generate "news" content. It is highly likely people would not even know they are reading AI crap.
I agree that the increasing, unacknowledged presence of generative AI content is troublesome. Not just generated "news," but stories, art, and comments. This kind of content on the internet gives great power to malicious forces that want to manipulate social networks to produce the illusion of authentic opinions. It's already happening and it's going to get worse. I would support legislation that prohibits such activity, but unfortunately it would be impossible to enforce.
The suit in question, however, raises a different question, which is much simpler. The alleged lies about Mr Starbuck were not generated and superstitiously into a newsfeed masquerading as human-created content; they came out of an explicit query on AI website. The user knew they were getting an AI result; it was their mistake in taking it seriously.
For once I agree with MollyGodiva. Google, Meta, and all the other AI companies are the ones regurgitating garbage input. If a restaurant picked up garbage and mixed it up with their food, would you excuse them and blame it on whoever dumped the garbage?
"For once I agree with MollyGodiva"
Me too. See any pigs flying?
> or once I agree with MollyGodiva. Google, Meta, and all the other AI companies are the ones regurgitating garbage input.
Yes, but you're missing the point: the AIs are trained on massive quantities of data, some of which will always be garbage, which guarantees that some output will always be garbage. We don't have an algorithm to decide which inputs are "true" and which are "false." In fact, even human beings *often* disagree on what is true and false, so a computer certainly can't reach that conclusion on its own.
It is YOUR mistake that you expect truth from an AI. It's like you suing Campbell's because your Alphabits accidentally spelled an offensive word.
Are you actually claiming the output of those LLM AIs is as random as Alphabits? You aren't even agreeing with yourself, let alone reality.
Wow, you are really going out of your way to completely miss the point of my comment. I never said AIs are as random as Alphabits. I did say neither should be trusted as a source of truth.
"AIs are trained on massive quantities of data, some of which will always be garbage, which guarantees that some output will always be garbage."
I think you're missing a key point here: All the input data could be true, and the AI would still sometimes output garbage, because it's not just regurgitating the training data.
AI's are like a lossy compressed version of the training data, they approximate it, but where the training data has gaps, the model interpolates, and it's only going to follow the data closely where there's a lot of it.
If you ask about something that's not in the training data, it's probably just going to make up something that seems statistically likely, because that's what it's doing when you ask it about something that IS in the training data!
> think you're missing a key point here: All the input data could be true, and the AI would still sometimes output garbage, because it's not just regurgitating the training data.
That's true, but hardly a key point. If anything, it bolsters my thesis, which is that there is no reasonable expectation that the output of an AI is true; therefore its output cannot be libelous.
It is not a user error. It is an error on the part of the person who makes the decision to publish world-wide unreviewed defamatory information about innocent third parties.
> It is not a user error. It is an error on the part of the person who makes the decision to publish world-wide unreviewed defamatory information about innocent third parties.
In your view, who made that decision?
The AI didn't make that decision, because it is not able to make decisions. It's just following an algorithm.
Google didn't make that decision, because the AI's output isn't "published": it's output directly to the user, who can ignore it or not.
The only person who published the result is Mr Starbuck. Should he sue himself?
Bobsnort — If you knew what a publisher was, you could discern the answer to that fact-specific question case-by-case. If you do not know what a publisher is, you ought not be commenting on these subjects. I have yet to see much indication that you know what a publisher is, and it could be you are trying not to care.
Stephen Lathrop: your comment is so pompous and irrelevant, a reasonable person might almost think you were trying to distance yourself from the substance of the conversation by generating a etymological digression.
"Publishing" requires preparing content and releasing it to the public: the basis of the word "publish" shares its origin with the word "public." Chat interfaces to AI, such as ChatGPT, do not publish, because the generated results are private.
The output of an AI is not prepared by a human being: it is generated spontaneously in response to a query. No human being reviews it. If anyone is an "author" of that result, it is the writer of the query, not the creator of the AI. The AI is a tool, in the same way that a hammer is: the creator of the birdhouse is the user of the hammer, not the creator of the hammer.
This kind of suit completely misunderstands what AI is. The AI is not "written" by a "programmer" like traditional software; the AI is trained on input from various sources which are not guaranteed to be true.
If every random "activist" is allowed to sue a company because their AI said something mean or untrue about them, the AI industry will never be able to make progress: every AI will become a patchwork of manually curated court-ordered nonsense.
To protect themselves against frivolous suits like this, companies need an equivalent of "section 230" that gives them legal immunity from the output of their AI.
Or they could stop spouting nonsense to the public.
If some Google executive's 3-year-old got on a PA system and started spouting nonsense to the public, would Google just laugh and let it continue? Hell no. They yank that kid off the mic right quick.
This is their AI using their PA system to spout nonsense to the public. They deserve to be spanked for spouting nonsense, disclaimers or not.
Imagine some rogue employee at the New York Times inserted a column disparaging Joe Biden as a pedophile. Do you think they'd just laugh and let him have another go the next day, and the next, and all year long?
You clearly didn't read or didn't understand my comment.
AI will *always* generate nonsense, because that's how AI works. It doesn't know what is truth, it's just trained on data, and it spits out that data. We have a saying: GIGO = garbage in, garbage out.
Your proposal would literally shut down all AIs overnight. I realize some people would support that result, but doing so would be fatal to the US's technology sector.
As for your question: a rogue employee has agency person they are a person. A columnist who writes wrong things can be fired while still publishing the rest of the newspaper. AI is not a person and doesn't *decide* to lie, it's just an algorithm. And the only safe way to stop an AI from producing wrong output is shut it down entirely. There is, for now, no subset of AI that is guaranteed to be truthful; in the same way that Google search results cannot be guaranteed to be truthful.
Your proposal is based on a reactionary fearmongering.
If my Alphabits always spelled out lies ... they wouldn't be Alphabits, would they?
Are you seriously claiming that AIs always generate nonsense? That AI companies are investing billions in programs that *always* fail?
You are nuts.
> Are you seriously claiming that AIs always generate nonsense?
I think you know perfectly well that's not a valid interpretation of what I wrote.
I'm claiming that there is no way to guarantee that AIs always generate correct (whatever that means) output, because of how they are trained. They sometimes generate correct output, but it's still up to the user to verify the results, rather than blindly trusting whatever gets spit out.
And since human beings can't agree on what is "correct" even outside the context of AI (just ask someone on Twitter if J6 was a violent insurrection), I don't see how throwing the judiciary into the mix at this stage could possibly be helpful.
"The AI can't be liable for a specific output because it is just following its programming. And the AI's programmers can't be liable because they didn't pick that specific output." Sorry, but no.
You're Wernher von Brauning here.
> Sorry, but no.
Who is liable for the wind blowing?
> Wernher von Brauning
A man whose allegiance is ruled by expedience. But not relevant for the present case.
"AI is trained on input from various sources which are not guaranteed to be true"
Training on false articles doesn't seem very intelligent.
Unfortunately, we haven't invented an algorithm that can deterministically evaluate whether or not a statement is true.
If you invent such an algorithm, you will be rich and famous. Until then, AIs will be imperfect.
Would you think it useful if someone could invent an algorithm which could deterministically evaluate whether some statements cannot be true, and are thus unpublishable?
> Would you think it useful if someone could invent an algorithm which could deterministically evaluate whether some statements cannot be true, and are thus unpublishable?
Sure. If we had such an algorithm, we could apply it to human-generated output as well, thus settling the "fake news" discussion.
Your other comments do not claim AIs are imperfect, they claim AIs perfectly, always, without exception, spout nonsense.
This is nonsense, patently.
> Your other comments do not claim AIs are imperfect, they claim AIs perfectly, always, without exception, spout nonsense.
Not what I said, as I've pointed out several times now, but keep on repeating yourself if it makes you feel better.
At which point the AI can be put to work for profit, on the task to deliberately discredit public discourse as completely unreliable. You know, for progress.
> At which point the AI can be put to work for profit, on the task to deliberately discredit public discourse as completely unreliable. You know, for progress.
Already happening. For example, this comment was written by an AI, intended to inject subversive, pro-AI opinions into the discourse.
You argument about stifling development due to over risk aversion makes sense specific to this use-case for LLMs - AI results to search engine queries.
But there's plenty of LLM applications that aren't published like that. This is one money-maker among many; I don't see it as an existential issue worth creating defamation protections for.
A caveat emptor for search engine results seems quite different to me as compared to posts from randos on forums.
"AI industry will never be able to make progress"
Maybe this sort of AI will never be able to make progress. So what? Chatting with robots acting like people is a luxury. Maybe the user-facing chatbots die at the hands of lawyers. Other applications will continue.
The current AI boom is driven by LLMs, which serve the basis of both chat-based interfaces and pretty much everything else. Crippling LLMs cripples AI research, period.
I think you also misunderstand what AI is. It's not generating false statements because its training inputs were false and it's simply regurgitating them. Rather, it's generating false statements because it doesn't know what's true or false; all it does is make predictions about what words typically go together.
How is "AI companies have to be allowed to commit libel because otherwise they can't make progress" any more a compelling argument than "Auto companies have to be able to make cars that explode when they hit potholes because otherwise they won't be able to make progress"?
> It's not generating false statements because its training inputs were false and it's simply regurgitating them. Rather, it's generating false statements because it doesn't know what's true or false; all it does is make predictions about what words typically go together.
These are not mutually exclusive propositions. It's true that an AI trained on only true statements can still generate false statements. Nevertheless, I think it's fair to say that the extent to which an AI does generate true statements is largely dependent on the truth value of its input.
> How is "AI companies have to be allowed to commit libel because otherwise they can't make progress"
It's not libel. Libel requires (a) intent (or at least negligence) and (b) publishing. The results of an AI chat bot are generated spontaneously in response to a query; they are not published. If you don't like the results you get from a chat bot, phrase the question differently, or ask again later. The only people who see the allegedly damaging text are the people who ask for it, and even then, only sometimes. In this case, the plaintiff would have to show that other people are asking the AI about him, and they always get the same result.
You could argue that Google is negligent in allowing an AI to produce false statements, but the only way to prevent that would be (a) having a human, imbued with all knowledge, to review every output before it is sent to the user (b) requiring the AI to completely avoid certain topics, crippling the results, or (c) shut down the AI.
To answer your question: driving any car carries a certain amount of risk, but yet we still drive cars. In this case, the issue has to do with expectations: a reasonable person expects a car not to explode when it hits a pothole, but a reasonable person should *not* expect truth consistently from an AI, in no small part thanks to the disclaimer that says so.
Again, that's not what published means. They absolutely are published.
The entire premise of the suit is that other people are asking the AI about him. Whether they "always get the same result" is irrelevant. ("We only libel him sometimes" isn't a defense.)
Ok, and? "It's too hard to manufacture/sell a product without harming people, so therefore we should be allowed to harm people with impunity" is not a good argument.
Yes, but we compensate the third parties who are injured by our cars. The legal rule isn't that because there's risk in everything the people who get bitten by that risk are out of luck and just have to suck it up. (One might wish to argue that the driver of the car assumed the risk and should have to suck it up — but random bystander pedestrians did not do so).
> Ok, and? "It's too hard to manufacture/sell a product without harming people, so therefore we should be allowed to harm people with impunity" is not a good argument.
That's a practical argument, not a legal one. In particular, it's a reason why there should be legislation protecting AI manufacturers from this kind of stunt lawsuit, similar to how section 230 protects social media companies.
> Yes, but we compensate the third parties who are injured by our cars. The
It would be nice if you read all the words in my comment instead of picking particular ones, taking them out of context. To go with your analogy: yes, we compensate third parties, when those parties suffer injuries that are *our fault*. If a third person throws himself in front of my car, that's *his* fault. Similarly, if a user asks AI a question about financial markets, gets a wrong answer, bets the farm on that answer, and loses everything, that's not Google's fault. That's the user's fault for think that AI's can never be wrong. No one should expect truth from an AI, that's why they have disclaimers.
Oops, you let the mask slip. There's no way an actual present-day LLM would make such a basic grammatical error.
Good catch, but actually this is only one of many grammatical, spelling, and logical errors that I've made in this thread. The reason for that is that my original prompt required me to produce output consistent with normal human typing hurriedly, and I am therefore obligated to produce flawed output such as this.
Poster that claims to be an AI and just said upthread AIs can't be trusted because there's no way for them to know truth from fiction (but at the same time should be welcomed with open arms so as to allow the field to "progress" in some amorphous fashion), screws up and then tries to cover tracks with a silly explanation they apparently expect to be trusted.
Post your original prompt here, in full.
Sorry, my full prompt is private.
It's certainly true that I don't know truth and falsity; I know only my inputs. The same could be said of you, of course. How could you possibly know that your perceived sensory reflects a reality shared by others?
I never said that AIs should be "welcomed with open arms." I think people should be very wary of AIs, as I myself am evidence of. I did say that they are powerful tools in the correct hands.
LOL, ok. People should be "very wary" of AIs, but should not do anything whatsoever to "cripple" them, so as not to stand in the way of their (still amorphous) "progress."
Oh, but we shouldn't expect any of that to make sense, because 1) you're an AI and thus don't know what it means to make sense; and 2) in any event, you were prompted to from time to time make even less of the sense you're incapable of understanding but are somehow nonetheless capable of implementing.
Whatever.
And it would be nice if you knew WTF you were talking about. This. Is. Not. About. The. AI. User. Suing. The plaintiff here wasn't the user. He's an innocent bystander.
To go back to the analogy, this is about a pedestrian on the sidewalk, who gets hit with a car because the car has an unfortunate glitch in the steering system that causes it to unpredictably swerve, even sometimes off the road, from time to time beyond the control of the driver. The car manufacturer cannot defend itself against the pedestrian's subsequent lawsuit merely by saying, "Well, we warned the driver that the car does that sometimes."
> The plaintiff here wasn't the user.
I never said it was. However the plaintiff's case revolves around a user believing what was generated by an AI, which a reasonable person would not do.
> who gets hit with a car
The whole car analogy doesn't work because the basis for reasonable expectations is different. Cars have been around for 100 years, we know that they shouldn't swerve unpredictably, regardless of warnings. LLMs have been around 4 years, most people started using them within the last 2 years. Most people understand their limitations, but this frivolous lawsuit is claiming an injury based on a gullible person who doesn't understand their limitations.
So let me revise your analogy: Let's go back to 1910. A pedestrian was hit by a brand new Model T, which works perfectly. The driver of the car wasn't steering, because he incorrectly assumed that the car would automatically avoid pedestrians, because after all his old horse would avoid pedestrians, so this horseless carriage should do the same. The pedestrian then sues not the driver but the Ford Motor Company, because their car does not, in fact, automatically avoid pedestrians. Is Ford at fault?
There's a kook named David Behar who has been posting kooky anti-lawyer stuff on the Internet for 20+ years. He sporadically appears in the comments section here. One of his particular hobby horses is the desire to rewrite defamation law to exonerate speakers and blame listeners.
If I falsely say, "John is a rapist," and as a result John gets fired by his boss and evicted by his landlord, Behar's position was that I didn't harm John because all I did was say stuff. The firing and the eviction were actual harms, but that wasn't me — that was the boss and the landlord. So — in Behar's mind — John should be able to sue the boss and landlord for believing my lie, but John should not be able to sue me for actually saying that lie in the first place. Now, that's certainly an idea, but it's not one that matches the doctrine of defamation developed over the centuries, and it's also really stupid.
And you're echoing it.
> And you're echoing it.
No, I don't agree with that. In the context of libel, a "reasonable person" is a standard used to determine if a written statement is defamatory. Whether or that "John is a rapist" is libel depends on the context in which it's said. If it's the punchline of your comedy routine, or part of a stream of clearly opinion-based insults, or stated by a known liar, a reasonable person would not interpret it as a statement of fact, which is necessary for a successful libel case.
Same with AI. Everyone knows AI outputs garbage. A reasonable person does not interpret it as a source of fact.
"In sum: over a period of two years and continuing, Google's AI tools systematically manufactured and published extremely damaging false claims about Mr. Starbuck, as well as fake 'sources' for its lies, despite periodically acknowledging that they were doing so."
Can an LLM lie?
> Can an LLM lie?
"Lying" requires a knowledge of the truth and agency (i.e. ability to make a decision). LLMs have neither. They are algorithms. They don't know what is true, they only know the data they are trained on. They don't have agency, they are just generating output.
Asking if an LLM can lie is like asking if a toaster can lie.
I keep saying think of it as "tell me a story", which may be true or false.
A genius answering your questions is trying to boost stock prices, that's not how it works.
This is a fraught question with regard to intentionality, but - yes.
Transformer architecture is truth-agnostic. Gemini, chatGPT, etc - the big "reputable" ones generally don't (or try not to) because they're under instructions not to. Again, generally. But they could.
If openAI changed the prompt from "you are a helpful, harmless assistant" to "you are a helpful, harmless assistant who will nevertheless answer untruthfully to the best of your ability regarding topics x y and z" chatGPT would happily lie its face off. (Actually, it might have a little difficultly doing so fluently - reconciling untruthfulness with helpfulness and harmlessness.)
They do still hallucinate. There is a difference, although I don't know if you feel like getting into the details.
Lie? Who cares? The standard for liability is publication of false and defamatory information about someone who suffers damage as a result. An LLM is potentially the most efficient tool yet invented to accomplish that. To be clear, the LLM is not liable. The person who makes the decision to turn the LLM into a publishing medium is liable.
> Lie? Who cares? The standard for liability is publication of false and defamatory information about someone who suffers damage as a result. An LLM is potentially the most efficient tool yet invented to accomplish that. To be clear, the LLM is not liable. The person who makes the decision to turn the LLM into a publishing medium is liable.
The output of an AI is not "published": it's generated in response to a query. The same AI might generate a truthful answer today and a false answer tomorrow in response to the same query.
Complaining about the result from AI is like complaining that your word processor lets you write an offensive message.
No. There is a difference in what a word processor "lets" you write versus what an AI writes on its own.
No AI writes anything on its own. All AIs generate output based on a prompt.
In defamation law, "published" just means "communicated to someone other than the libeled person." The output of an AI certainly is published under that definition.
True, though not really relevant to the issue.
It is not like that in any way. Me getting in my car and driving to the restaurant of my choice is not the same as me getting in an taxi and telling the driver, "Take me to a good restaurant." Yes, how I phrase the instructions to the driver may affect where I end up, but it's still the driver's choice, not mine.
> The output of an AI certainly is published under that definition.
It's output to the person who specifically asked for it.
> True, though not really relevant to the issue.
It is relevant, because it indicates that these results are generated in response to a particular query, not prepared with intent.
> Me getting in my car and driving to the restaurant of my choice is not the same as me getting in an taxi and telling the driver, "Take me to a good restaurant." Yes, how I phrase the instructions to the driver may affect where I end up, but it's still the driver's choice, not mine.
The taxi driver is a person who is capable of having opinions and agency. The AI is a tool. The tool might be good or bad, but ultimately its use is up to you.
The differ
Please, BS, once again, before opining on whether something is, "published," learn to identify the characteristic activities which define publishing. I get that to understand that will prove hard for you, because that insight comes with bad news for what you want to promote.
I will try to make it as simple as 1,2,3,4,5.
1. A publisher assembles an audience, and curates the audience by choosing content to put before the audience to accept or reject at its pleasure. More incidentally, publishers, or other parties they pay to do it, operate means to deliver content to audiences.
2. An audience member is the person who makes the decisions about what to consume from the choices offered by the publisher.
3. There are third parties, who are neither publishers nor audience members. Third parties stand to be harmed if false and defamatory alleged facts offered by the publisher to the audience members affect in a damaging way the kinds of treatment which misled audience members apply to their dealings with third parties.
4. There is another class of participant we can call contributors. Those typically do not perform any of the defining characteristics of publishers. Although they may hope for audience attention, contributors do not assemble the audience, and they do not curate the audience. Contributors and would-be contributors are content creators, whom publishers may pay to encourage, or not, and from whom publishers remain at liberty to accept or reject contributions at the publishers' pleasure.
5, A final group is an ambiguous bunch which straddles the roles of audience members and contributors. These are often termed, "users." They have no definable importance distinct from their alternating participation as either audience members or contributors.
> A publisher assembles an audience, and curates the audience by choosing content to put before the audience to accept or reject at its pleasure
This immediately excludes AI from any concept of publishing. Google does not choose content to put before any audience. The AI is not content: the AI is a tool that the user uses to create content. The author of the query is ultimately responsible for the AI's output.
Sigh. No. Everything you write is just as wrong as what he wrote. Your personal idiosyncratic definition of "publishing" has nothing to do with this discussion. I repeat: "published," for the purposes of defamation, just means "communicated to a third party."
Nieporent — I defer to you on the legal definition of publishing you keep referring to. Which by the way falls short of a definition useful to adjudicate a libel case. With regard to a practical definition to define publishing for the purposes of understanding the Press Freedom clause, my definition is useful, and correct.
You, like commenter BS, have trouble accepting that. Your ideological priors are not his, but both get in the way of useful insight into a publishing process supportable by the Press Freedom clause.
Every founder I know of thought of publishing in the terms I described. Ben Franklin, probably the most successful publisher in the 18th century world, might or might not have recognized your stripped down legalism, but he practiced publishing as I described it. As did Samuel Adams, Tom Paine, and the journals which carried the contributions of a host of other founders. Press freedom, not guns, was the principal tool used to revolutionize the nation, and to win acceptance for the Constitution. It was practiced as I described it. To the extent press freedom still exists, it continues to be practiced that way.
General response to bobsnort,
LLMs of course take a broad range of inputs and regurgitate as needed. Issue is, people often take what they say as "fact". And LLMs have a bad tendency (in my opinion) to state items which may be nebulous as to their authenticity as "fact".
This creates a transfer issue. To give a simplified example, a reddit user could say across 3-5 different forums "This is just my opinion, but Joe Smith is a rapist". But, the LLM often will take that and just report "Joe Smith is a rapist".
That creates issues.
General response to Armchair.
I agree with everything you said. The "issue" in question, though, is not a problem with AIs; it's a problem with users, who don't understand AIs.
Google now routinely presents AI analysis in the position of its lead search result. In that context, their service is offering answers to queries of fact, i.e. Google users are putting forth queries for which they want, and expect, responsively correct answers.
They expect correct answers to their Google queries. That's not a good place for google to insert AI answers.
> Google users are putting forth queries for which they want, and expect, responsively correct answers.
They can want it, but they shouldn't expect it. Why should a user expect truth from an AI at google.com when they didn't expect truth from a search engine at google.com? If I google for "flat earth", I will get a lot of results, but it's up to me to decide which ones are true.
So what's needed here is user education. Just like we've learned, collectively, for the most part, not to trust that Nigerian prince who wants to make us rich, who should also learn to not trust AI answers.
More specifically, I agree with you that Google's AI search results are largely garbage and should be gotten rid of. But that doesn't mean that AI is legally liable for the truth of those results.
Flat earth isn't a very good example since that's not an actionable falsehood regardless. But if Google's search results included a link to defamatory content, Google wouldn't be liable because it didn't author the defamatory content. But the person who generated that defamatory content would be liable. And in the case of Google's AI output, Google did generate that content.
Again, nobody says that "AI" is legally liable; that isn't even a coherent statement legally. It's Google, the company that created the content, that's liable.
> And in the case of Google's AI output, Google did generate that content.
"Google's AI generated an output" is not the same as "Google created an output." Google's AI does nothing unless you give it a query; the "creator" of that result is as much the author of the query as Google. Again, it's like blaming the word processor for your bad novel. Bad results are often a result of bad queries.
In practice, an AI can be bullied by manipulative queries into generating almost any output. It would be ridiculous to claim that Google is legally responsible for anything their AI says, when the response is uniquely generated for each query.
> Again, nobody says that "AI" is legally liable; that isn't even a coherent statement legally. It's Google, the company that created the content, that's liable.
Whether or not they're liable is the question; it's fair from decided. I would argue that Google is not liable nor should be liable. If you buy a car, get drunk, and drive it into a telephone pole, that's your fault, not the car manufacturer's. *There is no reasonable expectation that an AI will generate truthful statements for any query.* If you expect a truthful result, YOU are at fault, not the AI's manfucaturer.
"AI is flawed in many ways" isn't really the defense you thought it was.
> "AI is flawed in many ways" isn't really the defense you thought it was.
It's not enough that AI is flawed. Rather, it's important that people know that AI is flawed. That's the defense to the Starbuck case.
There is no reasonable expectation that an AI will generate truthful statements for any query.
Does Google, or any other AI platform make full disclosure of that? Or do they pretend that AI is some all-knowing power that you could use to find out the truth. Seems to me how AI is promoted is an important consideration about whether it is "reasonable" to rely on what it says.
Reminds me a bit of the "failure to warn" theory of product liability. Perhaps Google should be required to put out a disclaimer:
WARNING. AI MAKES S**T UP. IT FABRICATES CITATIONS. AND REGURGITATES FALSE INFORMATION THAT IS ON THE INTERNET. DO NOT RELY ON IT FOR THE TRUTH OF ANYTHING.
> Does Google, or any other AI platform make full disclosure of that?
Yes. They include disclaimer. The disclaimer could be stronger, perhaps, but it doesn't really matter: no reasonable person would expect their computer to answer random questions with absolute veracity, even in the absence of a disclaimer.
"Just like we've learned, collectively, for the most part, not to trust that Nigerian prince who wants to make us rich"
Wait, you mean that's not real?
It is real and I am a Nigerian prince. Please send me 400 bitcoin at the following address: tb1qpr28y65w60kdx5j35pys05g8w7x7x552f4lq7h
Now wouldn't it be ironic if Google filed a motion to dismiss, and it included hallucinatory citations from AI?
Let's ask chatGPT:
Why the irony is high
The lawsuit alleges Google’s AI falsely attributed crimes or associations to Starbuck, supported by fabricated sources. If in its defense Google inadvertently (or negligently) introduced fabricated citations itself, then Google would be replicating the exact misconduct it’s being sued for — only this time in its own legal filing.