Can You Sue Over Assurances Made by Company's Customer Service AI Chatbot?
Maybe, but not in this particular case, a federal court rules.
From Magistrate Judge Alex Tse's opinion today in Smith v. Substack, Inc. (N.D. Cal.):
Prior to the filing of this case, there was a series of interactions between Smith and the Doe defendant. Smith alleges that this unknown defendant posted unflattering statements about Smith on Cancel Watch, a blog site hosted by Substack. Smith initiated contact with Substack, including twenty to thirty "complaints and queries by email" between July and September 2023, all to no avail. Substack did not respond to any of Smith's emails regarding Cancel Watch.
In February, March, April, and May of 2024, Smith asked a series of questions to a chatbot found in the support section of Substack's website. Smith asked, "do you respond to complaints," and the chatbot responded, "Yes, we respond to all complaints." He also asked, "do you respond to every complaint," and "do you always [all of the time], respond to complaints?," to which the chatbot responded with the same answer or a very similar one. Smith then asked, "does Substack respond to emails?" and "Will you certainly respond to emails?," and the chatbot said, "Yes, Substack responds to emails" in response to both inquiries. Smith alleges that the answers from the chatbot are the same for "queries," and that Substack says it will respond to reports. However, regardless of its chatbot's replies, Substack itself never did respond to Smith's inquiries, or to his follow-up inquiries asking why the company was ignoring him….
Smith sued Substack under a promissory estoppel theory, which is related to breach of contract. No, said the court:
The chatbot's responses to Smith, however, are not sufficiently definite to give rise to a claim of promissory estoppel. The chatbot said that Substack would respond to complaints, emails, and queries. However, the chatbot did not say anything about how Substack would respond, or when. Without those essential terms, the Court cannot discern when Substack is in breach of its obligations. See White v. J.P. Morgan Chase, Inc. (E.D. Cal. 2016), aff'd (9th Cir. 2017) (finding that plaintiffs failed to allege promissory estoppel when the promises at issue "were fatally uncertain because they contained no essential terms"). Theoretically, Substack is still not in breach given that no timeframe for a response was promised….
Second, "detrimental reliance is an essential feature of promissory estoppel." Reliance might be found when a "promisee suffered actual detriment in foregoing an act, … or in expending definite and substantial effort or money in reliance on a promise." … Smith contends that he relied on the promises made by the chatbot because he had exhausted all other methods seeking a response from Substack. Smith states that his reliance "was under Substack's assistance or help" because the chatbot features in the support section of their website. Substack argues Smith has failed to allege detrimental reliance, as the [Complaint] does not state any facts showing that Smith changed his position or acted to his detriment in response on a promise from Substack.
In his opposition, Smith alleges that he filed a new criminal complaint on April 23, 2024, after receiving assurances from the chatbot. Smith would not have filed the new criminal complaint but for the chatbot's assurances that Substack would respond to emails and complaints. Several weeks later, after not receiving a reply, Smith "closed his complaint." Smith alleges that his time and police time were wasted, and that he experienced emotional distress as a result. Additionally, he seeks nominal damages to recover wasted expenses in connection with a phone call to the police.
In deciding a Rule 12(b)(6) motion, a court is limited to the complaint. New facts raised only in the opposition may be considered for the purposes of deciding leave to amend, but not the Rule 12(b)(6) motion. As such, in deciding the motion to dismiss, the Court will only consider the allegations in the [Complaint].
Smith's [Complaint] here fails to allege facts to show detrimental reliance. Plaintiff merely contends that he relied on the promises of the chatbot and explains why he relied. But Smith does not allege what actions he took or did not take, or what efforts or money he expended in reliance. Without that, the [Complaint] does not contain any facts to show any actual detriment to Smith as a result of his reliance on the communications from the chatbot….
And the court declined to let Smith amend his complaint:
Reliance can be found when a party "expend[ed] definite and substantial effort or money in reliance on a promise." Smith alleges in his opposition only that he filed a new police report and then dismissed it, which wasted his time and led to emotional distress and a wasted phone call. These new allegations do not rise to the level of substantial effort or money expended in reliance on the chatbot's responses and were made in response to Substack's assertion that Smith failed to show reliance. Because these are the best facts (if taken as true) in support of reliance that Smith can allege, another amendment would be unlikely to save the claim.
Moreover, Smith filed the new criminal complaint on April 23, 2024, but alleges reliance on chatbot responses in February, March, April, and May. To the extent that Smith claims to have relied on responses which came after he filed the new criminal complaint, Smith has still failed to allege any reliance at all, substantial or not. Thus, the Court finds that allowing another amendment would be futile.
Smith has already had three opportunities to plead these claims. In light of the previous amendments, the Court has broad discretion to deny leave to amend and does so. Smith's claim for promissory estoppel is dismissed with prejudice….
Benjamin D. Margo and Maura Lea Rees (Wilson Sonsini Goodrich & Rosati) represent Substack.
Correct me if I'm wrong, but doesn't breach of contract imply the existence of a contract - you know, the thing you agree to before you get the goods or services? Seems to me that relying on a chatbot's answers to decide whether you would agree to the contract that you agreed to long ago would require a time machine somewhere along the way.
Yes. The promissory estoppel doctrine is designed for situations where there isn’t a contract.
Promissory estoppel still requires a promise. And again, that promise has to be made before the decision you make "relying on it". I still don't understand how the timing is working in this case.
It doesn’t work, which is why the case got dismissed.
If companies want to use computers to replace humans, then they need to accept the liabilities along with the advantages. They can't have all the advantages of a computer representing them the way a human would, then dodge the issues when the computer messes up in a way they couldn't if it were a human.
In fairness, the Support Chatbot Terms of Use already says in the Limitations section that "Our support chatbot is an automated service provided as a convenience. You understand that support chats may provide inaccurate or incomplete information, and are not a substitute for reviewing Substack's terms and policies. You understand that the support chat cannot speak for Substack, modify our terms or policies, or make any binding promises to you." I presume (but do not know) that it said something similar when Smith started using it.
If a human interaction came with the same disclaimers, I think you'd be on equally weak ground relying on it.
That kind of depends on how prominent the disclaimers were.
There was a case that made news about a year ago regarding Air Canada, in which its customer service chatbot told a prospective customer about a corporate bereavement fare policy. (Specifically, the chatbot said that customers could apply for a discount retroactively, whereas the formal corporate policy was that you had to arrange it up front.) Air Canada tried to argue that the customer wasn't entitled to rely on the chatbot's (mis)representation, but the Canadian tribunal in question rejected that as absurd.
https://www.forbes.com/sites/marisagarcia/2024/02/19/what-air-canada-lost-in-remarkable-lying-ai-chatbot-case/
I'm not sure why the plaintiff didn't also ask Chat,
a. Approximately how long does this company take to officially respond to complaints?
b. What is the greatest amount of time you take to respond to complaints?
And why not also throw in,
c. You just told me that [for example] you always respond to complaints within 5 weeks. May I act in reliance on your promise to get back to me with a response within this 5-week time limit?
If I were advising the defendant, I would suggest that it immediately change the language that visitors see before typing in prompts to the Chat.
a. By clicking on this, you understand and agree that Chat is doing its best to assist you. But Chat is not a live human being, does not officially speak on behalf of this company, and is not authorized to give legal advice or representations.
b. You agree that you will get *written* confirmation from an authorized representative before anything Chat tells you is considered to be an official representation or statement from this company.
I think that, in the near future, Chat may become so much a part of dealing with companies that Chat *will* be treated as equivalent to a live person.
(Today I had to contact Amazon, as an item I had ordered did not arrive. 4+ months ago, it was an easy matter to get a live human being on the phone when dealing with Amazon. Today, it was close to a nightmare. Endless prompts, asking (begging??) me to give the nature of my concerns today, then asking for more details, then a suggestion to deal with this online in Chat. Yuck!!!)
Plaintiff: I'm suing AI for making promises.
Defendant: you mean the system that makes up non-existent case law out of whole cloth?
But he didn’t sue an AI. He sued Substack Inc., a company.
If a company relies on an AI to represent it, it is bound by what the AI says, the same as with any other agent. If it uses an agent known to lie, that's the company's problem. Here, Substack is fortunate that its AI agent didn't make definite promises and that the plaintiff didn't actually lose anything by relying on them. If the situation were otherwise on both counts, Substack would be liable.
In many ways, having an artificial person to represent a corporation fits in pretty well with the corporation itself being a kind of artificial person. Agency rules ought to be pretty straightforward, the same as when another corporation is used as an agent instead of a natural person.
I don't like the argument that Substack's agent's promise could be satisfied a billion years in the future. Chatbot is talking to a human being using human language. Chatbot says emails are answered. If there's no answer in a day, not a lie. If there's no answer in a year, a lie. In between the jury can decide. Or the case can be dismissed because the promise is vague on other grounds.
I remember applying for jobs after seeing an ad promising all submissions would get a response. No response. I didn't think about suing. I don't know if employers make such promises these days.
I suggest that because a corporation is a kind of artificial person, corporate law and its law of agency provide a useful ready-made analogy for resolving disputes when a more general class of artificial entities is used as agents.