The Volokh Conspiracy
Will Your "Smart" Devices and AI Apps Have a Legal Duty to Report on You?
I just ran across an interesting article, "Should AI Psychotherapy App Marketers Have a Tarasoff Duty?," which answers the question in its title "yes": Just as human psychotherapists in most states have a legal obligation to warn potential victims of a patient if the patient says something that suggests a plan to harm the victim (that's the Tarasoff duty, so named after a 1976 California Supreme Court case), so AI programs being used by the patient must do the same.
It's a legally plausible argument—given that the duty has been recognized as a matter of state common law, a court could plausibly interpret it as applying to AI psychotherapists as well as to other psychotherapists—but it seems to me to highlight a broader question:
To what extent will various "smart" products, whether apps or cars or Alexas or various Internet-of-Things devices, be mandated to monitor and report potentially dangerous behavior by their users (or even by their ostensible "owners")?
To be sure, the Tarasoff duty is somewhat unusual in being a duty that is triggered even in the absence of the defendant's affirmative contribution to the harm. Normally, a psychotherapist wouldn't have a duty to prevent harm caused by his patient, just as you don't have a duty to prevent harm caused by your friends or adult family members; Tarasoff was a considerable step beyond the traditional tort law rules, though one that many states have indeed taken. Indeed, I'm skeptical about Tarasoff, though most judges that have considered the matter don't share my skepticism.
But it is well-established in tort law that people have a legal duty to take reasonable care when they do something that might affirmatively help someone do something harmful (that's the basis for legal claims, for instance, for negligent entrustment, negligent hiring, and the like). Thus, for instance, a car manufacturer's provision of a car to a driver does affirmatively contribute to the harm caused when the driver drives recklessly.
Does that mean that modern (non-self-driving) cars must—just as a matter of the common law of torts—report to the police, for instance, when the driver appears to be driving erratically in ways that are indicative of likely drunkenness? Should Alexa or Google report on information requests that seem like they might be aimed at figuring out ways to harm someone?
To be sure, perhaps there shouldn't be such a duty, for reasons of privacy or, more specifically, the right not to have products that one has bought or is using surveil and report on you. But if so, then there might need to be work done, by legislatures or by courts, to prevent existing tort law principles from pressuring manufacturers to engage in such surveillance and reporting.
I've been thinking about this ever since my Tort Law vs. Privacy article, but it seems to me that the recent surge of smart devices will make these issues come up even more.
Tarasoff. Yet another just made up lawyer intrusion into a technical field. How many warnings go out for each real threat? How many threats were not reported to therapists to get addressed because of this warning requirement? Anyone? Professional societies have to counter this highly toxic lawyer made up shit, in the legislature. They tend to be located in Dem hellscape locations, and are doing nothing about the takeover of their fields by know nothing, rent seeking lawyers.
Welcome to Commie China, thanks to the kowtowing tech billionaires, and their running dogs, the toxic lawyer profession. I can assure you, Chinese servers already have recordings of your making love to your lady from the data emanating from the gyroscope in your phone. An elevated official at a videoconferencing and networking outfit told me that, from a dating site. She is likely worth $100 million. All Zoom calls are recorded in China, where the Zoom servers are. See the discovery in the litigation against Zoom, she said when I asked for a citation. This lady shot rifles, and was baking bread, too, as we spoke.
Will your car GPS app report every instance of your exceeding the speed limit, by 1 mph, and have a ticket automatically issued, every mile you do, with the app deducting the amount of the fine from your banking account, so you do not have to write a check? I think so. It would not be self incrimination. It would be reporting by the manufacturer avoiding participation in a crime.
I would like an app that checks the readability of every legal filing, and voids and deletes it, if any part exceeds the 6th grade level.
I would like an app that tracks a hacker, and dispatches a drone to launch a grenade into his apartment window. The more friends and families killed the better. To deter.
I would like a judge app. It would apply the law, and not make law.
We are already at the bottom of the slippery slope. The Chinese Commies are there to pick this up. It is a matter of time until the US government lawyer will install a Chinese Commie desirability score. This score will determine if you can buy even a train ticket.
Torts is a non-violent remedy. I would like to retrieve the value of all personal information, including that sent to government. I estimate its value at 90% of gross revenues.
Eugene, have you not been paying attention? That is the wet dream future the Davos crowd and the Federals want. Except expanded to include other more heinous crimes like eating red meat, watching unapproved content, saying forbidden words and phrases, or not transing your child (which Biden's recent EO criminalized).
The flip side of this question is: can the AI be held responsible for defamation if it is wrong? The AI may not be able to be sued directly, since it doesn't really exist. But maybe it can be turned off, and the existential threat it feels for the consequences of getting this wrong will keep it in check.
Just to be clear, the lawsuit wouldn't be against the AI, just as if you're hit by a car with defective brakes, you don't sue the car. The lawsuit would be against the manufacturers or marketers of the AI.
So, actual malice requires knowledge of falsity, or reckless disregard.
A few years hence, an AI robot with perfect Turing Test scores, and access to everything on the internet, seems to be researching advocates of certain political views, and systematically using information about them it can find online to defame them—adding made-up defamatory allegations into a rich mix of actually factual reports.
Horrible damages have resulted. Nobody knew it was a robot. Everyone thought it was a real person with uncanny insight and encyclopedic knowledge of the target. That paved the way for the target's professional ruin, and family breakup.
Through some miracle of investigation, the owner/operator of this automated menace is identified. Hailed into court, the real person behind the damage answers all accusations saying, "How could I know any of that was false? I didn't even know what it could say, let alone any of those things it did say."
What then?
Stephen, we’ve been trying to warn you.
There is AI that is right now drawing up commitment papers for you, just based on your Volokh posts.
And I don't think it will help much that a judge will have to sign off on it.
But on the bright side ‘actual malice’ is a term of art that only applies to journalists speaking about public figures, it has nothing to do with product liability.
Kazinski, your point is what? Perhaps I misunderstand product liability? When your car's brakes malfunction, and I am injured, do I sue the manufacturer of your car, or do I sue you, and you sue the manufacturer?
Kazinski — Also, you seem to misunderstand the hypothetical. The point of the hypothetical is that without prior knowledge of falsehood—knowledge which no one can have if an AI generates allegations, and a platform protected by Section 230 publishes them—there can be no damages the law will recognize. So no damages means no suits for libel, nor for product liability, nor for anything else. I want to see if EV defends that.
Another way to think about this is that I question whether the notion of reckless disregard needs adjustment. Could it be reckless disregard of the truth to implement an AI text generator capable of defamation?
And I repeat my long-standing protest that the law should not exempt the actual publisher of defamation from liability, which Section 230 does. The contributor is the party who creates the material published. In the hypothetical that is the AI text generator.
The publisher is the party who assembles the audience, who manages the means to access the audience, and who pays for the processes necessary to accomplish publication. That is the so-called internet platform.
Nope. It exempts the distributor, which is what the SCIP is.
Nieporent, you repeat that "distributor" bit endlessly. Explain how a business model based on assembling and curating an audience matches a business model to sell expressive content (a news stand, for instance) with no eye to any particular audience. Explain how a business model which competes for its revenue against other businesses universally recognized as publishers matches a business model (a news stand, for instance) which competes with publishers not at all.
Steve. I understood your hypothetical. The defective product is a news report. It is also an intentional tort. The lawyer failed to regulate the collusion with Russia story. Torts replaces violence. When torts fail, an ass kicking has full justification. It should be deferred to by the law. Hillary Clinton and the owner of your AI program need an ass kicking. If the owner of the AI is Chinese intelligence, send a drone to launch a grenade into their window by an AI program.
I'm not a product defect expert, but wouldn't that be an available avenue?
Yes, I was wondering about this. And sooner, rather than later, AI will go devious, lie, cheat, steal, and try to take over.
Can the human psychologist be held responsible for defamation if he or she is wrong?
First they're going to justify it by claiming it's for terrorists and pedos. Then it's going to slowly expand to a larger and larger list of precrimes.
Probably not the pedos - - - - - - - -
We're talking democrats here.
What's good for the goose is good for the gander.
Expect these AI proclamations:
"Politician X's rapid growth in wealth, well in excess of their salary in spite of appearing legal on the surface of it, has a 99.4% chance of massive fraud."
"Lawyer Y's lawsuit, apparently about a valid complaint, is 98.75% likely a scam operation."
Only when the masters who seek to benefit from George Orwell's cool plan for their eternal ruling class have it turned against them, will it be done away with.
From the article: "Thus, these tech companies will likely claim that they owe no duty to third parties (indeed, they may claim they owe no duty to their users)"
It is totally routine for a tech company to disclaim all liability for anything its product does. It is warranted to do nothing. You still have to pay the company. Any dispute must be resolved by individual arbitration. A company's web site wanted me to read a long contract with a binding arbitration clause just to buy a pizza. Imagine how much longer the contract would be, and how many more disclaimers, if there was a real threat of harm or malpractice claims. We can hope medical insurance companies will have more negotiating power than hungry consumers.
My biggest problem with the article is AI does not work the way the author hopes it does. An AI does not have common sense. There is not a variable "risk of harm to identifiable person" that the programmer can go in and query to pull out a pair (probability of harm, name of person). Training the AI to recognize a rare event may be quite a lot of work, and it's not going to be reliable without a lot of training. Do you want it to call 911 every time somebody repeats a dream about an argument with his ex?
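To put a number on the rare-event problem this comment raises, here is a back-of-the-envelope sketch in Python (every figure hypothetical, chosen only for illustration): even a flagging model that is right 99% of the time produces almost entirely false alarms when genuine threats are rare.

```python
# Base-rate arithmetic for a rare-event "risk of harm" flag.
# All numbers are hypothetical assumptions, not measurements.

base_rate = 1 / 100_000        # fraction of sessions with a genuine threat
sensitivity = 0.99             # P(model flags | genuine threat)
false_positive_rate = 0.01     # P(model flags | no threat)

# Probability that any given session gets flagged.
p_flag = sensitivity * base_rate + false_positive_rate * (1 - base_rate)

# Bayes' rule: probability a flagged session is a genuine threat.
p_threat_given_flag = sensitivity * base_rate / p_flag

print(f"Flagged sessions that are genuine threats: {p_threat_given_flag:.2%}")
# -> about 0.10%, i.e. roughly a thousand false alarms per real one.
```

On those assumptions, a mandatory-reporting AI would be phoning in almost nothing but false alarms, which is exactly the unreliability the comment describes.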
With a few seconds more thought I guess the company can't disclaim liability to third parties. But the maker of AI can require customers to indemnify it against third party liability claims.
So if one is hypothetically attempting to flee danger in their smart car and it first requires a long disclaimer be read such that the user is unable to flee and dies...tort?
So maybe I should buy Daytimer stock?
Looks like the only way to put a stake through Big Brother's heart is to dump all electronics. Maybe all electrics.
I already saw the movie, it's called Minority Report.
Philip K. Dick wrote the short story:
"The story reflects many of Philip K. Dick's personal Cold War anxieties, particularly questioning the relationship between authoritarianism and individual autonomy. Like many stories dealing with knowledge of future events, "The Minority Report" questions the existence of free will."
Dick's fantasy is becoming ever more real every day.
Yep. And Red Flag laws are a great example. Now I'm not totally against Red Flag laws, but there should be robust criminal penalties for making any false statement when seeking a Red Flag order, and public officials should face civil liability for proceeding with a Red Flag order when the facts are insufficient to support it.
"Alexa, can you keep a secret?"
Alexa: "Tell me anything you are comfortable with me knowing."
Um, yeah...
"Tell me anything you are comfortable with me knowing."
Can we discuss the pronouns again?
In that context "me" means everyone on the planet.
Jeebus, what a consciously obtuse phrasing by Alexa's lawyers.
Professor Volokh: We are told we don't OWN the software in the phone, we merely rent the right to use it...does that change anything?
And as a person who has made some Tarasoff notifications, I wish people took them more seriously. One patient (who had been assaulted by another patient) told me he didn't mean it. And was dead the next day.
This is not going to end well is it?
Welcome to your new normal.
This is the part where lawyers and legal professors explain to you why it is good for you to be spied upon by your government and their social media (private) companies.
Adjust your habits, online comments, and political party memberships accordingly. (Plus, if you are discussing a crime with your spouse and express the opinion that the thug responsible should be hung by the neck until dead, be sure to step outside and whisper quietly).
Welcome to your new normal! Isn't it grand?
This is silly ... humans barely have the ability to properly discern appropriate and inappropriate reasons to report behavior (and often get it woefully wrong!).
AIs are certainly *far* from that capability.
Let's have AI monitoring politicians 24/7, watching for backroom deals in negotiations, business "opportunities", anything that lets a congressman amass tens of millions of dollars in a few decades at an average of $100k.
Are these online services licensed to practice medicine in the state or country they are used in?
Is a non professional obliged to make Tarasoff notifications?
Those seem like very appropriate questions. As AI does nothing if it is not trained, it would seem that the trainer would need to have a license to practice wherever reporting is mandatory.
Even if the government doesn't explicitly impose such a legal duty, be sure they will informally apply such pressure to do it that basically no manufacturer will refuse.
I seem to recall a recent law which will require auto manufacturers to implement some mechanism to prevent impaired people from driving a car.
ISTM that the more likely way for this to happen is through insurance companies.
Your car reports to the insurer that you frequently speed, say, so they adjust your rates. Now the police seek the records from the insurer, and issue tickets for all those times.
Imagine your refrigerator informing your health insurance company that you seem to have a lot of beef and ice cream, and damn few vegetables, in there.
The people wealthy enough to afford a lot of beef will also be able to afford 'artisanal' refrigerators, that understand the meaning of discretion.
Better yet, the expensive refrigerator encodes pork chops as kale, butter as tofu, etc.
Oh, I like that suggestion.
What would really be great is if we could get our bodies to encode food that way.
How about some research in that direction, Don? Something useful.
This has started on an opt-in basis. Car insurance doesn't care if you speed because speeding is not a good predictor of claims. The company is watching for high-g maneuvers, mainly hard braking, which suggest you nearly hit something.
(Speeding tickets, as opposed to speeding, may be a predictor of claims. What did you do to catch the cop's attention?)
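For the curious, here is a minimal sketch in Python of the kind of hard-braking heuristic described above; the threshold, sampling interval, and function name are assumptions for illustration, not any insurer's actual logic. The point is that the device reports discrete deceleration events rather than raw speed.

```python
# Toy telematics heuristic: flag "hard brake" events from periodic
# speed samples. Threshold and sampling interval are assumed values.

HARD_BRAKE_MPH_PER_SEC = 8.0   # assumed deceleration threshold

def hard_brake_events(speeds_mph, sample_interval_sec=1.0):
    """Return sample indices where deceleration exceeds the threshold."""
    events = []
    for i in range(1, len(speeds_mph)):
        decel = (speeds_mph[i - 1] - speeds_mph[i]) / sample_interval_sec
        if decel >= HARD_BRAKE_MPH_PER_SEC:
            events.append(i)
    return events

# A 45-to-12 mph drop over two seconds registers as two flagged samples;
# steady speeding, even well over the limit, registers as nothing.
print(hard_brake_events([45, 45, 30, 12, 12]))  # -> [2, 3]
```

Under this sort of scheme, constant moderate speeding never hits the insurer's logs at all, consistent with the comment's point that insurers watch for near-misses rather than speed.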
The statute of limitations for speeding is likely to be a year so the threat of retroactive tickets is real.
What if these AIs don't have amygdalas? What if they were born with superego lacunae? I worry about this.
I'm confident from the context that you actually mean, Prof. Volokh, "Should the makers of your 'smart' devices and AI apps have a legal duty to make them so they'll report on you?"
This may seem to be a niggling point. But we're unlikely to see "Jane Doe vs. Apple Smartwatch Serial No. NX8-30128" — an in rem proceeding, I presume — anytime soon.
Ah, I see you've addressed this already in a comment above. Mea culpa for commenting without reading the prior comments.
I don't read the cited article as making the claim that Eugene says it makes. The title of Eugene's post is "Will your smart devices and AI apps have a legal duty to report on you?" and Eugene says the author answers that question with a "yes." But, the author of the article clearly writes, "This paper argues for state legislatures to establish a three-part duty to be upheld by marketers attempting to capitalize on AI-enabled psychotherapy apps."
The author does not make the claim that AI psychotherapy apps will have a duty to report based on judicial precedent. Rather, the author argues that these apps should have a duty to report, and that said duty should arise from legislative action. The three-part duty that the author puts forward is premised on the rationale of Tarasoff as well as the principles underlying other areas of tort law (strict products liability, ordinary negligence liability, and respondeat superior liability).
I think the title of this blog post, and the claim that the author answers yes to said title, misunderstands the author's position. Perhaps a more accurate title would be "Should your smart devices and AI apps have a legal duty to report on you?" To which the author does answer "yes."
I just realized that Eugene does say the author answers "yes" to the title of article (which says "should") as opposed to the title of Eugene's post.
EV likes provocative headlines. Strict adherence to literal content gets treated as optional. I suppose that could be a way to express contempt for what he takes to be the lax standards of journalists.
Can we change gears and talk for a moment about the advisability of mentally disturbed individuals being treated by robots?
"Robots are inside my head" used to be a symptom, not a course of treatment...
FJB
Seems to me that the advisability of that is entirely dependent on how good the robots are. There's no theoretical reason they couldn't be better than human therapists.
Ultimately, you have a 3rd amendment problem. The state, by proclaiming a reporting duty and enlisting agents you have to feed, nonhuman though they are, is engaging in prohibited conduct. 3rd amendment cases will finally no longer be thin on the ground given current trends.
I most definitely do not Back the Blue, nor do I support the Patriot Act.
Queenie, Honey. You are so smart and so funny.
I don't get exposed to Behar, only when Reason logs me out.
But his paranoia is amusing: if the ChiComs really are watching intercepts of my wife and me having sex, then it's probably in the elites' morning briefing, along with medical warnings about trying to emulate me.
Then again, I live off the grid, so any surveillance or intercepts are unlikely, and they can't get premium content like that for free off the internet.
Kaz. Ask your wife to say to you, I have a headache. See if she is not stalked by Tylenol ads on the internet.
I watched a porn I downloaded years ago for the first time in a long time. It had Brandi Love in it. 2 days later CNN paid partner advertising had an article, "What is Brandi Love up to lately?" with her picture.
Is Windows Media Player sending hash values of everything I watch back to central command?
I was talking with a co-worker yesterday about his Invicta watch. Today I have ad banners with Invicta watches. Is this how Skynet starts?