The Volokh Conspiracy

Will Your "Smart" Devices and AI Apps Have a Legal Duty to Report on You?

I just ran across an interesting article, "Should AI Psychotherapy App Marketers Have a Tarasoff Duty?," which answers the question in its title with "yes": Just as human psychotherapists in most states have a legal obligation to warn potential victims of a patient if the patient says something that suggests a plan to harm the victim (that's the Tarasoff duty, so named after a 1976 California Supreme Court case), so AI programs being used by the patient must do the same.

It's a legally plausible argument—given that the duty has been recognized as a matter of state common law, a court could well interpret it as applying to AI psychotherapists as well as to other psychotherapists—but it seems to me to highlight a broader question:

To what extent will various "smart" products, whether apps or cars or Alexas or other Internet-of-Things devices, be mandated to monitor and report potentially dangerous behavior by their users (or even by their ostensible "owners")?

To be sure, the Tarasoff duty is somewhat unusual in being a duty that is triggered even in the absence of the defendant's affirmative contribution to the harm. Normally, a psychotherapist wouldn't have a duty to prevent harm caused by his patient, just as you don't have a duty to prevent harm caused by your friends or adult family members; Tarasoff was a considerable step beyond the traditional tort law rules, though one that many states have taken. Indeed, I'm skeptical about Tarasoff, though most judges who have considered the matter don't share my skepticism.

But it is well established in tort law that people have a legal duty to take reasonable care when they do something that might affirmatively help someone else cause harm (that's the basis, for instance, for claims of negligent entrustment, negligent hiring, and the like). Thus a car manufacturer's provision of a car to a driver does affirmatively contribute to the harm caused when the driver drives recklessly.

Does that mean that modern (non-self-driving) cars must—just as a matter of the common law of torts—report to the police, for instance, when the driver appears to be driving erratically in ways that are indicative of likely drunkenness? Should Alexa or Google report on information requests that seem like they might be aimed at figuring out ways to harm someone?

To be sure, perhaps there shouldn't be such a duty, for reasons of privacy or, more specifically, the right not to have products that you have bought or are using surveil and report on you. But if so, then there might need to be work done, by legislatures or by courts, to prevent existing tort law principles from pressuring manufacturers to engage in such surveillance and reporting.

I've been thinking about this ever since my Tort Law vs. Privacy article, and it seems to me that the recent surge in smart devices will make these issues come up even more often.