The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Should Facebook Have a Duty to Report Us to the Police for Felonies Potentially Revealed in Our Posts?
An Ohio judge suggests the answer should be "yes," and an Ohio statute seems to require such reporting, at least when Facebook employees learn of specific felonies revealed by posts that they happen to be monitoring for some reason.
From Godwin v. Facebook, Inc., decided yesterday by the Ohio Court of Appeals (Judge Sean C. Gallagher, joined by Judge Mary J. Boyle):
Robert Godwin, Sr., Godwin's father, was murdered by Steve Stephens — a video of the murder was briefly posted to Stephens's social media account, part of the social media network that is owned and managed by Facebook, Inc. Stephens committed suicide two days later. Godwin filed a wrongful death action against Stephens's estate, all the while maintaining that the estate is merely a "nominal defendant" in the action. In addition, Godwin included allegations against Facebook for its alleged failure to warn Robert Godwin of Stephens's intention, of which Facebook should have been aware based on a statement Stephens posted before the attack and based on Facebook's in depth and financially motivated use of its users' information. On the day of the tragic events, Stephens posted an ominous, but relatively ambiguous, statement on his social media account. In that message, Stephens stated:
FB my life for the pass year has really been fuck up!!! lost everything ever had due to gambling at the Cleveland Jack casino and Erie casino…I not going to go into details but I'm at my breaking point I'm really on some murder shit…FB you have 4 minutes to tell me why I shouldn't be on deathrow!!!! dead serious #teamdeathrow.
"Minutes" later, Stephens randomly approached Robert Godwin, who was sitting in a local park. Stephens pulled out a handgun and shot him after a brief dialogue.
Plaintiff sued Facebook, on the theories that
- Facebook should have known about Stephens's dangerousness and should have warned the authorities (as a matter of the common law of negligence) and
- Facebook did know about the specific threat, and Ohio law imposes a duty to report known felonies (such as terroristic threats) to the police: Ohio Rev. Code 2921.22 requires any "person [who] know[s] that a felony has been or is being committed" to "report such information to law enforcement authorities," and Ohio Rev. Code 2307.60 provides that "Anyone injured in person or property by a criminal act has, and may recover full damages in, a civil action unless specifically excepted by law."
The panel rejected the negligence claim on the grounds that negligence law generally doesn't impose affirmative duties on one party to protect another. There are some exceptions, such as when the defendant has a special relationship with a dangerous person, for instance when the defendant is a psychotherapist who learns of a specific threat by a patient; but this exception doesn't apply to social media platforms and their users.
The panel also rejected the statutory claim (see here for more on such duty-to-report laws) on the grounds that Stephens's post wasn't a felony "terroristic threat":
Ohio Rev. Code 2909.23, entitled "making terroristic threat," provides that "[n]o person shall threaten to commit or threaten to cause to be committed a specified offense when … [t]he person makes the threat with purpose to … [i]ntimidate or coerce a civilian population" and "as a result of the threat, the person causes a reasonable expectation or fear of the imminent commission of the specified offense." "Specified offense" is defined, in pertinent part, as a felony offense of violence ….
Godwin's allegations with respect to the statutory claim are limited to the conclusions that (1) the "Facebook Defendants were aware of statements made by Mr. Stephens which constituted threats that were made with the intent to intimidate or coerce a civilian population"; (2) "Mr. Stephens' threats caused a reasonable expectation of the imminent commission of making terroristic threats"; and (3) the "Facebook Defendants were aware that Mr. Stephens was engaged in the commission of a felony." The single factual allegation related to Stephens's statement was that "Steve Stephens, had engaged in criminal conduct by making intimidating and coercive threats of violence." …
Godwin is solely relying on Stephens's statement to demonstrate that a "making terroristic threat" crime was committed against the civilian population, but there are no factual allegations demonstrating that Stephens intended to intimidate the civilian population and as a result of that attempt to intimidate, the civilian population had a reasonable expectation of fear that Stephens would commit a "specified offense." …
Godwin continually alludes to the fact that the basis of the "making terroristic threat" crime is Stephens's intent "to do some murder shit"—a proposition that is not at all self-evident from the actual phrasing of Stephens's statement, but when considered in the context of Godwin's allegations, the threat to commit murder, at a minimum, constitutes a "specified offense." However, there are no factual allegations, or even legal conclusions for that matter, that the civilian population had a reasonable expectation that Stephens intended to commit murder before Stephens committed the heinous act….
There are no allegations that Stephens had a criminal history known to the public, that he was a known terrorist who committed terrorist acts in the past, that any particular civilian in the Cleveland area even saw the post before the murder occurred, or that any person reasonably believed Stephens would imminently commit murder, nor is there any other factual allegation upon which it could be concluded that the message could reasonably cause the public to fear the imminent commission of a "specified offense." …
Judge Patricia Ann Blackmon concurred with a separate opinion:
I concur with the majority opinion and write separately to express my concern over the lack of developing law governing the relationship social media companies have with their users and the general public; the scope of duty social media companies may, or may not, owe to their users; and whether public safety outweighs a company's bottom line. First, it appears to me that there is a duty…. The law of torts is elastic, and the concept of "duty," as related to liability in torts, expanded as society advanced….
Traditionally, a duty of care did not include a duty to protect third parties. However, looking through the lens of foreseeability, if the defendant has a "special relationship" with either the bad actor or the person in danger, a legal duty may arise…. Public policy and public opinion shape the concept of what constitutes a special relationship in terms of imposing a legal duty….
The Ohio Supreme Court has held that "[s]uch a 'special relation' exists when one takes charge of a person whom he knows or should know is likely to cause bodily harm to others if not controlled." Turning to the case at hand, the extent to which social media companies "take charge" of their users is unknown at this time. However, as the oft-quoted saying goes, "negligence is in the air." See Dipayan Ghosh and Ben Scott, Facebook's New Controversy Shows How Easily Online Political Ads Can Manipulate You, Time Magazine (Mar. 19, 2018) ("The real story is about how personal data from social media is being used by companies to manipulate voters and distort democratic discourse"); Robert Creamer, Massive Facebook influence on public opinion makes its ad policy a serious election threat, USA Today (Jan. 22, 2020) (Facebook "has massive monopoly power to influence public opinion").
In fact, social media is becoming so influential that being a social media influencer is now a profession. Recently, Mark Zuckerberg, who is Facebook's CEO, admitted that "Facebook made a mistake in not removing a militia group's page earlier this week that called for armed civilians to enter Kenosha, Wisconsin, amid violent protests after police shot Jacob Blake …."
As a matter of policy, public safety should be of primary concern, which is why we have tort law. I truly do not see Facebook's issue. It had information of a potential crime. By acting it might have saved a life. Of course, we will never know, but that is why we give individuals their day in court.
I fully agree with the majority opinion that "[t]his case arises from disturbing facts [stemming] from the senseless murder of Robert Godwin, Sr., …." Although the law may not be ready to hold Facebook accountable in the instant case, from a moral point of view, it is hard to ignore "the principle that for every wrong a remedy must exist …." Only when legal and moral duty diverge can courts hear a call for movement and reform.
For more on how the law of negligence has sometimes been read as requiring third parties (on pain of liability) to do things that undermine others' privacy, see my Tort Law vs. Privacy (Colum. L. Rev. 2014).
Professor Volokh, in your monograph (page 900), you stated:
But modern technology makes it possible to deter many misuses, especially of cars, simply by automatically reporting likely misuse to the police. Modern cars already have computerized control systems, and the cars are expensive enough that the new technology would add comparatively little to the cost, without stripping the product of valuable features—at least if one counts only those features that are used legally.
Why is it OK to have tort liability here [if I understood you correctly] with cars, but not FB?
Commenter_XY: 1. I give the car example as an illustration of how negligence rules might be read; but I actually suggest that this is a reason for limiting the negligence rules:
2. There's also an important distinction between the car example and the Facebook case. In the car example, the plaintiff's injury is factually caused in part by the car manufacturer's making and selling a deadly device (coupled, of course, with the driver's foreseeable misuse of the device). Under normal negligence law, one could argue that the manufacturer should take reasonable care to minimize the risk that its products will cause such damage.
In the Facebook example, the murder of Robert Godwin wasn't caused, even in part, by Facebook or the features it provides. The claim is only that Facebook might have, by reporting Stephens's post to the police, helped prevent Stephens's crime. Tort law generally doesn't require entities to take reasonable care to minimize the risk that third parties will cause damage without using the entities' products and services.
Professor, thank you so much for the detailed answer.
My guess is that with more sophisticated and implementable AI algorithms, how we think about negligence limits wrt social media will change. The analogy I would suggest to you is how data privacy laws have evolved in the last decade to address digital data [à la the GDPR].
"with more sophisticated and implementable AI algorithms, how we think about negligence limits wrt social media will change."
While we're thinking about those new algorithms and liability, what is Facebook's liability when their AI misidentifies an innocuous comment by Commenter_XY, and the resulting raid results in Commenter_XY's spouse getting killed by a stray bullet after the police open fire on the family dog?
Not trying to be glib, but there is a pretty big Type 1/Type 2 error issue with Facebook (or anyone else) trying to do the pre-crime thing.
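To put rough numbers on that error problem, here is a minimal back-of-the-envelope sketch in Python. Every quantity in it (post volume, threat base rate, detector accuracy) is an invented assumption for illustration, not a real statistic about Facebook or any actual classifier:

```python
# Hypothetical base-rate arithmetic for automated threat reporting.
# All numbers below are invented assumptions, not real data.

daily_posts = 500_000_000      # assumed: posts per day
real_threats = 10              # assumed: posts that actually precede violence
sensitivity = 0.99             # assumed: detector catches 99% of real threats
false_positive_rate = 0.001    # assumed: detector flags 0.1% of innocent posts

true_alarms = real_threats * sensitivity
false_alarms = (daily_posts - real_threats) * false_positive_rate

precision = true_alarms / (true_alarms + false_alarms)
print(f"Reports sent to police per day: {true_alarms + false_alarms:,.0f}")
print(f"Fraction of reports that are real threats: {precision:.6f}")
```

Under these invented numbers the police would receive about half a million reports a day, of which roughly one in fifty thousand reflected a genuine threat — the Type 1/Type 2 tradeoff in concrete form.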
That paragraph is speculating on future developments in the area of negligence law dealing with product design. The car manufacturer would be liable for injuries caused by the car because it failed to take reasonable steps to prevent a drunk driver or a speeder from misusing a dangerous machine. Godwin, though, doesn't claim negligent design; there is no assertion that Facebook is dangerous and can cause injury when misused.
I need to type faster.
"50 years down the road, here's your monitoring implanted chip to record all you do, not just for after-the-fact analysis for crimes, but AI analyzing your actions and speech for pre-crime or crime in progress for immediate reporting.
"It also uses AI to watch for problematic speech and report that for cancellation, too.
"Resistance is futile."
People think, "My mind would be strong enough to resist the borg implants." But what if you were kept in line by threat of instant cancellation? The occasional nascent drone who somehow has something go wrong and is not enthralled is immediately killed as defective.
The quote says the murder occurred "Minutes" after the post. I'd think that would matter.
Even if there were some kind of duty based on that extremely vague threat, if the murder occurred only minutes after the post, what exactly did they think could be done? FB doesn't monitor posting in real time. They wouldn't even have seen the post until after the murder was committed.
Indeed. The only way Facebook would have been able to do anything would have been some automatic scanning of all posts for keywords, or perhaps an even more advanced sentiment analysis of some sort, that automatically forwarded flagged messages to the police.
Hopefully most people agree that would be a bad idea.
I imagine Facebook does monitor posts in real time, i.e., at the moment they are created. What better time to analyze posts than the present? Especially if the purpose is to target ads, why wait?
But I'd bet a paycheck they just check for monetizable keywords, like "shop", "jeans", etc. Analyzing real grammar is beyond what computers can do, and if the analyzer alerts humans for every questionable word, it would flood them with false positives and be useless. I bet "murder" occurs thousands and maybe millions of times harmlessly for every single true warning. People discuss TV shows and movies and books. They discuss crime statistics. "I'd murder for a hamburger right now." "You're killing me!"
Look at how often Siri, Alexa, and Google Assistant get simple things wrong. There is no way they could ever fathom which rare instances of "murder" are meaningful precursors to crime.
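To make the point concrete, here is a deliberately naive keyword filter in Python. It is purely illustrative — the word list and posts are made up, and nothing here reflects any actual Facebook system — but it shows both failure modes at once: harmless idioms and fiction get flagged, while inflected wordings slip past the exact-match check entirely:

```python
# A deliberately naive keyword filter, for illustration only.

ALARM_WORDS = {"murder", "kill", "stab"}

def flags_post(text: str) -> bool:
    """Flag a post if any word, stripped of punctuation, is an alarm word."""
    words = {w.strip(".,!?'\"").lower() for w in text.split()}
    return not words.isdisjoint(ALARM_WORDS)

posts = [
    "I'd murder for a hamburger right now.",      # harmless idiom: flagged
    "That murder mystery kept me up all night.",  # fiction: flagged
    "You're killing me!",                         # idiom, and not even matched
    "I'm really on some murder shit",             # the one real precursor: flagged
]

for post in posts:
    print(flags_post(post), "-", post)
```

Of the three flags, two are false positives, and because the filter only matches exact word forms, "killing" never matches "kill" at all.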
As I understand it they do quite a bit of real-time monitoring; for example, they maintain a table of prohibited graphics and links (roughly as sketched below) and block or obscure posts that use them. But new graphics/links only end up on the list after being reported.
It's possible they do a certain amount of human checking in on a random sample of posts, too; Big Brother can't be looking out through the telescreen all the time, but you never know when he might do it. I'm pretty sure there are words and phrases that would trigger this.
But, yes, AI really is not up to real time screening for indications that a crime is impending.
You imagine Clippy showing up on your FB screen: "You've just made a terroristic threat and/or implied you mean to murder someone; Would you like to tag your intended victim?"
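Since the thread has turned to how that reported-content table might work, below is a minimal Python sketch of a hash-based blocklist of the sort described two comments up. It is a hypothetical illustration, not Facebook's actual pipeline; real matching systems reportedly use perceptual hashes (e.g., PhotoDNA) so that re-encoded or slightly altered copies still match, whereas this sketch only catches exact re-uploads:

```python
# Minimal sketch of a reported-content blocklist (illustration only).
import hashlib

BLOCKED_HASHES = set()  # populated only after content is reported

def content_hash(data: bytes) -> str:
    # Exact-match hash; a real system would use a perceptual hash
    # so that altered copies of an image or video still match.
    return hashlib.sha256(data).hexdigest()

def report_content(data: bytes) -> None:
    # Hypothetically called after a user report (and human review).
    BLOCKED_HASHES.add(content_hash(data))

def should_block(data: bytes) -> bool:
    # A cheap set lookup, feasible on every upload in real time.
    return content_hash(data) in BLOCKED_HASHES

video = b"raw bytes of an uploaded video"
print(should_block(video))   # False: nothing is blocked until reported
report_content(video)
print(should_block(video))   # True: exact re-uploads are now caught
```

Which is exactly the gap the comment identifies: a list like this can only ever block the second upload, never the first.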
So I was suspended from Facebook twice between July and September. The second time was for saying, when a friend said something like he was going to feed me durian (IIRC), that I would stab him. I can understand how they'd catch and ban for that, but I made the post in August and wasn't suspended until the end of September.
The other time was for writing "girls are weird" in a discussion of parenting. That one they caught very quickly. And yes, they will suspend and ban you for that.
Yes indeed -- but the court's reasoning didn't turn on that, perhaps because the court had rather broader reasons for rejecting plaintiff's claims.
EV,
The trial court issued an opinion, available on the county docket, that briefly mentioned the practical problem with imposing a duty on Facebook: at best it would force them to issue vague warnings to law enforcement.
It also touched on some interesting personal jurisdiction and Section 230 issues that the appeals court didn't address.
I'll bet they didn't ask anyone in law enforcement if they were interested in a new and inexhaustible supply of false alarms.
From the decision:
"On the day of the tragic events, Stephens posted an ominous, but relatively ambiguous, statement on his social media account.
...
“Minutes” later, Stephens randomly approached Robert Godwin, who was sitting in a local park. Stephens pulled out a handgun and shot him after a brief dialogue."
Did I miss something? The objection is that Facebook didn't notice the post, decide to call the police, and make the call, all in the "minutes" between the post and the murder?
Even if Facebook had somehow *instantly* forwarded this to the police, would the police even have known where to go to prevent this?
So if Facebook fails to report Nazis, are they violating "Godwin's Law"?
From what I hear, in the U.K. the police quickly and thoroughly respond to any reports of "hate speech." (And they define "hate speech" rather broadly.) It gives liberals no end of grief that this approach cannot (yet) be implemented here.
If the govt had been monitoring social media in real time, then the burden would have been on them.
(Sorry for the heart attach, you Libertarians.)
Sorry, there aren't ANY libertarians who would attach their heart to your agenda.
My first question with something like this is how many messages of this sort (apparent declarations of violent intent) are posted on Facebook that are either meant as humor (as an extended context would show), just letting off steam, or simply never acted on?
For every case where somebody really goes out and commits a crime there are likely hundreds of cases where nothing happens.
Perhaps there is a benefit if we expect Facebook to call the police whenever a threatening post appears. Traffic stops ought to just about disappear since they won't have time to do anything other than chase down random threats. 🙂
If the post includes a name, or a name can be inferred, then that seems like a reasonable trigger to warn the person under threat. Then the person under threat can decide what to do.
In the post in this scenario, Facebook is specifically addressed. It would seem then that Facebook would be at least morally obligated to call the police.
It would be interesting to look at the data associated with crime and social media activity overall. Almost certainly classified research on this question is being done.
The statement that he feels like "some murder shit" and he "ought to be on death row" is hardly a statement that he is about to murder someone. It is a statement about his feelings, not his intentions. It makes no reference to any other individuals. Nor does it suggest his feelings relate to the future as opposed to the past.
If Facebook called the police on this evidence, this could easily be considered a false police report.
Requiring Facebook to call the police on mere evidence that someone feels desperate and terrible would have the effect of preventing depressed people from using Facebook without triggering constant calls to the police and the resulting police harassment. This would violate the ADA.
I'm having trouble imagining why someone would in effect say "Your move, Facebook" without intent to do something.
He was literally asking for it.
The problem with the idea that "for every wrong there must be a remedy" is that it ignores the fact that our ability to discern what happened is decidedly imperfect, full of false positives and negatives. Wrongs have to pass a threshold of both seriousness and evidence before a remedy is warranted. This does not pass the necessary evidence threshold. Separate from First Amendment considerations, separate from Digital Millennium Copyright Act immunity considerations, allowing a remedy on this level of evidence would result in a torrent of false positives as Facebook deluges the police with reports of upset people, in a desperate and likely futile effort to avoid liability.
Judge Blackmon scares me.
Were the Facebook employees who knew of this "threat" located in Ohio?
If not, how can Ohio law impose a duty to report on them?
If Facebook had a legal duty to report this to police because of Ohio law, so did every single person who saw it.
Donald Trump has 87.2 million Twitter followers. If he said something that could be taken as a threat while on a trip to Ohio, then apparently all 87.2 million would be in violation of Ohio law if they did not immediately call the police to report it.