Facebook protects itself from risk by putting users in danger. Facebook moderators can't handle determinations like whether a King Cake baby counts as obscenity. Yet the social media giant has nonetheless appointed itself arbiter of your mental health. And if its bots don't like what they see, Facebook may report you to police.
As part of "suicide prevention efforts," Facebook "says it has helped first responders conduct thousands of wellness checks globally, based on reports received through its efforts," reports CNN. Antigone Davis, Facebook's global head of safety, told the network: "We are using technology to proactively detect content where someone might be expressing thoughts of suicide."
Since 2011, Facebook has allowed users to flag potentially suicidal content; reports prompted emails from Facebook urging the poster to call the National Suicide Prevention Lifeline. But starting in 2017, Facebook introduced bots that seek out and flag potentially suicidal content on their own. The bots report suspected cries for help to human moderators, who may then "work with first responders, such as police departments to send help," says CNN.
That's right: Facebook might call the cops on you because a bot thought you seemed sad. Facebook executives think that if a user exhibits signs of depression, it's up to Facebook—not the user's friends, family, or community—to intervene.
And why not? By preemptively turning users over to authorities, Facebook saves its own butt. As with so many tech-company precautions that they try to frame as being for users' benefit and public safety, the real protected party here is Facebook itself, which doesn't want to face criticism and lawsuits if a user who commits suicide could be said to have hinted about it online first.
All it costs is putting people's lives in danger.
We've seen again and again and again how cops called in to deescalate mental health situations wind up hurting and killing those they've been called in to help. Cops are not trained psychiatric professionals. Cops are not equipped to talk people out of suicide, nor to assess whether their Facebook posts spell trouble. Cops are not equipped to judge mental health by showing up at someone's door. And cops are not going to overlook other issues, like drug possession, just because someone is having mental health issues.
Not that Facebook would care. It has also begun calling police on people over an array of potential situations, apparently including terms-of-service violations. "We may also supply law enforcement with information to help prevent or respond to fraud and other illegal activity, as well as violations of the Facebook Terms," the Facebook safety page says.
Mental health researchers are now challenging Facebook's efforts and raising questions about their ethical implications.
- The details of how federal agents used a fake school to entrap immigrants are getting even more disturbing. Read more about the whole sordid mess in the Detroit Free Press.
- Rapper 21 Savage was granted an immigration bond, after spending more than a week in custody. He's supposed to be released today. (More on this arrest here.)
- The national debt is now more than $22 trillion, a record high, having taken another turn upward in the wake of Trump's tax cuts.
- Kentucky police may have killed a kidnapping victim as they were attempting to save her.
- U.S. Senators will vote on the Green New Deal:
— Rachael Bade (@rachaelmbade) February 12, 2019
- Protecting and serving:
— threat level: beige (@TimCushing) February 13, 2019
- "These folks need to get a life. And they can FOIA that!"
— Robert Maguire (@RobertMaguire_) February 12, 2019
- Voiceprints are here:
Jails across the U.S. are extracting the "voice prints" of people presumed innocent. Defense attorneys say they were unaware of the practice and are unclear on how they can expunge the data of nonconvicted clients. by @georgejoseph94 & @DebbieNathan2 https://t.co/M8ye4bfVWl
— The Appeal (@theappeal) February 12, 2019