“It’s hard to know what Facebook is actually picking up on, what they are actually acting on, and are they giving the appropriate response to the appropriate risk,” said Dr. John Torous, director of the digital psychiatry division at Beth Israel Deaconess Medical Center in Boston. “It’s black box medicine.”
Facebook said it worked with suicide prevention experts to develop a comprehensive program to quickly connect users in distress with friends and send them contact information for help lines. It said experts also helped train dedicated Facebook teams, whose members have experience in law enforcement and crisis response, to review the most urgent cases. Those reviewers contact emergency services only in a minority of cases, when users appear at imminent risk of serious self-harm, the company said.
“While our efforts are not perfect, we have decided to err on the side of providing people who need help with resources as soon as possible,” Emily Cain, a Facebook spokeswoman, said in a statement.
In a September post, Facebook described how it had developed a pattern recognition system to automatically score certain user posts and comments for likelihood of suicidal thoughts. The system automatically escalates high-scoring posts, as well as posts submitted by concerned users, to specially trained reviewers.
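The flow described above — score a post for risk, then escalate high-scoring or user-reported posts to human reviewers — can be sketched in pseudocode. Facebook has not published its model, thresholds, or internal routing, so every name, score, and cutoff below is hypothetical and purely illustrative.

```python
# Illustrative sketch only: Facebook has not disclosed its model or
# thresholds. All function names, scores, and cutoffs are hypothetical.

def triage_post(text: str, score_model, user_reports: int = 0) -> str:
    """Score a post for self-harm risk and route it, mimicking the
    escalation flow the September post describes."""
    score = score_model(text)  # estimated likelihood of suicidal thoughts, 0.0-1.0
    # High-scoring posts, and posts flagged by concerned users,
    # are escalated to specially trained reviewers.
    if score >= 0.8 or user_reports > 0:
        return "escalate_to_human_reviewer"
    if score >= 0.5:
        return "show_support_resources"
    return "no_action"

def toy_model(text: str) -> float:
    """Stand-in for the real pattern recognition system."""
    risk_phrases = ("want to die", "end it all", "hurt myself")
    return 0.9 if any(p in text.lower() for p in risk_phrases) else 0.1

print(triage_post("I want to die", toy_model))
print(triage_post("great day at the beach", toy_model))
print(triage_post("feeling down lately", toy_model, user_reports=1))
```

Note that in this sketch a single user report is enough to reach a reviewer, reflecting the article's point that user-submitted reports are escalated alongside algorithmically flagged posts.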
“Facebook has always been way ahead of the pack,” said John Draper, director of the National Suicide Prevention Lifeline, “not only in suicide prevention, but in taking an extra step toward innovation and engaging us with really intelligent and forward-thinking approaches.” (Vibrant Emotional Health, the nonprofit group administering the Lifeline, has advised Facebook and received funding from it.)
Facebook said its suicide risk scoring system worked worldwide in English, Spanish, Portuguese and Arabic, except in the European Union, where data protection laws restrict the collection of personal details like health information. There is no way to opt out, short of not posting on Facebook or deleting your account.
A review of four police reports, obtained by The Times through Freedom of Information Act requests, suggests that Facebook’s approach has had mixed results. Except for the Ohio case, police departments redacted the names of the people flagged by Facebook.