Facebook Rolls Out New Features to Prevent Live-Streaming Suicides

Social media giant uses artificial intelligence to help at-risk users

Facebook is making a substantial effort to prevent self-harm on its site. Chris Jackson/Getty Images

In response to the alarming trend of users taking their lives on Facebook Live, Facebook announced early Wednesday morning that it will expand its tools and resources to better help those at risk of suicide, as well as the families and friends of suicide victims.

The features announced on Wednesday build on what Facebook rolled out in 2015, which allowed users to flag potentially disturbing posts and photos to alert the company of risk. Now these features have been extended to Facebook Live, further connecting those at risk with someone from one of the company’s partner organizations, such as the National Suicide Prevention Lifeline, the National Eating Disorders Association and the Crisis Text Line.

When a video has been flagged and Facebook determines that the user may need help, that user will receive real-time options and resources for suicide prevention—while they are still on the air. Additionally, the person who reported the video will be given resources to reach out and talk to their friend.

In a blog post, Facebook explains the features:

“Today we’re updating the tools and resources we offer to people who may be thinking of suicide, as well as the support we offer to their concerned friends and family members:

  • Integrated suicide prevention tools to help people in real time on Facebook Live
  • Live chat support from crisis support organizations through Messenger
  • Streamlined reporting for suicide, assisted by artificial intelligence”

The announcement comes after a poignant manifesto from Mark Zuckerberg that highlights his awareness of the concerning trend: “[T]here have been terribly tragic events—like suicides, some live streamed—that perhaps could have been prevented if someone had realized what was happening and reported them sooner.”

To identify those at risk more quickly, the company plans to test its artificial intelligence and data science capabilities to find patterns across flagged posts, predicting which users may be more susceptible to self-harm.
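Facebook has not published the details of its models, but learning patterns from previously flagged posts is, at its core, a text-classification problem. Below is a minimal sketch of that general idea, not Facebook’s actual system: the training phrases, labels and threshold here are all invented for illustration.

```python
# Illustrative sketch only: a toy text classifier trained on invented
# example phrases. It shows the general shape of learning patterns
# from flagged posts, standing in for far more sophisticated models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: posts previously flagged as concerning (1)
# or not (0) by human reviewers.
posts = [
    "I can't do this anymore, nobody would miss me",
    "saying goodbye to everyone tonight",
    "great hike this weekend, photos soon",
    "does anyone have notes from yesterday's lecture?",
]
labels = [1, 1, 0, 0]

# TF-IDF bag-of-words features plus logistic regression: a standard
# baseline for text classification.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Score a new post; above some tuned threshold, escalate to human
# reviewers rather than acting on the model's output automatically.
risk = model.predict_proba(["I just want it all to end"])[0][1]
if risk > 0.5:  # the threshold here is arbitrary
    print(f"Escalate for human review (score={risk:.2f})")
```

In practice the crucial step is the escalation: a score like this would route a post to trained reviewers, not trigger an automatic response.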

The goal of the new rollouts, according to the Menlo Park behemoth, is to connect people in distress with individuals who can help. However, there may be a deeper rationale behind the rollout, one that stretches beyond the person at risk.

Since the technology became available, there have been seven reported live-streamed suicides, not all on Facebook, according to Dan Reidenberg, the executive director of Save.org. At a macro level, there is one death by suicide in the world every 40 seconds, and suicide is the second leading cause of death among 15- to 29-year-olds. According to the National Center for Health Statistics, suicide in the United States has surged to its highest level in nearly three decades.

Even though those seven deaths are deeply troubling, they pale in comparison to the global numbers, so why is Facebook so concerned?

According to experts, one reason for the heightened concern from Facebook, and from other platforms where virality is common, is the possibility of suicide contagion. “If someone is exposed to the suicide attempt or death of a friend, it increases that person’s risk of suicidal thoughts and attempts,” the authors of an article on contagion for The Conversation explained.

The scope of the problem stretches far beyond the single individual at risk, affecting anyone who comes into contact with the video or the post. And because people are drawn to disturbing footage, such videos can spread enormously. In the case of a live stream recording a suicide, the contagion could be catastrophic, far beyond anyone’s ability to predict accurately.

Interestingly, though, Facebook does not shut down a live stream when someone is flagged as at risk, even after its internal review has confirmed the flag. The reasoning is that leaving the stream up increases the chance the user will accept help, and that cutting it off could amount to censorship. This explanation seems plausible but, in my opinion, falls short. Facebook could do more to keep videos and live streams from reaching “scale” by blocking others from viewing or sharing them as soon as a video is confirmed as hazardous material.

Public suicide is not a new phenomenon, but with the advent of live streaming and social media, anyone has a platform to show their wounds to thousands, or millions, of viewers. Luckily Facebook, along with 70 partners, is poised to “prevent harm” and “build social infrastructure to help our community identify problems before they happen,” according to Zuckerberg.

I applaud Facebook for its substantial efforts to prevent self-harm despite having no legal obligation to do so. The company is using its scale to counteract the rising problem of suicide and to help those in need, and for that I am grateful. As someone who has lost close friends to suicide, I know intimately how important support is for both victims and their families, and I’m happy to see Facebook taking action.

Benjamin is the founder of Fully Rich Life, a blog that is focused on helping men decrease stress and anxiety, find more focus, and be more present. Benjamin also helps businesses tell better stories with authentic content strategies. Join thousands of readers in his free 21 Day Mindfulness Challenge.