

Facebook creates an AI system that aims to prevent suicides

by Ron Duwell | March 3, 2017 2:00 pm PDT

Facebook’s latest endeavor is a new AI system that will recognize and report suicidal behavior appearing in people’s posts. After several high-profile suicides were live-streamed on the site, CEO Mark Zuckerberg set about finding a way to prevent suicides from happening through the platform.

The new AI works by scanning posts and comparing them to those that were legitimately linked to suicidal behavior in the past. If certain posts are deemed serious or life-threatening enough, they will be passed on to a newly formed community team for review. Generally, this will occur only when the situation is “deemed urgent.”
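Facebook has not published details of how its classifier works, but the workflow described above — compare a new post against posts previously linked to suicidal behavior, and escalate only the closest matches for human review — could be sketched as follows. Everything here is hypothetical: the similarity measure, the example phrases, and the `should_escalate` threshold are illustrative stand-ins, not Facebook's actual method.

```python
# Hypothetical sketch of the described workflow -- NOT Facebook's real system.
# A new post is scored against example posts that were previously flagged,
# using simple bag-of-words cosine similarity; posts scoring above a
# threshold are escalated to human reviewers.
from collections import Counter
import math

def bag_of_words(text):
    """Tokenize naively and count word occurrences."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Cosine similarity between two word-count vectors."""
    overlap = sum(a[w] * b[w] for w in a if w in b)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return overlap / (norm_a * norm_b) if norm_a and norm_b else 0.0

def should_escalate(post, flagged_examples, threshold=0.5):
    """Escalate a post if it closely resembles any previously flagged post."""
    post_vec = bag_of_words(post)
    score = max(
        (cosine_similarity(post_vec, bag_of_words(ex)) for ex in flagged_examples),
        default=0.0,
    )
    return score >= threshold

# Illustrative example phrases (hypothetical training data).
flagged = ["i want to hurt myself", "i can't go on anymore"]
print(should_escalate("sometimes i want to hurt myself", flagged))  # True
print(should_escalate("great game last night", flagged))            # False
```

A production system would use a trained statistical model rather than raw similarity, but the escalation logic — machine scoring first, human judgment for anything above the bar — matches what the article describes.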

Product Manager Vanessa Callison-Burch explained the new system in an interview with BuzzFeed News.

The AI is actually more accurate than the reports that we get from people that are flagged as suicide and self injury. The people who have posted that content [that AI reports] are more likely to be sent resources of support versus people reporting to us.

Dr. John Draper, Project Director for the National Suicide Prevention Lifeline, admits that the system is not perfect, but every little bit helps when it comes to suicide prevention.

If a person is in the process of hurting themselves and this is a way to get to them faster, all the better. In suicide prevention, sometimes timing is everything.

In addition to the new AI, Facebook is also creating a system that will provide direct links to a suicide-prevention chatroom or hotline while an at-risk person is broadcasting live.

BuzzFeed News
