Facebook has drawn plenty of criticism this year, not least for its role in spreading disinformation via Russian ads and accounts. But now there is some good news for the global social network.
Facebook is expanding its artificial intelligence (AI) software for detecting and preventing potential suicides to users across the world.
The links between social media and depression are currently a major topic of research. Last year, scientists from Harvard University and the University of Vermont created a machine learning algorithm to spot signs of depression using people’s Instagram feeds.
The researchers found that Instagram users with depression were more likely to post images with darker filters and blue and grey colours, and that their images received fewer likes than those posted by users without depression.
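The study trained a full machine-learning model on many image and engagement features, but the kinds of signals it relied on can be illustrated with a toy sketch. The function below is purely illustrative: the thresholds and feature definitions are invented here and are not those used by the Harvard/Vermont researchers.

```python
# Toy illustration of the signals described in the study: darker images,
# blue/grey palettes, and lower engagement. All thresholds are invented
# for illustration and do not come from the actual research.

def depression_risk_signals(pixels: list[tuple[int, int, int]], likes: int) -> dict:
    """Compute simple signals from a list of RGB pixel tuples and a like count."""
    n = len(pixels)
    # Mean brightness across pixels, on a 0-255 scale.
    brightness = sum(sum(p) / 3 for p in pixels) / n
    # How much the blue channel dominates the red channel, on average.
    blue_bias = sum(p[2] for p in pixels) / n - sum(p[0] for p in pixels) / n
    return {
        "dark": brightness < 90,       # darker filters
        "blue_grey": blue_bias > 10,   # blue/grey colour palette
        "few_likes": likes < 5,        # lower engagement
    }
```

A real classifier would combine many such features statistically rather than applying hard-coded cut-offs.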
How will the Facebook AI suicide detection work?
Facebook will use its AI technology to scan the text of posts and comments on the platform for phrases that could indicate suicidal tendencies. Certain phrases can serve as clues, such as friends asking “Are you ok?”.
If the AI spots a phrase that could signal a potential suicide, an alert is raised to a specialist team of employees at the company. Facebook can then suggest resources to the person, such as a telephone helpline.
In certain cases, Facebook workers may sometimes call local authorities to alert them to the situation.
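The workflow described above — scan text, flag concerning phrases, and escalate to a human review team — can be sketched in a few lines. This is a hypothetical simplification: Facebook's actual system is a proprietary machine-learning classifier, not a fixed phrase list, and the phrases and function names below are invented for illustration.

```python
# Hypothetical sketch of phrase-based flagging. Facebook's real classifier
# is proprietary and far more sophisticated than literal string matching.

# Illustrative phrase list only; "are you ok" is the example the article cites.
CONCERN_PHRASES = [
    "are you ok",
    "i can't go on",
]

def flag_for_review(post_text: str, comments: list[str]) -> bool:
    """Return True if the post or any comment contains a concerning phrase,
    meaning a human specialist team should be alerted."""
    texts = [post_text] + comments
    return any(
        phrase in text.lower()
        for text in texts
        for phrase in CONCERN_PHRASES
    )

def handle_post(post_text: str, comments: list[str]) -> str:
    # A flagged case goes to the specialist team, who may surface resources
    # (e.g. a helpline) or, in urgent cases, contact local authorities.
    if flag_for_review(post_text, comments):
        return "escalate_to_review_team"
    return "no_action"
```

The key design point the article describes is that the AI only triages: the decision to intervene stays with human reviewers.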
The social media giant has been trialling the software in the US for the past few months. In that time, the AI’s detections of suicidal intent led first responders to check on people more than 100 times.
The software will begin rolling out to other countries; however, the rollout will not include the European Union. Facebook’s vice president for product management, Guy Rosen, said this was due to “sensitivities”.
Verdict has contacted Facebook for more information and will update this article.
Earlier this year, it was revealed that Facebook had been gathering psychological insights on teenagers, such as when they felt “insecure” or “worthless”, and selling them to advertisers. The practice caused a significant backlash against the company for manipulating the emotions of young people. It is encouraging to see the company now using its wealth of information for good.