Social media giant Facebook has announced plans to remove terrorism content from its site using artificial intelligence (AI).
“We want to find terrorist content immediately, before people in our community have seen it,” Facebook told the BBC on Thursday.
Software will be used to identify whether a photo or video upload matches a known photo or video from terrorist groups including Islamic State, Al Qaeda and their affiliates.
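The matching step the article describes can be sketched as a lookup against a database of fingerprints of previously removed content. This is a minimal illustration, not Facebook's actual system: the function names and the use of an exact SHA-256 hash are assumptions for clarity, whereas production systems are reported to use perceptual image matching that also catches re-encoded or slightly altered copies.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # Exact cryptographic hash, used here only for simplicity;
    # real-world matching is perceptual, not byte-exact.
    return hashlib.sha256(data).hexdigest()

def is_known_content(upload: bytes, known_hashes: set) -> bool:
    """True if the upload's fingerprint matches the blocklist."""
    return fingerprint(upload) in known_hashes

# Hypothetical blocklist built from previously removed photos/videos.
known = {fingerprint(b"previously-removed-video-bytes")}
```

An upload whose fingerprint appears in the blocklist can then be stopped before it is ever shown to other users, which is the "before people in our community have seen it" goal quoted above.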
Once an account has been removed for posting terrorist content, algorithms can search the social network for any users connected with that account.
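The "fan-out" from a removed account can be pictured as a breadth-first search over the social graph. The graph shape and account names below are hypothetical; the article does not describe Facebook's actual algorithm, only that connected accounts are surfaced for review.

```python
from collections import deque

def connected_accounts(graph: dict, seed: str) -> set:
    """Breadth-first search from a removed account, returning every
    account reachable from it (candidates for human review)."""
    seen = set()
    queue = deque([seed])
    while queue:
        account = queue.popleft()
        for neighbour in graph.get(account, []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return seen

# Hypothetical connection graph: account -> list of connected accounts.
graph = {
    "removed_account": ["user_a", "user_b"],
    "user_a": ["user_c"],
    "user_b": [],
    "user_c": [],
}
```

Here `connected_accounts(graph, "removed_account")` would surface `user_a`, `user_b` and `user_c` for review, including second-degree connections.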
Monika Bickert, director of global policy management, and Brian Fishman, counter-terrorism policy manager, wrote in a blog post published on Thursday titled "Hard Questions":
"We agree with those who say that social media should not be a place where terrorists have a voice."
The move comes amid criticism that web giants such as Facebook are failing to do enough to combat online terrorist activity.
Following attacks in London and Manchester over the past four months, British Prime Minister Theresa May and other G7 leaders urged social media platforms to take additional steps against extremist content.
“More than half the accounts we remove for terrorism are accounts we find ourselves, that is something that we want to let our community know so they understand we are really committed to making Facebook a hostile environment for terrorists,” Bickert told Reuters.
In the past year, Facebook has grown its team of counter-terrorism experts to more than 150 people who collectively speak almost 30 languages.