Social media is often criticised as being a hotbed of extremist content and a way for terrorists to spread their message online.

From apps like WhatsApp, which allow terrorists to talk to one another undetected, to picture-sharing sites used to promote extremist imagery, controlling this content is a headache for technology companies.

Here are the ways companies are trying to stop terrorist material spreading online.

1. Removing accounts

Twitter has reportedly shut down over 636,000 accounts on its network for promoting or advocating political or religious violence since August 2015.

Around 74 percent of these accounts suspended for promoting terror were found by Twitter’s internal “proprietary spam-fighting tools.”

However, the micro-blogging platform has come under fire from the UK government after it reportedly blocked the government's anti-terror monitoring tactics.

2. Using AI to spot content

Google, Facebook and Twitter have all pledged to use artificial intelligence (AI) technology to spot and take down terrorist content faster online. The benefit of using AI is that it learns as it is working, making it a faster tool to pinpoint and remove extremist content.

Twitter has teamed up with IBM’s AI supercomputer Watson to track abusive messages online.

Google has also been using machine learning video analysis models to find and assess extremist videos. The company said the models had helped to find and assess more than 50 percent of the terrorism-related content it has removed over the past six months.

Facebook recently announced it was using AI software to identify whether photos and videos on the platform matched known content produced by terrorist groups including Islamic State and Al Qaeda.
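The matching Facebook describes can be pictured as a lookup against a database of fingerprints of previously flagged media. The sketch below is a simplified illustration, not Facebook's actual system: all names and data are invented, and production systems use perceptual hashes (fingerprints robust to re-encoding and cropping) rather than the exact cryptographic hash used here to keep the example self-contained.

```python
import hashlib

# Hypothetical database of fingerprints of media already flagged as
# terrorist content (in practice, perceptual hashes shared between
# platforms; here, plain SHA-256 digests of the raw bytes).
known_hashes = {
    hashlib.sha256(b"previously flagged video bytes").hexdigest(),
}

def is_known_content(media_bytes: bytes) -> bool:
    """Return True if an upload exactly matches known flagged content."""
    return hashlib.sha256(media_bytes).hexdigest() in known_hashes
```

Note the limitation this illustrates: an exact re-upload is caught immediately, but any re-encoded or edited copy produces a different cryptographic hash, which is why real systems rely on perceptual fingerprinting instead.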

3. Improving counter radicalisation efforts

As well as taking extremist videos down, social media networks are working on counter radicalisation efforts to prevent young people becoming radicalised online.

Google’s subsidiary Jigsaw is doing this in a unique way through its “Redirect Method”.

It uses targeted advertising to reach potential ISIS recruits online and redirects them towards anti-terror videos that attempt to change their minds.

Google has said that through the system, potential recruits have clicked through on the ads and watched over half a million minutes of video content debunking terrorist recruitment efforts.

4. Creating a global network to prevent the spread of content

Earlier this year, Microsoft, Google, Facebook and Twitter joined together to create the Global Internet Forum to Counter Terrorism (GIFCT).

Together, the companies want to “disrupt” terrorists and their ability to use social media to spread messages and extremism.

They said:

“We believe that by working together, sharing the best technological and operational elements of our individual efforts, we can have a greater impact on the threat of terrorist content online.”

The first meeting of the forum is taking place this week in San Francisco and will see tech companies, NGOs and representatives from governments coming together to discuss ways to prevent propaganda spreading online.