Fact-checking charity Full Fact has recommended that Facebook be more open about how it intends to use artificial intelligence (AI) to flag fake content on its platform.

UK-based Full Fact is one of a number of independent fact-checkers hired by Facebook to help tackle disinformation and fake news on its platform, from vaccine misinformation to whether a tampon can help someone who has been stabbed (it cannot).

The AI recommendation is one of ten that Full Fact made to the social media giant in its first report, published after six months, which found Facebook’s Fact Checking programme to be “worthwhile” overall. Among them, Full Fact recommended that Facebook “be explicit about plans for machine learning”.

Machine learning, a subset of AI, involves giving a computer program labelled examples of patterns in data so that it can learn to recognise similar patterns on its own, improving its accuracy over time.
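As a rough illustration of that idea (and not a description of Facebook’s actual system), the sketch below trains a simple text classifier on posts that human fact-checkers have already labelled and then asks it to rate a new post. The library, labels and example posts are assumptions made for the sketch.

```python
# A minimal sketch of supervised machine learning on text, assuming scikit-learn
# is available. The posts, labels and model choice are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Posts that human fact-checkers have already rated (hypothetical examples).
posts = [
    "Tampons can treat stab wounds",             # rated false
    "Drinking bleach cures the flu",             # rated false
    "Full Fact published its six-month report",  # rated true
    "Facebook has 2.41 billion monthly users",   # rated true
]
labels = ["false", "false", "true", "true"]

# Learn which word patterns distinguish the labels.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Score a new, unseen post; more and better labelled data means better predictions.
print(model.predict(["A tampon will stop the bleeding from a stab wound"]))
```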

Technology companies with large user bases – Facebook has 2.41 billion monthly users – face the challenge of implementing fact-checking at scale, something that artificial intelligence is well positioned to address.

Facebook CEO Mark Zuckerberg has publicly said that he wants to see content flagged by AI in the future.


However, machine learning is not considered to be advanced enough yet to understand the nuances of human social media posts.

What did Full Fact say about machine learning?

In its 45-page report, Full Fact said:

“These systems do not yet exist in any general sense. Creating these technologies involves solving some very hard problems, including ethical as well as technological problems. And attempts to do so need to be carefully scrutinised, which is one role Full Fact plays in this area.”

However, Full Fact recognises that AI can play a key role in “identifying content and patterns of inaccurate content that may lead to specific harms”.

“Effective and ethical technology could in time help to make human efforts to tackle specific harmful inaccurate information more effective by identifying and classifying it at scale,” the report states.

Currently, Facebook flags content through a combination of algorithms and reports from its users, and then adds each flagged post to a queue for fact-checkers such as Full Fact to investigate.

Third-party fact-checkers then work their way through the list, researching potentially false stories and tagging each post with one of nine ratings, such as ‘false’, ‘true’ and ‘satire’.

Full Fact then attaches a link to its own article explaining the veracity of the Facebook post. Users then see a disclaimer explaining why the post has been flagged.
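A loose sketch of that flag-rate-link workflow as a data structure is below. Only three of the nine ratings (‘false’, ‘true’ and ‘satire’) are named above, and the field names, queue and URL are placeholders rather than Facebook’s real schema.

```python
# Illustrative model of the flag -> queue -> rate -> attach-link workflow described
# above. Field names and the example URL are hypothetical; Facebook's real schema
# and the full list of nine rating categories are not given in the article.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FlaggedPost:
    post_id: str
    text: str
    flagged_by: str                        # "algorithm" or "user report"
    rating: Optional[str] = None           # e.g. "false", "true", "satire", ...
    fact_check_url: Optional[str] = None   # link shown to users as a disclaimer

queue: list[FlaggedPost] = []

def review(post: FlaggedPost, rating: str, article_url: str) -> None:
    """Record a fact-checker's rating and attach their explanatory article."""
    post.rating = rating
    post.fact_check_url = article_url

# Example: a post is flagged, queued, and later rated by a third-party checker.
post = FlaggedPost("p1", "A tampon can treat a stab wound", flagged_by="user report")
queue.append(post)
review(post, "false", "https://fullfact.org/example-check")  # hypothetical URL
```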

For a number of posts, the Full Fact team struggled to find the right category for flagged content, something a computer would likely struggle with even more.

Avoiding the “serious negative side effects” of AI

Full Fact said that the current categories used to rate flagged content are too broad for AI to learn from accurately “without serious negative side effects”, given that machine learning depends on a large amount of good-quality data to improve.

Four of Full Fact’s recommendations were to add new ratings to tag flagged content.

Full Fact also had difficulty in dealing with satirical posts, finding that “many people may have misunderstood [satirical posts] as being real”, such as one story joking that the BBC was adding Arabic subtitles to EastEnders. This level of nuance is another challenge that Facebook will have to overcome when implementing machine learning.

Full Fact said it would welcome clarity from Facebook on how it plans to use machine learning, but recognised that the discussion might have to take place in private to prevent the system from being exploited.

Full Fact is part of a group involving academics and big tech companies, such as Facebook, Google, Microsoft and Twitter, set up to explore the ways AI can be used to improve trustworthiness online.

Full Fact has also developed its own automated fact-checking technology, which won an award at Vodafone’s tech for good awards.

Facebook’s response

Other recommendations to Facebook included expanding the Third Party Fact Checking programme to Instagram and sharing more data with fact-checking organisations about the impact of their work.

Full Fact said that Facebook has not interfered with the charity’s work in any way. The charity was paid $171,800 for its fact-checking work from January to June 2019.

Facebook said it welcomed the feedback from Full Fact:

“We are encouraged that many of the recommendations in the report are being actively pursued by our teams as part of continued dialogue with our partners, and we know there’s always room to improve.

“This includes scaling the impact of fact-checks through identical content matching and similarity detection, continuing to evolve our rating scale to account for a growing spectrum of types of misinformation, piloting ways to utilise fact-checkers’ signals on Instagram and more. We also agree that there’s a need to explore additional tactics for fighting false news at scale.”
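Facebook does not say how that matching works. As a hedged illustration only, near-duplicate detection is commonly done by comparing text vectors with cosine similarity; the vectoriser, threshold and example posts below are assumptions, not Facebook’s method.

```python
# Illustrative near-duplicate matching: if a new post closely resembles a post that
# has already been fact-checked, the existing rating could be applied at scale.
# The TF-IDF representation and the 0.8 threshold are assumptions for this sketch.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

checked_posts = [
    "A tampon can treat a stab wound",                   # already rated 'false'
    "The BBC is adding Arabic subtitles to EastEnders",  # already rated 'satire'
]
new_post = "Doctors say a tampon will treat stab wounds"

vectoriser = TfidfVectorizer().fit(checked_posts + [new_post])
scores = cosine_similarity(
    vectoriser.transform([new_post]),
    vectoriser.transform(checked_posts),
)[0]

best = scores.argmax()
if scores[best] > 0.8:  # assumed threshold
    print(f"Close match with checked post {best}: reuse its rating")
else:
    print("No close match: send to the human fact-checking queue")
```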


Read more: Facebook FTC fine in numbers: How does it compare to past fines, and how will it impact the tech giant?