Facebook’s Oversight Board needs technologists to grapple with the growing use of artificial intelligence (AI) in content moderation, according to board member and former Guardian editor-in-chief Alan Rusbridger.
First announced in 2018, the independent body is designed to advise the social media giant on content moderation decisions. It’s made up of 20 members hailing from diverse fields including law, journalism, academia and human rights.
It officially began hearing cases in October 2020 and has so far overturned four moderation decisions made by Facebook.
For years Facebook has faced a barrage of criticism for the way it handles content moderation, most recently including Covid-19 misinformation. It has increasingly turned to AI to flag and remove harmful content at scale.
Rusbridger told the House of Lords Communications and Digital Committee that it was “essential” that Facebook was more transparent about these algorithms.
He added that the Facebook Oversight Board “needs more technological people onboard” to fully understand how AI is applied in content moderation.
“As a board, we’re going to have to get to grips with that, even if that takes many sessions with coders speaking very slowly,” he said.
He added that he was concerned by the increased use of machine learning – a subset of AI in which systems learn to make decisions from data rather than following explicit instructions – for moderation, and that if left unchecked it posed a threat to free speech.
In November 2020 Facebook put machine learning in charge of its moderation queue, which sends the cases it deems most urgent to an army of 15,000 human moderators to sift through. Before that, moderators dealt with content moderation tickets chronologically.
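The change described above – from first-come-first-served review to severity-ranked review – can be sketched as the difference between a chronological queue and a priority queue. The scoring function below is a hypothetical stand-in: Facebook has not disclosed the features or weights its model actually uses.

```python
import heapq

def score(ticket):
    # Hypothetical severity model: Facebook's real scoring signals are
    # not public, so a few illustrative tag weights stand in here.
    signals = {"violence": 3.0, "self_harm": 3.0, "spam": 0.5}
    return sum(signals.get(tag, 1.0) for tag in ticket["tags"])

def chronological_queue(tickets):
    # Pre-November-2020 behaviour: moderators review in order of arrival.
    return sorted(tickets, key=lambda t: t["reported_at"])

def ranked_queue(tickets):
    # Post-change behaviour: the cases deemed most urgent reach
    # moderators first (negated score gives a max-heap via heapq).
    heap = [(-score(t), t["reported_at"], t) for t in tickets]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

tickets = [
    {"id": 1, "reported_at": 100, "tags": ["spam"]},
    {"id": 2, "reported_at": 200, "tags": ["violence"]},
    {"id": 3, "reported_at": 300, "tags": ["self_harm", "violence"]},
]

print([t["id"] for t in chronological_queue(tickets)])  # oldest report first
print([t["id"] for t in ranked_queue(tickets)])         # most severe first
```

The same set of reports reaches human moderators either way; only the ordering – and therefore how quickly the most harmful content is seen – changes.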
Rusbridger spoke of a “tension” between the nuance of human decision making and the limited ability of algorithms to apply the reasoning behind the Oversight Board’s rulings.
“I’ve no reason to think the machines won’t get better… but I think we need more technological people on board who can give us independent advice from Facebook. Because I think this is going to be a very difficult thing to understand how this artificial intelligence works.”
Kate Klonick, assistant professor of law at St John’s University, told the committee that there should “absolutely” be more transparency around Facebook’s moderation algorithms.

However, she pointed out that it’s easy to blame algorithms when it is ultimately people who are behind them.
“Algorithms are not one type of thing and also they are human, they are written by humans… they are formed by data that is generated by humans,” she said.
She gave the example of Facebook’s struggle to take down video footage of the Christchurch mosque shootings in 2019, because people manipulated the video to evade the social network’s AI moderation systems.
“I’m not trying to make Facebook some hero in this story – there’s always something more that could be done,” she explained.
“But an algorithm is kind of like a rule, as soon as there’s a rule and you know what the rule is there is always going to be ways to break it or manipulate or twist it to your means.”
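One illustrative mechanism behind the Christchurch example Klonick describes is exact content matching. Facebook’s actual detection stack (which reportedly combines perceptual hashing and ML classifiers) is not public, but the sketch below shows why naive exact hashing alone fails: any trivial edit to a re-uploaded file produces a completely different fingerprint.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # Exact hashing: changing a single byte changes the whole digest,
    # so a trivially edited re-upload no longer matches a blocklist.
    return hashlib.sha256(data).hexdigest()

original = b"frame-bytes-of-a-flagged-video"
altered = b"frame-bytes-of-a-flagged-video."  # minimally edited re-upload

print(fingerprint(original) == fingerprint(altered))  # prints False
```

This is the “know the rule, break the rule” dynamic in miniature: once uploaders know matching is exact, tiny manipulations defeat it, which is why platforms move to fuzzier perceptual matching and learned classifiers.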
Companies such as Facebook have to deal with these “hard choices” all the time, she said, but “some aren’t making the right balance” in how they invest in dealing with it.
“We’re not there to please Facebook”
Rusbridger said that the Oversight Board will eventually ask to see Facebook’s moderation algorithm.
“Whether we can understand it when we see it that’s a different matter,” he said.
When asked if the salaries paid to Oversight Board members would create a conflict of interest when they came up for reappointment, Rusbridger said: “We’re not there to please Facebook. My experience of my colleagues is they are quite bolshie, they don’t want to have anything to do with Facebook.”
“We turfed Facebook out of our meetings when we realised some people were sitting there.”
He added that he had yet to meet Facebook chief Mark Zuckerberg or any of the company’s other executives.