As countries around the world work to contain the spread of COVID-19, social media platforms are faced with a rapidly spreading problem of their own: misinformation.
From conspiracy theories linking coronavirus to the rollout of 5G, to false claims that cocaine protects against it, fake news has proliferated online — a spread the World Health Organisation has described as an “infodemic”.
But how do social media platforms, which have faced substantial criticism in the past for their role in the spread of fake news and misinformation, respond to this unprecedented situation?
According to research by App Annie, mobile usage in China increased 30% compared with 2019, while usage in Italy rose 11%. As people increasingly rely on their devices for work and communication, they also risk greater exposure to misinformation surrounding the coronavirus outbreak.
Meanwhile, according to Sky News, a Facebook post containing claims that a coronavirus vaccine already exists was shared more than 500 times, highlighting the urgent need to monitor and control the proliferation of such content.
Yuval Ben-Itzhak, CEO, Socialbakers:
“Over the last few months, since the news of the coronavirus broke, social media platforms have been flooded with misinformation, exaggerations and conspiracy theories about COVID-19. Because of the scale of the issue – and because the platforms have wisely sent many of their human reviewers home to take social distance – the platforms, from YouTube to Twitter, have all announced that they will be leveraging AI in an effort to tackle the spread of misinformation.
“At the start of this week, Google, Facebook, Microsoft, Twitter, YouTube, Reddit, and LinkedIn issued a joint statement announcing that they would be combining efforts to tackle the problem of misinformation.”
Tackling the spread of coronavirus misinformation
In the wake of this, Facebook has begun to remove misleading or conspiratorial content, such as false cures and claims about the spread of the virus that have been debunked by the WHO or other health organisations.
The platform has also banned ads that exploit the situation, such as those selling medical masks or claiming to offer a cure for coronavirus, and has given the WHO access to free advertising.
On Instagram, hashtags that are being used to spread misinformation have been blocked or restricted.
Yesterday, WhatsApp announced the launch of the WhatsApp Coronavirus Information Hub in partnership with the World Health Organization, UNICEF, and UNDP, designed to provide “general tips and resources for users around the world to reduce the spread of rumors and connect with accurate health information”.
Baybars Orsek, Director of the International Fact-Checking Network (IFCN), commented:
“The timely donation from WhatsApp will help the fact-checks published by the CoronaVirusFacts Alliance to reach wider audiences and, in consequence, help people sort facts from fiction during this avalanche of information that WHO called an ‘infodemic’.”
“The International Fact-Checking Network also looks forward to discovering ways to understand the spread of health related hoaxes on WhatsApp in different formats and to make tools available for fact-checkers to detect and debunk misinformation on the messaging app.”
A YouTube spokesperson told the BBC that it was committed to providing “timely and helpful information”.
Earlier this week, Twitter updated its safety policy to prohibit tweets that “could place people at a higher risk of transmitting COVID-19”, with misleading tweets, or those going against expert guidance, removed from the site.
Jake Moore, Cybersecurity Specialist at ESET said:
“Once again, this is excellent advice from Twitter, and they are usually one of the front runners when it comes to trends like this that others will follow. Hopefully, if there is anything good that can come out of this situation, it will be that people start to think before they tweet from now on. Incorrect advice can have incredibly damaging effects socially, but sometimes a crisis can bring about a tipping point where people begin to take social and online safety more seriously.”
“Setting the precedent for the future of social media censoring”
Facebook, which owns both WhatsApp and Instagram, has faced criticism for its inaction on the issue of misinformation. According to Avaaz, fake news related to the US presidential election was viewed 86 million times between September and November of last year.
Although Facebook appears to be implementing its strategy to limit coronavirus misinformation swiftly, with information from the World Health Organisation now the first result when searching for coronavirus, it encountered problems yesterday after a software bug caused posts from legitimate sources of information to be labelled as spam.
Flagging misinformation becomes more challenging when it comes to encrypted private WhatsApp messages or private Facebook groups, making these environments potential hotbeds for false claims. According to Politico, WhatsApp risks becoming “a key arena for the spread of misinformation” after clampdowns on Facebook.
Overall, the current situation will be a test of how social media platforms respond to the issue of coronavirus misinformation, and could leave them better prepared to deal with its spread in the future.
Dr Iain Brown, Head of Data Science, SAS UK & Ireland believes that AI could play a role in handling the situation:
“As millions shift to working from home, social media platforms are turning to automation and AI to moderate content and keep billions of users safe. As a slew of coronavirus misinformation hits social media, it’s more important than ever that these AI-based decisions can tackle the spread of fake news quickly and accurately.
“AI is capable of this task, equipped to trace the source of problematic material (often generated by malicious AI counterparts), subsequently alerting companies and helping them develop a counter-bot. Machine learning and CAPTCHAs can also help social media companies to spot the tell-tale signs of fake news.
“The implementation of these technologies in a time of crisis sets the precedent for the future of social media censoring, as user numbers continue to boom worldwide. This is a test for both AI and the workforce, as they work side by side to strike the balance between taking down problematic material and stifling truth. Ultimately, AI decisions must be made in the interest of humanity, part of a greater campaign to encourage fact-checking and truth.”