In 2018, a video in which the face of actor Nicolas Cage was superimposed onto footage of President Donald Trump went viral online. What may seem like a harmless and amusing video in fact highlights the growth of a sinister new form of misleading content: deepfake videos.
These AI-generated fake videos make subjects appear to say things they have never said, and they have the potential to worsen the problem of disinformation and trigger political conflict.
With Facebook’s role in the spread of disinformation online prompting many to call for the platform to do more to combat the issue, the company has taken a stand against deepfake videos.
In a blogpost by Monika Bickert, the company’s vice president of global policy management, Facebook said that it is addressing what it describes as a “significant challenge for our industry and society as their use increases”.
Through partnerships with “academia, government and industry” the company is updating its policy on “misleading manipulated videos” or deepfakes, using a combination of AI tools and fact-checkers to identify deepfake content.
Material will be removed if it meets two criteria: it has been edited in such a way as to mislead viewers into thinking the subject said something they did not, and it uses AI or machine learning in a way that “merges, replaces or superimposes content onto a video”.
However, Facebook has said this does not apply to “content that is parody or satire, or video that has been edited solely to omit or change the order of words”. Videos identified as false or partly false will carry a warning alerting viewers to this.
Last year, Facebook launched its Deepfake Detection Challenge, encouraging people “to produce more research and open source tools to detect deepfakes”.
Facebook deepfakes policy: The “illusion of progress”
According to deepfake research and detection company Deeptrace, the number of deepfake videos jumped by at least 84% in a year between 2018 and 2019. O’Reilly identified “machine deception” as one of its key AI trends to watch in 2020.
However, although deepfakes are undoubtedly a growing concern that must be addressed, some have criticised Facebook’s strategy.
Facebook’s commitment to fully addressing the issue of disinformation on its platform has been called into question. According to the Washington Post, the decision not to remove misleading video content that has been doctored with video editing software but is not computer-generated, such as the video edited to make House Speaker Nancy Pelosi appear drunk that went viral last year, fails to address the wider problem.
According to the Washington Post, Bill Russo, spokesman for 2020 presidential candidate Joe Biden, said that the decision created the “illusion of progress” without addressing the core issue.
Others have asked how reliably the company will be able to detect deepfakes. Last year, researchers from the USC Information Sciences Institute developed a tool that can spot deepfakes with 96% accuracy based on subtle facial movements. However, with the ability to produce convincing fakes advancing rapidly, keeping up will be a challenge.
In fact, Hao Li, associate professor at the University of Southern California, told The Verge that at some point in the future it is “likely that it’s not going to be possible to detect [AI fakes] at all”, so new strategies are needed.
A bold claim from Facebook
Jake Moore, cybersecurity specialist at ESET, explains that the software used to spot deepfakes is still in its early stages:
“Deepfakes are increasingly more difficult to spot and we desperately require the help from artificial intelligence. Fake videos of famous or powerful people can be extremely manipulative, causing extremely damaging effects in some cases. It is a bold claim from Facebook to ban all such false videos from their platform, as the software used to recognise them is still in its immature phase and requires more research to be effective.”
He believes that educating the public about the risk of false information online is also important in tackling the problem:
“Most videos are altered in some way before they land on social media so there is the potential of teething problems with false positives, or even letting a number of genuine deepfakes slip through the net. Not only do we need better software to recognise these digitally manipulated videos, we also need to make people aware that we are moving towards a time where we shouldn’t always believe what we see.”