In an era of fake news, it is now widely known that the spread of false information is rampant online. Until recently, however, that misinformation has largely been confined to the written word.
In what it predicts will be “a pivotal year in technology, society, health and citizens’ rights”, Nesta has forecast that “deep fake” video will have a significant impact on society next year, one of its ten predictions for 2019.
AI-based technology has made it possible to make realistic videos that are nearly impossible to distinguish from the real thing with the naked eye. Lifelike computer-generated graphics can be used to mislead the public, by appearing to show video evidence of things that never really happened.
In 2017, researchers at the University of Washington in the US demonstrated how the technology could be manipulated, by using a deep learning algorithm to mimic President Obama’s facial expressions and voice, creating videos of the former president appearing to make speeches using words from old interviews.
More recently, Chinese news agency Xinhua unveiled the world’s first AI-powered news reader, almost indistinguishable from its human counterpart.
And at Halloween, How to Generate (Almost) Anything collaborated with deep fakes expert Deep Homage to swap US President Donald Trump into classic films such as Young Frankenstein, pictured above.
Deep fake dangers: How fake video could be the next stage in fake news
Although the technology has a number of legitimate use cases, Nesta predicts that, in nefarious hands, it could drive the next stage of fake news.
The innovation foundation believes that the malicious impersonation of a politician or celebrity has the potential to spark a geopolitical incident.
The power of fake video was highlighted last month, when footage of CNN journalist Jim Acosta was manipulated to make him appear to assault a White House aide.
If fake videos are allowed to circulate online, they could cause substantial damage to diplomatic relations between countries, as well as accelerating the spread of false information.
The technology is developing at a rapid rate, meaning that it is now difficult even for digital forensics tools to spot a fake video. Such videos are also becoming easier and cheaper to create, so it may soon be the case that anyone with a computer can produce them.
However, researchers are working on methods to spot fake videos. Altered pixels, or biological signals such as irregular heartbeats or unnatural blinking patterns, can be used to determine whether a video has been faked or manipulated, a vital tool in slowing the spread of false information.
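To illustrate the idea behind blink-based detection, the sketch below is a deliberately simplified, hypothetical heuristic (not any specific research tool): it assumes a per-frame "eye openness" score has already been extracted by a facial-landmark detector, counts blinks, and flags footage whose blink rate falls outside a typical human range, since early deep fakes often blinked far less than real people.

```python
# Simplified, illustrative sketch of blink-rate anomaly detection.
# Assumes each frame has already been reduced to an "eye openness" score
# (e.g. an eye aspect ratio from a landmark detector); names and
# thresholds here are hypothetical.

def count_blinks(eye_openness, threshold=0.2):
    """Count blinks as transitions from open eyes to closed eyes."""
    blinks = 0
    closed = False
    for score in eye_openness:
        if score < threshold and not closed:
            blinks += 1
            closed = True
        elif score >= threshold:
            closed = False
    return blinks

def looks_suspicious(eye_openness, fps=30, normal_range=(10, 30)):
    """Humans typically blink roughly 10-30 times per minute.
    Flag the clip if its blink rate falls outside that range."""
    minutes = len(eye_openness) / fps / 60
    rate = count_blinks(eye_openness) / minutes
    return not (normal_range[0] <= rate <= normal_range[1])
```

Real detectors are far more sophisticated, combining many such signals with learned models, but the principle is the same: fakes leave statistical traces that humans do not.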
Celia Hannon, Director of Explorations at Nesta, said:
“Nesta’s predictions for 2019 illustrate just how rapidly technologies which would once have been dismissed as Science-Fiction are set to change our everyday lives. Interacting with machines will become the ‘new normal’ – from sitting an exam to seeking legal advice – but we need to tread carefully.
“While these technological advances promise greater convenience and efficiency, many also risk increasing inequality. We will need to find ways to distribute the benefits of progress more widely and protect citizen rights as Artificial Intelligence becomes embedded in the world around us.”