Do you know what a deepfake is? If not, you’re not alone. A survey has revealed that 72% of UK residents have never heard of the video manipulation technique.
Deepfakes use artificial intelligence to realistically superimpose one person’s face onto another’s body in video. At the moment, the tech is mostly used to create fake celebrity pornography and revenge porn. In recent years, deepfakes have also been made of political figures, such as this example of ‘Barack Obama’ warning of the dangers of deepfakes.
But among the 2,000 people in the UK surveyed by biometric facial authentication firm iProov, identity fraud emerged as the biggest concern (42%) when it comes to deepfakes.
Audio deepfakes are already being used in phone scams, and as they become more sophisticated there is a danger that they could be used to imitate the voices of CEOs and business leaders.
Who is responsible for tackling deepfakes?
Given that 70% of respondents said they would not be able to tell the difference between a deepfake and a real video, there is growing pressure to create methods to spot them.
But who should be responsible for taking down deepfakes? Just over half (55%) of Brits said social networks should take responsibility for combatting them.
However, when a doctored video of US House of Representatives speaker Nancy Pelosi appeared to show her slurring her words earlier this year, Facebook initially refused to take it down.
Facebook chief Mark Zuckerberg later said the social media firm should have flagged it more quickly. In September, Facebook announced it was teaming up with Microsoft to launch a $10m contest for researchers to better detect deepfakes.
Detecting deepfakes is also in the interest of online platforms competing in the attention economy: 72% of respondents said they would rather use a platform with deepfake safeguards in place.
But online platforms face a tough balancing act. They must ensure that harmless videos, such as the Bill Hader/Tom Cruise video below, do not get caught up in a deepfake net trawling for harmful political deepfakes.
Once given a full definition of deepfakes, 28% of Brits said they viewed them as harmless. However, with a likely general election in the UK on the horizon, there is scope for misuse of deepfakes to sway political opinion.
Andrew Bud, founder & CEO of iProov, said: “Awareness is the first defence against any cyber-security threat, as we’ve already seen with attacks like phishing and ransomware. Deepfakes, however, represent a whole new kind of danger to businesses and individuals.
“Technology also has a big role to play in combating the threat, yet if the vast majority of people in the UK have such little awareness of deepfakes right now, they simply cannot begin to prepare themselves as they need to.”