Films like Minority Report used to be firmly the stuff of an abstract sci-fi future. In the futuristic thriller, Tom Cruise's protagonist is accused of murder on the strength of a psychic vision, which law enforcement treats as hard 'video' evidence. In effect, the main character's identity is compromised and redefined by footage that is neither accurate nor real.
The rise of deepfakes
The film explores free will against determinism, but it's not quite as abstract as it once seemed. Watched now, it could easily be a warning against the rise of deepfake technologies. Deepfakes are a relatively new phenomenon, yet their potential to distort and manipulate identity could be limitless. The technology comes hot on the heels of the exponential growth of fake news, which has already made separating reality from fiction difficult. A 'gut feeling' is no longer enough to ensure trust, for either the companies or the consumers at risk.
What started out as quirky videos of Steve Buscemi's face speaking from Jennifer Lawrence's body has morphed into a much more dangerous issue. There is much debate and speculation around the (mis)use of AI-driven video to impersonate politicians – like the clip of Boris Johnson and Jeremy Corbyn endorsing each other in the UK general election currently underway. This is only the most recent example of unbelievably 'real' deepfakes that can be used as propaganda – if not identified and called out as such in time.
We'll likely see more examples of deepfakes facilitating the fabrication of false stories, used as convincing photo, video or audio evidence to make it appear that someone said or did something – fraud that is difficult to spot with the naked eye.
A new form of fraud
However, as deepfake technology matures, driven by AI, big data and advances in the digital manipulation of image and sound, a much more concerning use of the phenomenon is emerging. Deepfake techniques are being used to produce nearly flawless falsified digital identities and ID documents – and the range and quality of the tools available to fraudsters is the underlying driver. Technology once available only to a few industry specialists, who used it for legitimate purposes, has become public property.
Identity fraud is an age-old problem, but it was inevitable that innovation would at some point take it to the next level. Banks and financial institutions already employ thousands of people to stamp out identity theft, but to fight the ever-growing threat of these new forms of fraud, businesses must look to technology to spot what the human eye can't always see.
Technology that can verify identities – of customers, partners, employees, or suppliers, for example – is not new. It’s become a must-have, especially in regulation-heavy industries like banking.
The good news is that the ability to identify deepfakes will only improve with time, as researchers experiment with AI trained to spot even the deepest of fakes, using facial recognition and, more recently, behavioural biometrics. By creating a record of how a person types and talks, the websites they visit, and even how they hold their phone, researchers can build a unique digital 'fingerprint' to verify a user's identity and prevent unauthorised access to devices or documents. Using this technique, researchers aim to crack even the most flawless of deepfakes by pitting one AI engine against another, in the hope that the cost of 'superior AI' will be prohibitive for cybercriminals.
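To make the behavioural-biometrics idea concrete, here is a minimal, illustrative sketch of one common signal: typing rhythm. All function names and the timing data are hypothetical, and a real system would combine many more signals with a learned model rather than a fixed similarity threshold.

```python
import math

def typing_profile(keystroke_times):
    """Build a toy behavioural 'fingerprint': the intervals (in seconds)
    between successive keystrokes in a typed phrase."""
    return [b - a for a, b in zip(keystroke_times, keystroke_times[1:])]

def cosine_similarity(a, b):
    """Compare two interval vectors; 1.0 means identical rhythm."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def matches_enrolled(enrolled, candidate, threshold=0.95):
    """Accept the session only if the typing rhythm closely resembles
    the profile captured at enrolment (threshold is illustrative)."""
    return cosine_similarity(enrolled, candidate) >= threshold

# Hypothetical keystroke timestamps for the same short phrase:
enrolled = typing_profile([0.00, 0.12, 0.31, 0.45])   # genuine user, day one
same_user = typing_profile([0.00, 0.11, 0.29, 0.44])  # genuine user, later
impostor = typing_profile([0.00, 0.40, 0.45, 0.95])   # different rhythm

print(matches_enrolled(enrolled, same_user))  # similar rhythm: accepted
print(matches_enrolled(enrolled, impostor))   # dissimilar rhythm: rejected
```

Production systems layer dozens of such signals (touch pressure, device orientation, navigation habits) and score them continuously, but the underlying principle is the same: behaviour is hard for a deepfake to imitate.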
Meanwhile, the tech industry is testing other safeguards. Apple has expanded Near Field Communication (NFC) support on its devices so they can unlock and read data stored in the security chips of passports. Once unlocked, the chip gives the device access to the biometric data and high-resolution photo of the document's owner. The capability is now being used by the UK Home Office to let European citizens apply for Settled Status through both iPhone and Android apps.
For the deepfake antidote to work as intended, biometrics will need to keep pace with cutting-edge innovation. Biometric technologies will always be a preventative measure – the closest thing to a real indicator of who you are. Overall, adopting biometrics for digital identity verification, or for account and device sign-in, is a reliable security measure.
Faking digital identities
Websites like thispersondoesnotexist.com can generate incredibly lifelike images, showing how fraudsters can become convincing actors within the digital ecosystem. This means fake digital identities could affect nearly every organisation that operates a digital onboarding solution – which, in the age of the instant, real-time experience, is any company that wants to win digitally savvy customers.
Forewarned is forearmed, as they say. To minimise potential threats before new regulations arrive to curb them, businesses should look to technology that verifies customer identities through AI and machine learning at the point of onboarding. These technology providers should also stay up to date on the most pressing risks and the techniques circulating among malicious threat actors.
Additionally, from an ethical perspective, companies must be able not only to capture various customer biometrics, but also to determine how best to safeguard that information. Better safe than sorry – no organisation wants a data leak, never mind one that makes it the next source of data used to create deepfakes.
Ultimately, the potential ramifications of deepfakes should act as a cautionary tale. The threat we face should be a springboard for action for businesses, governments and consumers alike. Only then can we stamp out the sci-fi future before it even begins.