Synthetic video is a systemic threat to the way we prove online identity. As digital self-service becomes the default channel for high-value transactions and account opening, a new wave of identity proofing solutions is required to overcome the challenges posed by ‘Deep Fakes’ and to discern what is genuine from what is generated.
The wide availability of deep learning software has consumerised the ability to create fake images, video, and audio. Tools such as FakeApp (face-swap video) and Lyrebird (voice synthesis) allow anyone with a smartphone to produce Deep Fakes: a photograph of a face can be knitted onto pre-existing footage, or an entirely new video generated with photo-realistic facial movement. Initially these were used as a kind of digital entertainment, placing celebrities’ faces into well-known video footage. This took a more sinister turn when some users began combining the technique with pornographic content, and commentators recognised the privacy implications of the emerging technology. Subsequently, when a realistic-looking and realistic-sounding speech by Barack Obama was manufactured, media organisations began to take notice of the potential harm Deep Fake video could do to political discourse in the era of ‘fake news’.
However, few commentators have considered the extent to which our digital identities can be replicated using Deep Fake, and the potential impact on the system of trust that underpins finance, online commerce, and national identity programmes.
Using publicly available information, such as a victim’s Facebook profile photo, a digital identity can be recreated in the form of a speaking, blinking, head-moving video. This ‘fake you’ represents a new and growing threat to identity proofing methods that rely on video footage of a person performing different activities, for example matching the face to an ID card photograph. Digital bank account opening often relies on such methods for KYC (Know Your Customer) checks, whether in a digital video conversation with a remote agent or in a fully automated solution. Even services that rely on a random sequence of actions displayed on the handset (‘Read these words, blink twice, look left’) can now be effectively spoofed with a feed of artificial video, generated automatically and in real time from the instructions as they arrive on the handset. Exploits have been published in which attackers hijack the user’s device, bypass the camera and microphone, and inject the generated video directly into the stream sent from the client device to the authentication service.
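The structural weakness of the action-challenge pattern can be sketched as a simple challenge-response loop. All names here are illustrative, not taken from any real product: the point is that the challenge necessarily arrives on the handset in the clear, so an attacker who controls the device can synthesise a matching video for each requested action.

```python
import random

# Illustrative action vocabulary; real systems vary.
ACTIONS = ["blink twice", "look left", "look right", "smile", "nod"]

def issue_challenge(num_actions=3):
    """Server side: pick a random sequence of actions for the user to perform."""
    rng = random.SystemRandom()  # unpredictable, OS-backed randomness
    return [rng.choice(ACTIONS) for _ in range(num_actions)]

def attacker_renders(challenge):
    """Stand-in for a real-time Deep Fake generator. Because the challenge
    is readable on the compromised handset, the attacker can produce a
    video performing exactly the requested actions, in order."""
    return [f"synthetic video of victim's face performing: {action}"
            for action in challenge]

challenge = issue_challenge()
fake_feed = attacker_renders(challenge)
# The server sees footage matching every instruction it sent, so the
# check passes even though no live user was ever present.
```

Making the action list longer or stranger only raises the bar for legitimate users; the attacker's generator simply reads the longer list.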
To defeat the challenge of fake video, we cannot be caught in the user experience death spiral of making instructions ever harder to follow, in the hope that computers can’t keep up. Firstly because computers will keep up, and secondly because real humans won’t. As instruction-following becomes more onerous for humans, the only users passing authentication will be the Deep Fake bots.
Instead, banks need an entirely new approach to proving a remote user’s genuine presence, which does not force users to follow a series of complex instructions. iProov is the only device-independent method in the world which can prove a user’s genuine presence using controlled illumination.
This method allows us to detect and protect against highly sophisticated attacks, including artificial video, replay, and masks. For each authentication attempt, our servers generate a unique one-time colour sequence code and send it to the user’s device just as the authentication begins. The user’s screen then flashes with that unique colour sequence, the Flashmark, for 2.5 seconds. Flashmark colour sequences are never repeated in a user’s lifetime, creating one-time biometrics which cannot be reused and are worthless if stolen. Critically, Flashmarked iProov captures contain such complexity in the markers we track that the computing requirements to generate a fake Flashmarked video in real time are beyond the reach of any potential attacker. At the same time, users have no complex instructions to follow and can complete an iProov in a single action of holding their phone while the colours appear briefly on the screen, resulting in high first-time journey completion rates.
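The server-side flow described above, a one-time challenge issued per attempt and consumed on verification, can be sketched roughly as follows. This is a minimal illustration of the one-time-sequence idea only, not iProov’s actual implementation: the frame rate, the in-memory session store, and the exact-match comparison are all assumptions (a real system would match reflected illumination statistically, not colour-for-colour).

```python
import secrets

FRAME_RATE = 24   # assumed display/capture rate (illustrative)
DURATION_S = 2.5  # flash duration stated in the text

def generate_flash_sequence(num_frames=int(FRAME_RATE * DURATION_S)):
    """Generate a one-time colour sequence, one RGB triple per frame.

    A cryptographically secure RNG makes sequences unpredictable and,
    for practical purposes, never repeated."""
    return [(secrets.randbelow(256),
             secrets.randbelow(256),
             secrets.randbelow(256)) for _ in range(num_frames)]

# Hypothetical in-memory store of sequences issued per session.
issued = {}

def start_authentication(session_id):
    """Issue a fresh sequence for this attempt and remember it."""
    seq = generate_flash_sequence()
    issued[session_id] = seq
    return seq  # sent to the device, which flashes the screen with it

def verify_capture(session_id, observed_sequence):
    """Check the illumination recovered from the captured video against
    the sequence issued for this session. pop() consumes the sequence,
    so a stolen capture cannot be replayed."""
    expected = issued.pop(session_id, None)
    return expected is not None and observed_sequence == expected
```

Because the challenge is consumed on first use, even a perfect recording of a successful attempt is worthless against any later one.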
The threat of artificial video to financial systems is real and growing, and security leaders are only now recognising the risk of large-scale identity theft. As Deep Fake video exploits become more commonplace, iProov is ready to help retail banking platforms become more resilient to this new wave of Deep Fake attacks.