Deep fake video and image-altering technologies have evolved significantly in recent years, and are set to become the latest weapon in a war of political disinformation.

The image is so distressing that few mainstream media channels will publish it without a warning.

A grinning soldier holds a bloody knife at a child’s throat. It’s unclear whether the child’s throat has already been cut, or is about to be. The child is holding a lamb. A text caption underneath the image reads: ‘Don’t be afraid, we are coming to bring you peace!’

The image is certainly intended to be shocking, but perhaps more worrying is its source: an official Chinese government Twitter account, posted by Chinese foreign ministry spokesman Lijian Zhao.

The image is a reference to the recent outrage caused by the publication of an Australian Defense Force (ADF) inquiry, indicating that around 25 Australian soldiers may be guilty of murdering 39 Afghan prisoners and civilians in Afghanistan between 2009 and 2013.

The Australian government hasn’t sought to cover up the results of the inquiry, but it has indeed taken offense at the Chinese official’s Twitter image – and not just because of its lack of diplomacy, or the fact that the insult has further damaged the already-strained relations between the two countries. The photo itself is a fake, an entirely doctored image clearly intended to spark an emotional response. A new, and likely dangerous, practice in international diplomatic bear-baiting has just reared its head.

2020: the year of the deep fake

Doctored photos, deep fake video and other image-altering technologies have become a hallmark of the Internet age, and this episode serves to underscore the important role these technologies are likely to play in the spread of disinformation on the international stage. And the problem is likely to get worse before it gets better.

It’s likely that 2020 will go down in history as a remarkable year for many reasons, but certainly one milestone moment for the tech sector was when Twitter decided to start taking responsibility for the spread of misinformation, disinformation and fake news, by labelling questionable or erroneous claims in tweets as such.

In May 2020, Twitter put its first fact-check warning on a tweet posted by US President Donald Trump, and has continued to do so throughout the recent presidential election and up to this day.

Users viewing tweets labelled in this way are often advised to visit a fact-checking site to inform themselves better. But fact-checking the veracity of images is very difficult, and about to get harder.

The source, or multiple sources, of images that have been altered by photo editing and superimposed imagery are all but impossible for most users to trace. Without access to the original material, and faced with ever more capable editing tools, there is simply no real equivalent of fact-checking for imagery.
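To illustrate why, consider how an automated check might try to match a circulating picture against a known original. One common technique is perceptual hashing. The short Python sketch below is purely illustrative: it assumes the third-party Pillow and imagehash packages, and the file names original.jpg and doctored.jpg are placeholders. Even a modest edit typically pushes the hash distance past normal matching thresholds, so a reverse lookup against an archive of originals quietly fails.

```python
# Minimal sketch (illustrative only): comparing perceptual hashes of an
# original photo and a doctored copy. File names are placeholders.
from PIL import Image
import imagehash

original = imagehash.phash(Image.open("original.jpg"))
doctored = imagehash.phash(Image.open("doctored.jpg"))

# Hamming distance between the two 64-bit hashes; cropping, recolouring or
# compositing quickly pushes this past typical matching thresholds (~10 bits).
distance = original - doctored
print(f"hash distance: {distance} bits")
if distance > 10:
    print("no confident match against known source imagery")
```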

New technology is a blessing and a curse

And then there’s deep fake technology: AI-aided software that can insert a person’s face or body, or both, into a completely unrelated video, to give the impression that the person was involved in, or present at, that scene.

To be sure, deep fake technology has already been around for several years, and has even made it into popular app software, such as the 2018-launched FakeApp, which allowed users to swap the faces of friends in a short section of video and share the comic result.
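How does the face-swapping itself work? Production deepfakes train autoencoder or GAN models on large numbers of frames, but the underlying idea of compositing one person’s face into someone else’s footage can be sketched with ordinary tools. The Python snippet below is a rough illustrative approximation only, not FakeApp’s actual method; the file names are placeholders and it relies on OpenCV’s stock face detector and blending routine.

```python
# Minimal sketch (illustrative only, not a real deepfake pipeline): paste a
# face detected in one image onto a face detected in another frame.
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

source = cv2.imread("source_face.jpg")   # face to paste in (placeholder file)
target = cv2.imread("target_frame.jpg")  # frame to alter (placeholder file)

src_faces = detector.detectMultiScale(cv2.cvtColor(source, cv2.COLOR_BGR2GRAY))
tgt_faces = detector.detectMultiScale(cv2.cvtColor(target, cv2.COLOR_BGR2GRAY))

if len(src_faces) and len(tgt_faces):
    sx, sy, sw, sh = src_faces[0]
    tx, ty, tw, th = tgt_faces[0]
    # Resize the source face to fit the target face region, then blend it in
    # so lighting and edges roughly match the surrounding frame.
    face = cv2.resize(source[sy:sy + sh, sx:sx + sw], (tw, th))
    mask = 255 * np.ones(face.shape, face.dtype)
    centre = (tx + tw // 2, ty + th // 2)
    fake = cv2.seamlessClone(face, target, mask, centre, cv2.NORMAL_CLONE)
    cv2.imwrite("fake_frame.jpg", fake)
```

Real deepfake tools go much further, learning a target’s expressions and lighting frame by frame, which is precisely why their output is so much harder to spot than a crude paste job.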

Last month, the technology received another boost in public awareness after Kim Kardashian’s husband, Kanye West, gave her a deep fake hologram of her late father, which appeared to talk to his children and offer them his perspective on life from beyond the grave.

Posting a video of this hologram, Kim Kardashian commented “For my birthday, Kanye got me the most thoughtful gift of a lifetime.”

Right there, it’s easy to understand the level of worry around the mainstream usage of such technology. Fake news works by telling people what they want to hear, but deep fake shows it to them in moving, talking images: the ultimate suspension of disbelief, and a powerful vehicle for disinformation.

They say the camera never lies. But imagery certainly can, and the tools for its manipulation are getting better and more sophisticated all the time.