Oxford Languages defines a deepfake as “a video of a person in which their face or body has been digitally altered so that they appear to be someone else”, and unsurprisingly, the negative applications of deepfake technology far outweigh the benefits.
However, measures are being put in place to fight malicious uses of the technology. For example, new amendments to the Online Safety Bill will make it illegal to share deepfake pornography without consent. This announcement has been long awaited, as the legislative process has been criticized as too slow to keep up with technological advancements, notably by NotYourPorn campaigner Kate Isaacs.
Deepfake technology has become increasingly accessible and no longer requires specialist skills to use. Simple apps and computer programs enable anyone to use this technology in just a few clicks, meaning it can easily be put to damaging and unethical purposes.
This poses the question: why is this technology being developed, and what, if any, are its beneficial applications? Some argue that deepfake technology can begin to democratize art, gaming, comedy, storytelling, and advertising, as it offers a much cheaper way of significantly raising the standard and scale of creative projects. Another argument for its development is the reconstruction of crime scenes; however, this is not a use case for the general public but one reserved for law enforcement.
One of the few compelling arguments for this technology is to anonymize journalists, activists, or witnesses who need to keep their identities concealed for their own safety. Documentaries have already started incorporating this technology, like ‘Into the Deep’ or ‘Welcome to Chechnya’, which use deepfake technology to interview people without revealing their identities.
The list of negative applications of this technology is very lengthy and includes fake pornography, fake news, fraud, false imprisonment, extortion, slander, and even terrorism.
In fact, the negative consequences of this technology appear to be endless, potentially leading to the erosion and eventual downfall of any form of objective news as deepfakes become completely indistinguishable from reality. Deepfakes will act like petrol poured on the fire of misinformation, which is already spreading rapidly through social media. The proposed legislation is very welcome; however, many more questions need to be answered first. How will people be prosecuted? What happens if someone is unaware that they are sharing deepfake content?
The responsibility for regulating deepfakes should not fall upon law enforcement alone, as that process can be far too slow. It should also be shouldered by Big Tech and social media platforms, which are the principal contributors to their spread. Intel has announced that, as part of its Responsible AI work, it has developed a real-time deepfake detector called ‘FakeCatcher’, which it reports has an accuracy rate of 96%. By identifying deepfakes, this technology could potentially help prosecute their creators and spreaders.
Deepfake technology, like every form of new technology, will develop too quickly for traditional legislation to keep pace. This poses a further question: instead of regulating only the sharing of deepfake videos, should we not be regulating the development and use of the technology itself? After all, the technology has become a little too accessible. If the negatives outweigh the positives to such an extent, then surely any use of the technology should be monitored, perhaps with only a select few authorized to use it.