August 4, 2020

Deepfakes biggest concern for AI in crime: Experts

By Lucy Ingham

Crime scientists have ranked deepfakes as the most serious and worrying application of artificial intelligence (AI) in future crime and terrorism, according to a report published by UCL.

Published in the journal Crime Science, the research explored how AI could be used to assist crime within the next 15 years, ranking different applications based on the harm they could cause, the potential criminal gain, their ease of use and how difficult they would be to defeat.

It found that deepfakes – multimedia content that has been edited using AI to make the subject appear to be doing or saying something they have not actually done – were the biggest concern for crime, particularly given how hard they would be to identify and prevent.

Potential uses of deepfakes include discrediting public figures and impersonating family members to extract funds.

Deepfakes led a list of 20 AI applications in crime compiled by the researchers, which were analysed with the assistance of 31 AI experts. Other applications considered to be of high concern included spear phishing, the use of driverless cars as weapons and AI-authored fake news.

Deepfakes’ use in crime highlights need to prepare

While deepfakes have so far remained more of an abstract concern than one appearing in real crime situations, the report highlights that the technology is likely to become increasingly popular among criminals in the future.

“Deepfake technology is one of the biggest threats to our lives online right now, and UCL’s report shows that deepfakes are no longer limited to dark corners of the internet,” said Joe Bloemendaal, head of strategy at Mitek.

“The technology has already been used to impersonate politicians, business leaders and A-listers – hitting the UK political scene in last year’s general election. Now, we can expect to see deepfakes playing a major role in financial crime, as fraudsters try to use deepfaked identities to open new accounts or access existing ones.”

For policymakers and law enforcement, the report highlights the growing need to prepare for an increased prevalence of AI-assisted crimes in the future.

“We live in an ever-changing world which creates new opportunities – good and bad. As such, it is imperative that we anticipate future crime threats so that policymakers and other stakeholders with the competency to act can do so before new ‘crime harvests’ occur,” said Professor Shane Johnson, director of the Dawes Centre for Future Crime at UCL.

“This report is the first in a series that will identify the future crime threats associated with new and emerging technologies and what we might do about them.”

Read more: Trump’s Buffalo protester conspiracy theory shows why we are not ready for deepfakes