Progress in generative Artificial Intelligence (AI) is supporting compelling use cases across many industries, but the technology can also be used by bad actors to inflict harm.

Cyber criminals can harness synthetic AI to spread misinformation, harm organizations and profit from their attacks. Recent deepfake incidents highlight how this is playing out in real time.

In one of the most recent high-profile incidents, actor Tom Hanks rebuked an advertisement for a dental plan that used an AI-generated image of him without his consent or endorsement. In a separate case, the “CBS Mornings” host Gayle King said a deepfake video using her likeness and voice to hawk a weight loss product was created and distributed without her permission. The synthetic AI video was built from a legitimate post publicizing King’s radio show.

AI deepfakes in cyber attacks

Enterprise executives are also concerned about the risks of malicious synthetic content being used to impersonate executives, damage corporate reputations and enable extortion. In August, cyber attackers used synthetic AI to replicate an employee’s voice and breach software development company Retool. The hackers initiated the attack with SMS text messages that spoofed an IT staffer’s mobile number. Claiming there was a payroll issue, the cyber criminals lured one employee into clicking a link in a phishing message.

The URL took the staffer to an illegitimate portal that included a multi-factor authentication form, and the attackers followed up by using synthetic AI to emulate an actual employee’s voice.

The attackers took over 27 Retool customer accounts, all belonging to cryptocurrency companies, altering user emails and resetting passwords. While Retool uncovered and mitigated the breach quickly, at least one client lost $15 million in cryptocurrency.

VMware’s 2022 Global Incident Response Threat Report found that two-thirds of the cyber security professionals surveyed said deepfakes had been used as part of a breach against their business, a 13% jump from the prior year. Email was the most frequent channel for delivering the harmful content. With the 2024 US presidential election looming, deepfakes are expected to proliferate even further as vehicles for misinformation and disinformation.

US government guidance

In September, the National Security Agency (NSA), the Federal Bureau of Investigation (FBI), and the Cybersecurity and Infrastructure Security Agency (CISA) published guidance on how to identify and respond effectively to malicious synthetic media threats. The document, Contextualizing Deepfake Threats to Organizations, outlines common deepfake patterns and techniques. The agencies counsel organizations to treat the 18-page document’s recommendations on how to prepare for, identify, defend against, and respond to deepfakes as action items.

The document details steps enterprise organizations should take to detect and respond to harmful synthetic AI. It also outlines public/private collaboration aimed at arming organizations against deepfakes, including the DARPA Semantic Forensics (SemaFor) program, which is building advanced semantic capabilities for media forensics and authentication. Contributors include NVIDIA, PAR Government Systems, SRI International and a number of research institutions.