Amid the ongoing ‘MOVEit’ hack by Russian cybergang Clop Group, the CEO of a leading cybersecurity company has said that artificial intelligence (AI) will enhance current cyberattack methods – and create new ones.

Matt Cohen, who became CEO of Massachusetts-based CyberArk earlier this year, has stoked fears that generative AI will drive the rise of “vishing”, a voice-based evolution of phishing attacks.

“It’s so easy to create AI-driven voices”, says Cohen. “Picture your boss leaving you a voicemail telling you to send data to this site or email address. Or picture your mother leaving a voicemail saying she needs credit information on a consumer level. You’re more likely to do that if it’s a personalised voice. These tools are a scary component of AI”.

Public concerns around AI and cybersecurity have intensified after a rise in AI-generated online disinformation and the recent MOVEit hack, which affected government departments including the US Energy Department and the UK telecom regulator, as well as private sector companies such as Sony, Shell, Boots, British Airways and the BBC.

ChatGPT enables auto-creation of malware

Research carried out by Cohen’s team has demonstrated generative AI’s capacity to construct more sophisticated malware at a much faster rate.

CyberArk’s research team used OpenAI’s ubiquitous generative AI platform ChatGPT as a test subject. According to Cohen, “We did an experiment in our labs showing that ChatGPT could create polymorphic malware … which is going to allow for the auto-creation of malicious code”.

Through an encryption key, mutation engine and self-altering code, polymorphic malware continuously shifts its shape and signature. Commonly seen in the form of viruses and bots, this malware is harder for conventional cybersecurity systems to detect.
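The shape-shifting described above can be illustrated with a harmless sketch: the same benign payload is XOR-encrypted under a fresh random key each “generation”, so its byte signature (here, its SHA-256 hash) changes every time, while decryption recovers identical behaviour. This is a toy illustration of the principle, not real malware, and all names in it are made up for the example.

```python
import hashlib
import os

def mutate(payload: bytes) -> tuple[bytes, bytes]:
    """Re-encrypt the payload under a fresh random key (a toy 'mutation engine')."""
    key = os.urandom(len(payload))
    ciphertext = bytes(p ^ k for p, k in zip(payload, key))
    return ciphertext, key

def decrypt(ciphertext: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so the original payload is recovered exactly.
    return bytes(c ^ k for c, k in zip(ciphertext, key))

# A harmless stand-in for the code a real sample would carry.
payload = b"print('hello')"

signatures = set()
for _ in range(3):
    ciphertext, key = mutate(payload)
    signatures.add(hashlib.sha256(ciphertext).hexdigest())
    assert decrypt(ciphertext, key) == payload  # behaviour is unchanged

# Every generation hashes differently, so a signature match on one
# generation tells a scanner nothing about the next.
print(len(signatures))
```

Because each generation carries a different key and ciphertext, a scanner that blocklists one sample's hash misses every other generation, which is why detection has to move from signatures to behaviour.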

Cohen emphasises that “ChatGPT is able to create a credential stealer through cookie harvesting” – but credential theft on ChatGPT itself has also become a pressing issue.

Research from Singapore-headquartered cybersecurity analysts Group-IB identified 101,134 stealer-infected devices with saved ChatGPT credentials. The highest concentration of these devices is in the Asia-Pacific region.

The more immediate threat around AI, however, is posed by the increased effectiveness of current cyberattacks. According to Cohen, “Phishing is common, but there are misspellings and other giveaways that it comes from a malicious actor. AI can make phishing attacks higher quality, more readable and easier to interpret.”

In 2022, phishing was one of the most common types of cyberattack, with spear-phishing via attachments accounting for 25% of hacks, according to the IBM Security X-Force Threat Intelligence Index.

AI for national cybersecurity

Conversely, AI has undeniable potential to safeguard data, not just to power cyberattacks.

AI has become the new arms race, with weapons on both the offensive and defensive sides of cybersecurity, according to Cohen. “We are starting to embed AI into our tools to analyse data, to better interpret when something is a real threat, to elevate authentication based on user behaviour and analytics, and to roll out tools more easily,” he says.
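One defensive pattern Cohen alludes to, elevating authentication based on user behaviour, can be sketched as a toy risk score: compare the hour of the current login against the user's historical pattern and require step-up authentication (an MFA prompt, say) when the deviation is large. The feature choice, threshold and function names here are illustrative assumptions, not CyberArk's actual logic.

```python
from statistics import mean, stdev

def requires_step_up(login_hours: list[int], current_hour: int,
                     z_threshold: float = 2.0) -> bool:
    """Toy behavioural check: flag a login for step-up authentication
    when its hour deviates sharply from the user's historical pattern.
    (Illustrative only -- real systems score many signals, not one.)"""
    mu = mean(login_hours)
    sigma = stdev(login_hours)
    if sigma == 0:
        # No historical variation: any departure from the norm is anomalous.
        return current_hour != mu
    z = abs(current_hour - mu) / sigma
    return z > z_threshold

# A user who normally signs in during office hours...
history = [9, 9, 10, 8, 9, 10, 9, 8]
print(requires_step_up(history, 9))   # typical time, no extra challenge
print(requires_step_up(history, 3))   # a 3am login triggers step-up auth
```

Production systems combine many such signals (device, location, typing cadence) into a single risk score, but the principle is the same: friction is added only when behaviour departs from the learned baseline.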

This is an area that cybersecurity companies – and government defence agencies – are monitoring closely. Cybercrime tools used in conflicts like the Russian-Ukraine war can fall into the hands of organised crime groups.

In 2017, NotPetya, malware created during the Russia-Ukraine conflict, was used to attack companies including Danish shipping giant Maersk.

“We’ve seen it in this version of the Russia-Ukraine conflict, where operational technology hacking tools have gone from government to ransomware groups. Then you have nation-states conducting hacks and breaches simply as part of policy to steal government information and private intellectual property to help spur on their innovation and industries,” says Cohen.

Secrecy is the core difference between attacks on private sector companies and government bodies. While a nation-state does not want the hack victim to know that data has been stolen, cybergangs often make the hack public knowledge in a bid to extort victims.

Sometimes these lines become blurred. The recent MOVEit attack targeted both the private and public sectors, but questions of state involvement were raised given Clop Group's Russian origins and the fact that the attack hit Britain's national airline (British Airways) and national broadcaster (the BBC).

The global consequences of cyberattacks are set to intensify if AI-based attack methods continue to proliferate.