Meta has announced it uncovered a rising number of malware scams related to ChatGPT this past month (April 2023).
The Facebook owner claims the cases are reminiscent of cryptocurrency scams.
In a report on Wednesday (May 3, 2023), Meta said it had found 10 malware families and over 1,000 malicious links since March, all of which were being advertised as tools tied to OpenAI’s smash-hit chatbot, ChatGPT.
Meta chief information security officer Guy Rosen said in a press briefing accompanying the report that “ChatGPT is the new crypto” for bad actors.
In several cases found by Meta, the malware delivered working ChatGPT functionality alongside abusive files, Reuters reported.
“Our research and that of security researchers has shown time and again that malware operators, just like spammers, try to latch onto hot-button issues and popular topics to get people’s attention,” Meta said.
It added: “With an ultimate goal to trick people into clicking on malicious links or downloading malicious software, the latest wave of malware campaigns have taken notice of generative AI tools becoming popular.”
Meta said it was now preparing defenses for a variety of potential abuses and scams surrounding products like ChatGPT.
Cybersecurity experts remain worried about ChatGPT
Some experts have previously spoken out about what generative AI products like ChatGPT could mean for cybersecurity.
For example, as users increasingly experiment with ChatGPT’s ability to write code, some experts are concerned it could be used to write ransomware.
“Recent systems can provide individuals who have little or no coding ability with a tool to create or finetune malware that others have created to make it more effective,” Chris Anley, chief scientist at NCC Group, previously told Verdict.
“For example, large language models can generate many variations on a specific piece of malware very easily, so defenses that depend on recognition of a verbatim piece of code – such as basic endpoint detection and response (EDR) software – can sometimes be bypassed by this generated malware.”
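The weakness Anley describes can be illustrated with a minimal, hypothetical sketch: a detector that matches the exact hash of a known payload (a crude stand-in for verbatim signature matching) fails as soon as the payload is reworded, even though the behaviour is unchanged. All names and strings here are invented for illustration; no real malware or EDR product is involved.

```python
import hashlib

# Hypothetical signature database: exact SHA-256 hashes of known-bad
# payload bytes, a crude stand-in for verbatim signature matching.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"copy_files(); encrypt(); demand_ransom()").hexdigest(),
}

def flagged(payload: bytes) -> bool:
    """Return True only if the payload's hash matches a known signature."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_HASHES

# The original payload is caught by the verbatim signature.
original = b"copy_files(); encrypt(); demand_ransom()"

# A trivially reworded variant -- same behaviour, different bytes --
# of the kind a language model could generate in bulk.
variant = b"duplicate_files(); scramble(); request_payment()"

print(flagged(original))  # True: exact hash match
print(flagged(variant))   # False: the verbatim signature misses it
```

This is why, as Anley notes, defences that depend on recognising a verbatim piece of code can be bypassed by machine-generated variants; more robust detection relies on behavioural analysis rather than exact matching.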
GlobalData is the parent company of Verdict and its sister publication.