Decades of pop culture have given us an ingrained fear that artificial intelligence (AI) will eventually rise against us. But when it comes to AI cyberattacks, the AI should fear humans – not the other way round.

Research published today by the SHERPA consortium, an EU project looking into AI’s impact on ethics and human rights, has found that while there is no evidence of hackers yet building new AI to perform cyberattacks, malicious actors are already attacking existing AI and machine learning systems and manipulating them for their own benefit.

These AI systems are benign programs that power a host of online services, including search engines, social media platforms and recommendation websites, but hackers are abusing them for a range of malicious purposes.

For many of us, the news may come as a surprise, according to Andy Patel, a researcher at SHERPA member F-Secure’s Artificial Intelligence Centre of Excellence.

“Some humans incorrectly equate machine intelligence with human intelligence, and I think that’s why they associate the threat of AI with killer robots and out of control computers,” he said.

“But human attacks against AI actually happen all the time. So ironically, today’s AI systems have more to fear from humans than the other way around.”


Sybil attacks: How humans are attacking AI

One notable type of attack against AI is known as a Sybil attack, which sees a malicious actor creating numerous fake accounts to manipulate the data an AI uses to operate. This enables the attacker to skew results in their favour, such as with search engine rankings or recommendation systems.
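By way of illustration (this sketch is not from the SHERPA report, and all product names and numbers in it are hypothetical), consider a naive recommendation system that ranks products by their average user rating. A handful of fake accounts is enough to flip the ranking:

from statistics import mean

# Genuine 1-5 star ratings left by real users for two hypothetical products.
ratings = {
    "product_a": [5, 4, 5, 4, 5],   # well liked: average 4.6
    "product_b": [2, 3, 2, 3, 2],   # poorly rated: average 2.4
}

def top_product(ratings_by_item):
    # Rank items by mean rating -- the exact signal a Sybil attack poisons.
    return max(ratings_by_item, key=lambda item: mean(ratings_by_item[item]))

print(top_product(ratings))  # -> product_a

# The Sybil attack: register many fake accounts and have each one
# give the attacker's own product a perfect score.
NUM_FAKE_ACCOUNTS = 50
ratings["product_b"] += [5] * NUM_FAKE_ACCOUNTS

print(top_product(ratings))  # -> product_b: the fake votes now dominate

Each fabricated rating looks like an ordinary data point in isolation, which is part of why, as Patel notes below, this behaviour is so hard for service providers to detect.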

“Sybil attacks designed to poison the AI systems people use every day, like recommendation systems, are a common occurrence. There’s even companies selling services to support this behaviour,” explained Patel.

“These types of attacks are already extremely difficult for online service providers to detect and it’s likely that this behaviour is far more widespread than anyone fully understands.”

Fake content: The future of AI cyberattacks

According to the SHERPA study, AI cyberattacks are set to evolve, with one of the most notable emerging threats being the creation of fake content.

AI can already create highly realistic content – be it in written, audio or visual form – and in some cases AI models have remained unpublished due to the potential for abuse.

“At the moment, our ability to create convincing fake content is far more sophisticated and advanced than our ability to detect it. And AI is helping us get better at fabricating audio, video, and images, which will only make disinformation and fake content more sophisticated and harder to detect,” said Patel.

“And there’s many different applications for convincing fake content, so I expect it may end up becoming problematic.”


Read more: Forget about The Terminator — we should be worrying about AI malware first