July 11, 2019

AI cyberattacks: Human attackers are the real threat

By Lucy Ingham

Decades of pop culture have given us an intrinsic fear that eventually artificial intelligence (AI) will rise against us. But when it comes to AI cyberattacks, the AI should fear humans – not the other way round.

Research published today by the SHERPA consortium, an EU project studying AI’s impact on ethics and human rights, has found that while there is no evidence yet of hackers creating new AI to perform cyberattacks, malicious actors are already attacking existing AI and machine learning systems and manipulating them for their own benefit.

Such AI systems are ordinarily harmless programs that power a host of online services, including search engines, social media and recommendation websites, but hackers are abusing them for a range of malicious purposes.

For many of us, the news may come as a surprise, according to Andy Patel, a researcher at SHERPA member F-Secure’s Artificial Intelligence Centre of Excellence.

“Some humans incorrectly equate machine intelligence with human intelligence, and I think that’s why they associate the threat of AI with killer robots and out of control computers,” he said.

“But human attacks against AI actually happen all the time. So ironically, today’s AI systems have more to fear from humans than the other way around.”

Sybil attacks: How humans are attacking AI

One notable type of attack against AI is known as a Sybil attack, in which a malicious actor creates numerous fake accounts to manipulate the data an AI system relies on. This lets the attacker skew results in their favour, for example by nudging search engine rankings or recommendation systems.
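To make the mechanics concrete, here is a minimal Python sketch of the idea. It uses a deliberately naive recommender that ranks items by average rating; the account and item names are hypothetical, and real recommendation systems are far more sophisticated, but the underlying weakness is the same: an attacker who controls enough accounts controls the data.

```python
# A minimal sketch of a Sybil attack on a naive recommender.
# Account and item names are hypothetical; real services use far
# more sophisticated models, but the same poisoning principle applies.
from collections import defaultdict

def rank_by_average(ratings):
    """Rank items by their mean rating across all accounts."""
    scores = defaultdict(list)
    for account, item, score in ratings:
        scores[item].append(score)
    return sorted(scores, key=lambda i: sum(scores[i]) / len(scores[i]), reverse=True)

# Genuine users prefer item_a to item_b.
ratings = [
    ("alice", "item_a", 5), ("bob", "item_a", 4),
    ("alice", "item_b", 3), ("bob", "item_b", 3),
]
print(rank_by_average(ratings))   # ['item_a', 'item_b']

# The attacker registers 50 fake accounts that all rate item_b highly,
# poisoning the data the recommender relies on.
ratings += [(f"sybil_{n}", "item_b", 5) for n in range(50)]
print(rank_by_average(ratings))   # ['item_b', 'item_a'] - the fake accounts win
```

Each fake rating looks legitimate in isolation; it is the coordination that does the damage, which is part of why such attacks are so difficult for service providers to detect.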

“Sybil attacks designed to poison the AI systems people use every day, like recommendation systems, are a common occurrence. There’s even companies selling services to support this behaviour,” explained Patel.

“These types of attacks are already extremely difficult for online service providers to detect and it’s likely that this behaviour is far more widespread than anyone fully understands.”

Fake content: The future of AI cyberattacks

According to the SHERPA study, AI cyberattacks are set to evolve in the future, with one of the most notable upcoming threats being the creation of fake content.

AI can already create highly realistic content – be it in written, audio or visual form – and in some cases AI models have remained unpublished due to the potential for abuse.

“At the moment, our ability to create convincing fake content is far more sophisticated and advanced than our ability to detect it. And AI is helping us get better at fabricating audio, video, and images, which will only make disinformation and fake content more sophisticated and harder to detect,” said Patel.

“And there’s many different applications for convincing, fake content, so I expect it may end up becoming problematic.”

