As artificial intelligence (AI) becomes more widespread and powerful, it is increasingly vulnerable to exploitation by terrorists, cyber criminals and rogue states, researchers have warned.

In a new report, The Malicious Use of Artificial Intelligence, 26 experts from the technology and security sectors outline a variety of new threats ushered in by the dawn of AI.

Although the technology, which has been under development since the 1950s, has a fast-expanding range of useful and life-saving applications, from the analysis of vast amounts of data to prevent crimes and attacks to industrial assembly-line robotics, less attention has been paid to its emerging dangers.

In what is claimed to be one of the first in-depth studies of the rising risks of AI, the report's authors, who include Elon Musk's research group OpenAI, warn of scenarios such as attackers using AI to develop weapons capable of striking victims on a larger and more devastating scale, non-state actors weaponising consumer drones to carry out aerial bombardments, and rogue states hacking into state security apparatus to spread disinformation or leak classified information.

The report’s co-author Seán Ó hÉigeartaigh, executive director of Cambridge University’s Centre for the Study of Existential Risk, said the report is a call to action for governments, institutions and individuals around the world.

“Artificial intelligence is a game changer and this report has imagined what the world could look like in the next five to ten years.


“We live in a world that could become fraught with day-to-day hazards from the misuse of AI and we need to take ownership of the problems – because the risks are real.”

“For many decades hype outstripped fact in terms of AI and machine learning. No longer.”

An understanding of the threats posed by AI must be incorporated into the building of digital infrastructure and the design of AI systems, the report recommends.

The authors also call for policymakers to collaborate closely with researchers in investigating and preventing malicious uses of AI, and for researchers and engineers to be mindful of the dual-use nature of their work and its potentially harmful applications, so that cases of misuse can be identified and the relevant authorities alerted when harmful activity is detected or foreseeable.

One of the biggest threats posed by the growing prevalence of AI is that, as the number of people with training and expertise in the field expands, so does the likelihood of abuse.

One such example is an upsurge in spear phishing attacks that use personalised messages to extort sensitive information or money from individuals by pretending to be a trusted source, like one of the target’s friends, colleagues, or contacts.

AI also holds the potential to increase anonymity in crimes.

For instance, the use of lethal autonomous weapons removes the need for assassins to be present at the scene of the crime, the report says.

At the same time, the increasing availability and declining cost of hardware such as drones make it ever easier for terrorist groups such as Islamic State to configure consumer drones for aerial attacks.

While AI technology is still in its infancy, it is also rife with vulnerabilities that leave it open to hacking by malicious parties.

The report points to the emergence of self-driving cars, which could be hacked to crash, and to failures in autonomous weapons systems that could lead to mass friendly fire or the targeting of civilians.

The researchers wrote:

“This report looks at the practices that just don’t work anymore and suggests broad approaches that might help: for example, how to design software and hardware to make it less hackable, and what type of laws and international regulations might work in tandem with this.”

Duncan Tait, regional chief executive at tech company Fujitsu, said governments must work with businesses to ensure AI works for people, not against them.

“Governments, businesses and industry bodies have to prepare for the potential influence of artificial intelligence. We must understand how AI can be used and evaluate the regulation and controls needed. We must take a measured approach to its impact on society, and concentrate on re-skilling employees and job security.

“However, it’s also critical that we leverage the incredible value of artificial intelligence to support prosperity and wellbeing.”