Failing to ban so-called killer robots poses a “real danger to humanity”, says professor of artificial intelligence (AI) and robotics Noel Sharkey.

The long-time campaigner against killer robots told Verdict in an exclusive video interview about the dangers of lethal autonomous weapons systems (LAWS), which are being developed by militaries to kill without human oversight.

He made the warning ahead of the UN’s 6th meeting on LAWS, taking place in Geneva today and lasting throughout the week.

The most recent meeting was in April and saw Austria and China join the list of countries calling for a prohibition on LAWS.

This meeting will see more than 70 governments from around the world meet under the auspices of the Convention on Certain Conventional Weapons (CCW).

The CCW is a framework instrument whose purpose is to “ban or restrict the use of specific types of weapons that are considered to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately.”

It has previously secured a pre-emptive ban on blinding lasers.

Over the course of the week, the mandate for the next year will be laid out. It is hoped that this will be for six weeks of negotiations, which would then be ratified at the CCW’s November meeting.

Campaign groups, such as the Campaign to Stop Killer Robots – for which Sharkey is an active spokesperson – have been pivotal in forcing governments to open a dialogue on a killer robot ban.

Sharkey says that he would like to see more nation-states, particularly European nations, joining the call for a killer robots ban at the convention’s conclusion.

Killer robots, aka lethal autonomous weapons

Progress towards a killer robot ban has been slow, with diplomats and disarmament experts failing to agree on a universal approach.

There has been much wrangling about how to define a lethal autonomous weapon.

Sharkey says that those arguments mainly come from people who are trying to put the brakes on at the UN.

“It’s quite simple really,” he says. “It’s a weapon that, once it’s been launched, works completely without human supervision.

“So, it selects targets, tracks targets and applies force to them. It can be lethal force or any other kind of force – but violent force.”

“We’re not talking about a terminator with a machine gun”

The phrase ‘killer robots’ is likely to draw the image of Arnold Schwarzenegger’s glowing red-eyed Terminator. The reality is far more grounded, but just as dangerous.

“We’re not talking here about a big terminator-like humanoid thing with a machine gun,” says Sharkey.

“We’re talking here about conventional looking weapons and then they’re just made autonomous.”

These can include autonomous tanks, submarines and drones.

There are already some weapons in use that Sharkey says you could call “autonomous”. The Iron Dome in Israel, which autonomously shoots down missiles, is one example.

He also says that military vessels have autonomous defence systems that they can switch on if being swarmed by an air attack.

Crucially, in both examples there is always a human present that can turn them off again.

The three main reasons why a killer robot ban is needed

According to Sharkey, there are many reasons why a killer robot ban is necessary. Here are his three main arguments.

Morality: “The idea of delegating a decision to kill to a machine is against human dignity. And there’s a lot of moral philosophers talking about it.”

Technology: “My biggest concerns are the ability to discriminate between civilians and the military.

“No matter what people tell you about what’s happening in the labs, we’re nowhere near the ability to discriminate between civilians and other targets.

“Not in a real-life situation. Not in the fog of war.”

Principle of proportionality: “We’ve got the principle of proportionality, one of the cornerstones of the laws of war.

“And the principle of proportionality essentially means that – paraphrasing – you can kill civilians or damage civilian property, providing it’s of direct, concrete military advantage. So it’s a balancing game.

“And only a human can make that decision, it’s not quantifiable.”

Slow progress

In 2009 Sharkey founded the International Committee for Robot Arms Control (ICRAC), which is now part of the steering committee for the Campaign to Stop Killer Robots.

ICRAC’s goal was to get an international discussion going. The movement has grown dramatically, with 26 countries having since joined the call for a ban.

“Now they’re beginning to come forward, we’re hearing what they have to say,” says Sharkey.

“What they’re all mostly saying is ‘we really have to ensure human control is somewhere there’.”

Momentum has been picking up away from the UN, too. In July, thousands of leading AI figures, including Tesla’s Elon Musk, Google DeepMind’s Demis Hassabis, and professor of AI Toby Walsh, signed a pledge never to develop killer robots.

If no meaningful action is taken and there is no global agreement, though, Sharkey believes it will be “awful”.

“We’ve seen so often, in Egypt or Russia, that when there’s a revolution the soldiers refuse to fire.

“Well now you’ve got a bunch of programmers in a room who are forced to programme them that way and you send out the robots.

“So we’ve really got to push really hard, for instance at the UN human rights council, to try and get laws so that these are banned in all circumstances.”

AI as a force for good

While AI is often reported negatively – from killer robots to job stealers – there are plenty of ways that AI is being used for the benefit of humanity.

“As anyone in the Campaign to Stop Killer Robots would tell you, we’re only against the critical functions of target selection and the application of violent force by AI systems,” says Sharkey.

“But autonomous systems themselves, we’ve nothing against them.”

He believes that they can provide value in humanitarian crises, such as flying in supplies and locating people.

They can also be used to help combat global warming, with autonomous submarines measuring melting ice caps from below the glaciers.

In military terms, though, Sharkey would most like to see autonomous technologies used in bomb disposal robots. These could be sent ahead of troops on the ground to find improvised explosive devices (IEDs).

“Because that’s what’s killing most of our young men, and I’d rather billions of dollars were sunk into a project like that than developing one more horrible, offensive weapon that’s bad for mankind.”
