Hundreds of AI-related companies and thousands of AI experts, including Google's DeepMind and Elon Musk, have signed a pledge promising not to develop so-called killer robots.

The agreement, organised by the Future of Life Institute (FLI), was announced at the annual International Joint Conference on Artificial Intelligence.

It asks signatories to “neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons.”

Lethal autonomous weapons systems are weapons that can identify, target and kill a person without a human “in-the-loop” to make the final decision.

Fully autonomous weapons do not yet exist, but with AI technology advancing rapidly, many industry experts have long called for pre-emptive regulations.

The US, UK, China and Russia are all developing military equipment that has some degree of automation. That does not include drones, which are under human control.


However, there are fears that without international agreement, drones could one day become fully autonomous.

Ryan Gariepy, founder and CTO of both Clearpath Robotics and OTTO Motors, warned that "the proliferation of lethal autonomous weapon systems remains a clear and present danger to the citizens of every country in the world".

“No nation will be safe, no matter how powerful.”

A global consensus on killer robots

The European Association for AI, University College London and the Swedish AI Society are among the 160 organisations from 36 countries that signed the agreement.

Prominent AI leaders such as Demis Hassabis, Yoshua Bengio and Toby Walsh are among the 2,400 individuals who put their name to the pledge.

Tesla CEO Elon Musk and Mustafa Suleyman of DeepMind have previously spoken out against killer robots, signing an open letter in August 2017 calling for a ban on autonomous weapons.

In April the UK's House of Lords Select Committee heard evidence from AI experts showing that the definition of autonomous weapons varies significantly between countries and organisations.

And in the same month, the UN met to discuss how to regulate the production of killer robots.

Out of the 123 member countries, just 26 announced support for some type of ban.

President of the FLI Max Tegmark said that he was excited to see AI leaders take action where politicians have been slow to act.

“AI has huge potential to help the world – if we stigmatise and prevent its abuse,” he said.

“AI weapons that autonomously decide to kill people are as disgusting and destabilising as bioweapons, and should be dealt with in the same way.”

Machines should not decide who lives and who dies

Noel Sharkey, co-founder and chair of the International Committee for Robot Arms Control, told Verdict that he welcomed the pledge.

“Seeing so many companies step up to the mark and declare their rejection of autonomous weapons systems (AWS) will help greatly with negotiations at the United Nations,” said the professor of AI and robotics at the University of Sheffield.

“Already, 26 states, including Austria and China, are supporting the Campaign to Stop Killer Robots’ call for a new, legally binding instrument to prohibit the development, production and use of AWS. Others are teetering on the edge of joining and calls from our best tech companies will hopefully push them over the edge.”

Advocates of an international ban on lethal autonomous weapons, such as the Campaign to Stop Killer Robots (a coalition of 75 international NGOs), draw attention to the risks posed by autonomous weapons.

They argue that such weapons will be difficult to control, easier to hack and more likely to end up on the black market.

"We cannot hand over the decision as to who lives and who dies to machines," said Toby Walsh, a key organiser of the pledge and professor of AI at the University of New South Wales.

"They do not have the ethics to do so. I encourage you and your organisations to pledge to ensure that war does not become more terrible in this way."

The UN will hold its next meeting on lethal autonomous weapons systems in August.

Signatories of the pledge hope their commitment will encourage lawmakers to develop an international agreement to prevent autonomous killer robots from becoming a reality:

“We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons. […] We ask that technology companies and organizations, as well as leaders, policymakers, and other individuals, join us in this pledge.”