To address the risks that rapid advances in AI pose to international peace and security, the UN Security Council (UNSC) convened on 18 July 2023 to discuss the governance structures needed to mitigate those risks.

With the US, UK, China, and Russia at the centre of these discussions, they will need to put aside their differences for any governance framework to succeed.

Led by the UK, which holds the current UNSC presidency and is seeking to lead on global AI regulation, this was the first formal session the council has held on the subject. Member states agreed on the risks that AI poses to humanity but did not commit to any concrete action.

The need for international governance of AI

There are hopes that AI can help humanity better achieve peace and security (see the following Verdict article on how AI could assist conflict resolution efforts) and the UNSC session referred to how the UN has already utilised it to monitor the approval of peacekeeping policies, track ceasefires, identify trends of violence, and improve peacekeeping operations globally. However, to take full advantage of these opportunities, international regulations are needed to prevent the misuse of AI technology.

AI transcends national borders; the threats it poses are international, and no nation can address them alone. AI needs to be governed as a global commons, with state and non-state actors adhering to the same standards and trusting that all actors will use the technology safely.

This approach could also improve geopolitical relations between states competing in AI development. The US and China, for example, would be reassured that both are adhering to common limits on the use of AI, lessening the threat that either could use the technology to shift the balance of power in its favour. There is precedent: the US and the Soviet Union cooperated on nuclear arms control in the 1970s and 1980s.


Regulating AI in the same way as nuclear weapons

The UNSC compared the existential threat of AI to humanity to that of nuclear weapons and called for the technology to be regulated in the same way. Sam Altman (CEO of OpenAI) has also previously called for the establishment of an international AI safety organization, similar to the International Atomic Energy Agency (IAEA). The connection between AI and nuclear weapons also goes beyond the urgency for regulation: there are concerns that AI could assist in nuclear warfare.

Despite the existential threats they pose and the fear of their combined use, there are questions over whether AI can be governed in the same way as nuclear weapons, or other weapons of mass destruction, and whether updated, AI-specific regulation is needed instead. The UK Foreign Secretary argued that existing legal structures, such as international human rights law, can address concerns around autonomous weapons systems, but that international co-operation is needed to ensure they remain appropriate.

Challenges to collaboration

While we can hope the universal nature of the threat will force disputing states to come together, geopolitical tensions could still interfere. At the UNSC, China said that countries should be able to establish their own AI regulations and that rules should reflect the views of developing nations, not just the West. This argument comes as the US spearheads its own efforts, with the White House working on an AI Bill of Rights and releasing a proposed Political Declaration on Responsible Military Use of AI and Autonomy.

Even though the US has emphasized the need for global cooperation, its unilateral actions risk alienating states sceptical of a Western narrative being pushed. They also come at a time of China-Western tensions in the technology space, tensions that have already played out in international governance, such as in standard-setting for information technologies at the UN International Telecommunication Union.

Finally, the involvement of leading AI companies will be vital in governance discussions. To be effective, an AI governance framework must also include strong accountability mechanisms to deter misuse.

Ultimately, the UNSC session highlights the severe risk that AI poses to global peace and the urgency of establishing governance mechanisms to prevent its misuse. We can only hope the threat will galvanize states to overcome geopolitical tensions and jointly establish measures adequate to regulate AI for security and peace.