International limits on autonomous weapon systems are necessary to “preserve human control”, according to two international non-profit organisations.
The International Committee of the Red Cross and the Stockholm International Peace Research Institute have published a joint report that examines the challenges posed by autonomous weapon systems and the need for human control.
Autonomous weapon systems, sometimes referred to as killer robots, are programmed to search for and engage targets without human involvement.
Although there are differing opinions on what form regulation should take, there is a consensus that humans should retain some responsibility for autonomous weapon systems. The report examines how this can be applied in practice and offers guidance for policymakers on international rules, standards or best practices.
The United Nations’ Convention on Conventional Weapons has met in an attempt to regulate autonomous weapon systems; however, some governments, including Australia, Israel, Russia, the UK, and the US, opposed a ban.
The Campaign to Stop Killer Robots, a coalition of 160 non-governmental organisations, is campaigning to ban fully autonomous weapons in order to retain “meaningful human control over targeting and attack decisions”.
Concerns raised over autonomous weapon systems
Although autonomy will never completely displace humans from decision making, the report addresses concerns that autonomous weapon systems create “more distance in time, space and understanding between human decisions to use force and the consequences. This distancing, and the unpredictability in consequences it brings, in turn raises concerns about the application of international humanitarian law, ethical acceptability and operational effectiveness.”
Because autonomous weapon systems are triggered by their environment, rather than by a user choosing a specific target, timing or location, the resulting application of force creates risks for civilians “as well as fundamental ethical concerns about the role of humans in life and death decisions”.
The report proposes three possible measures to address these challenges concerning the role of humans in life-and-death decisions, and argues that a combination of all three is needed to meet “legal, ethical and operational requirements”. Firstly, autonomous weapon systems should be subject to controls limiting the type of target and task they can be used for, as well as building in deactivation and fail-safe mechanisms.
Secondly, measures are needed to control or structure the environment in which weapons are used, for example by restricting their use to environments where civilians are not present. Lastly, controls that allow the user to supervise operations and intervene where necessary should be in place.
These controls are intended to “help reduce or at least compensate for the unpredictability inherent in the use of autonomous weapon systems and to mitigate the risks, in particular for civilians”.
The report concludes with five recommendations: states should focus their work on how human control applies in practice; measures for human control should inform internationally agreed limits on autonomous weapon systems; states should clarify where new rules, standards and best practice guidance may be needed; any new rules must build on existing limits under international humanitarian law; and human control should be considered in the study, research and development, and acquisition of new weapon systems.