
What Level of Human Control Over Autonomous Weapon Systems is Required by International Law?

Published on May 17, 2018

Introduction

Autonomous weapon systems (AWS) raise profound legal, ethical and moral concerns. Scholars have asked, for example, whether AWS can comply with international humanitarian law (IHL); whether their use will lower the threshold for the use of force and undermine jus ad bellum rules; and whether their deployment will create an accountability gap in violation of victims’ rights to a remedy. While there is no agreed definition of AWS, the United Kingdom House of Lords’ recent report offers definitions that generally describe AWS as robots that, once activated, are able to make targeting decisions without further human intervention.

In the recent United Nations Group of Governmental Experts (GGE) meeting (9-13 April) on Lethal Autonomous Weapon Systems, States reiterated the need to maintain human control over AWS. Notwithstanding the general consensus on maintaining human control over AWS, there is no agreement on the nature of that control or how it should be defined.

Issues surrounding the concept of human control

The 2018 GGE meeting brought to the fore a number of questions about how human control should be defined. States submitted a range of ideas and suggestions. Organisations such as the International Committee of the Red Cross noted both legal and ethical reasons why human control must be maintained. Likewise, the International Panel on the Regulation of Autonomous Weapons discussed military and philosophical perspectives on the notion of human control.

Now that various disciplines – military, law, ethics, religion and philosophy, among others – have standards relevant to the notion of human control over AWS, the paramount question is which standard(s) should determine an acceptable level of human control, and why. While States and scholars may cite innovative ideas and standards upon which to define the concept of human control, it is essential to distinguish between standards that are merely relevant and those that are obligatory or legally binding upon States. The latter ought to serve as the yardstick.


A “Compliance-Based” Approach to Autonomous Weapon Systems

Published on December 1, 2017

A Group of Governmental Experts (GGE) on the topic of Lethal Autonomous Weapon Systems (LAWS) concluded its first meeting in Geneva on 17 November 2017. The meeting was held under the auspices of the Convention on Certain Conventional Weapons (CCW) and built upon three informal meetings of experts held between 2014 and 2016 (for reports of those meetings, see here). In December 2016, the Fifth Review Conference of the High Contracting Parties of the CCW had tasked the GGE “to explore and agree on possible recommendations on options related to emerging technologies in the area of LAWS” (see Decision 1 here and the agreed recommendations contained in this report).

At the heart of the debate is the question of how States should respond to the emergence of such weapons. While some highlight the legal, ethical or moral concerns of delegating life-and-death decisions to machines and advocate a preventive prohibition of autonomous weapon systems, others point to potential benefits for the way wars will be fought in the future and deem any policy options, including regulation, to be premature.

As often in such multilateral discussions, it is hard to make progress and to get all States to agree on a common approach. The topic of autonomous weapon systems is no different. Indeed, perhaps it is particularly difficult because we do not yet fully understand what robotics and artificial intelligence truly harbor for the future of warfare, and for humanity in general. In an initial step, the GGE in its first session affirmed that international humanitarian law (IHL) applies to all weapons, including the potential development and use of autonomous weapon systems, and that responsibility for their deployment remains with States (see report here). This is a welcome step but obviously cannot be understood to exhaust the topic.

In an effort to generate momentum and identify common denominators, Switzerland presented a working paper at the beginning of the GGE, arguing that ensuring compliance with international law, notably IHL, could and should be common ground among States and could form a constructive basis for further work. Accordingly, it should, at least as one element, be central to the GGE’s discussions of autonomous weapon systems and should figure prominently in the report of the GGE as well as in the way forward. In the following, we recapitulate the requirements for compliance with IHL and, on that basis, identify elements of a “compliance-based” approach aimed at advancing the debate within the CCW in an inclusive and constructive manner.


“Are you smarter than Professor Hawking?” Higher Forces and Gut-Feelings in the Debate on Lethal Autonomous Weapons Systems

Published on April 27, 2016

“Professor Hawking says that artificial intelligence without control may cause the extinction of the human race”, noted a Chinese delegate following a session on ‘mapping autonomy’ at the Convention on Certain Conventional Weapons (CCW) meeting of experts, which took place from 11-15 April 2016 at the United Nations in Geneva. The CCW convened its third meeting of experts to continue discussions on questions related to emerging technologies in the area of lethal autonomous weapons systems (LAWS), and I had the privilege of participating.

LAWS are most often described as weapons that are capable of selecting and attacking targets without human intervention; one of the key questions addressed at the meeting was what exactly this means. According to most of the commentators present at the meeting, LAWS do not yet exist; however, the possibility of using autonomous weapons in targeting decisions raises multidisciplinary questions touching on moral and ethical, legal, policy, security and technical issues. The meeting addressed all of these, starting with the technical session aimed at mapping autonomy.

Without expressing a position on a ban, the six technical experts on the panel presented a nuanced view of the state of current autonomous weapons technology and the road that lies ahead. The Chinese delegation was one of the first to respond to the panel, and the delegate seemed startled; some of what was said seemed to contradict the conclusions reached by Professor Hawking et al. China referred to the Open Letter issued by the Future of Life Institute (FLI) and signed by thousands of artificial intelligence (AI) and robotics researchers, as well as by a number of other endorsers. The Open Letter calls for a ban on offensive autonomous weapons beyond meaningful human control, claiming that such weapons would be feasible within years, not decades. It attracted a good deal of attention, largely because it is signed by a number of well-regarded figures, including Tesla CEO Elon Musk, Apple co-founder Steve Wozniak and Professor Stephen Hawking.

The expert panelists offered divergent views on the claims and predictions made in the Open Letter. In response, China asked the panelists: “Do you think you are smarter than Professor Hawking?” A number of delegates, academics, NGO members and panelists seemed quite amused by the provocative question. Who dares to disagree with Hawking? Fortunately, some of the experts did. “Isn’t Hawking a physicist, and not an AI expert?”, asked one panelist. Another expert confidently said, “Yes, I am smarter than Stephen Hawking.” Why? “Because, like Plato, I know that I do not know.” The exchange was amusing, but also a little troubling. What is the effect of well-regarded figures on the discourse about autonomous weapon systems?

Filed under: Arms Control, EJIL Analysis