
“Are you smarter than Professor Hawking?” Higher Forces and Gut-Feelings in the Debate on Lethal Autonomous Weapons Systems

Published on April 27, 2016

“Professor Hawking says that artificial intelligence without control may cause the extinction of the human race”, noted a Chinese delegate following a session on ‘mapping autonomy’ at the Convention on Certain Conventional Weapons (CCW) meeting of experts, which took place from 11-15 April 2016 at the United Nations in Geneva. The CCW convened its third meeting of experts to continue discussions on questions related to emerging technologies in the area of lethal autonomous weapons systems (LAWS), and I had the privilege of participating.

LAWS are most often described as weapons that are capable of selecting and attacking targets without human intervention; one of the key questions addressed at the meeting was what exactly this means. According to most of the commentators present at the meeting, LAWS do not yet exist. However, the possibility of using autonomous weapons in targeting decisions raises multidisciplinary questions that touch upon moral and ethical, legal, policy, security and technical issues. The meeting addressed all of these, starting with the technical session aimed at mapping autonomy.

Without expressing their position on a ban, the six technical experts on the panel presented a nuanced view of the state of current autonomous weapons technology and the road that lies ahead. The Chinese delegation was one of the first to respond to the panel, and the delegate seemed startled; some of what was said seemed to contradict the conclusions reached by Professor Hawking et al. The delegation had read the Open Letter issued by the Future of Life Institute (FLI) and signed by thousands of artificial intelligence (AI) and robotics researchers, as well as by a number of other endorsers, including the well-known Professor Stephen Hawking. The Open Letter calls for a ban on offensive autonomous weapons beyond meaningful human control, claiming that these weapons would be feasible within years, not decades. The Open Letter attracted a good deal of attention, largely because it is signed by a number of well-regarded figures, including Tesla CEO Elon Musk, Apple co-founder Steve Wozniak and, as previously mentioned, Professor Stephen Hawking.

The expert panelists offered some divergent views on the claims and predictions made in the Open Letter. In response to these, China asked the panelists: “do you think you are smarter than Professor Hawking?” A number of delegates, academics, NGO members and panelists seemed quite amused by the provocative question posed by China. Who dares to disagree with Hawking? Fortunately, some of the experts did. “Isn’t Hawking a physicist, and not an AI expert?”, asked one panelist. Another expert confidently said, “Yes, I am smarter than Stephen Hawking.” Why? “Because, like Plato, I know that I do not know.” The exchange is amusing, but also a little troubling. What is the effect of well-regarded figures on the discourse about autonomous weapon systems?

Filed under: Arms Control, EJIL Analysis
 

Lethal Automated Robotic Systems and Automation Bias

Published on June 11, 2015

Lethal Autonomous Robotic Systems (LARS) are machines that are capable of initiating a lethal attack on individuals or other targets. Based on its programming, a LARS can determine whether an individual is a valid target and whether engaging that target is a proportional action, and act upon its own assessment. Such sophisticated systems have long been in the realm of science fiction, but today they are not only a possibility but a reality. For example, Samsung has developed the SGR-A1, which is currently deployed in the Korean demilitarised zone, although, for now, that device leaves the final decision to engage to a human.
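To make the decision procedure just described concrete, it can be caricatured in a few lines of Python. Everything here (the names, the confidence threshold, the idea of scoring collateral harm against military advantage) is invented for illustration and describes no real system:

```python
from dataclasses import dataclass

@dataclass
class Contact:
    """A detected entity, as classified by the system's own sensors."""
    combatant_score: float      # classifier confidence that this is a combatant
    expected_collateral: float  # estimated incidental civilian harm of engaging
    military_advantage: float   # estimated military value of engaging

def decide_engagement(contact: Contact, threshold: float = 0.95) -> bool:
    """Caricature of autonomous targeting: the machine itself judges
    distinction (is this a valid target?) and proportionality (is the
    expected collateral harm excessive relative to the advantage?)."""
    is_valid_target = contact.combatant_score > threshold
    is_proportional = contact.expected_collateral <= contact.military_advantage
    return is_valid_target and is_proportional  # acts on its own assessment
```

Even this toy version makes the worry visible: distinction and proportionality, which international humanitarian law treats as context-sensitive judgments, are reduced to numeric estimates produced by the machine itself.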

The debate on the use of such systems is heating up (see for instance the various reports by Human Rights Watch, the Oxford Martin Policy Paper, or discussions on the topic in relation to the CCW). These systems have been criticised from moral, political and legal perspectives. Leaving aside the moral and political objections, the development of a LARS is extremely problematic from the perspective of international humanitarian law. In particular, questions have been raised about the ability of such systems to distinguish between civilians and combatants, as well as to compute the proportionality of an attack. Furthermore, there are complex questions of responsibility that have not yet been fully answered.

In response to these problems, the US has issued a directive that all robotic systems of this type will in fact not be operated in a fully autonomous mode, but will always function with a ‘human in the loop’. This statement is apparently intended to defuse at least the legal criticisms, and possibly the other criticisms, relating to the deployment of LARS.

Human in the loop

It could be argued, however, that the deployment of a LARS with a human in the loop is just as problematic as a fully automated version. Even though the decision to engage a target will always be overseen by a human being, I will argue that it is not a given that this oversight will influence the functioning of the system sufficiently to safeguard against the problems associated with fully automated operation.

Firstly, the term ‘human in the loop’ is not very specific. There are a variety of ways in which a system can operate with a human in the loop, as the sketch below illustrates.
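Purely as an illustration of that ambiguity, the following Python sketch (all names and the two-second veto window are invented, not drawn from any real system or directive) contrasts three arrangements that could each plausibly be described as keeping a human ‘in the loop’:

```python
import queue
from enum import Enum, auto

class ControlMode(Enum):
    IN_THE_LOOP = auto()      # a human must actively approve each engagement
    ON_THE_LOOP = auto()      # the system engages unless a human vetoes in time
    OUT_OF_THE_LOOP = auto()  # the system engages on its own assessment

def authorise(mode: ControlMode,
              machine_recommends_engage: bool,
              operator_input: "queue.Queue[bool]",
              veto_window_s: float = 2.0) -> bool:
    """Hypothetical supervisory logic: three very different arrangements,
    more than one of which might be marketed as 'human in the loop'."""
    if not machine_recommends_engage:
        return False
    if mode is ControlMode.IN_THE_LOOP:
        # The engagement waits indefinitely for an explicit human decision.
        return operator_input.get()
    if mode is ControlMode.ON_THE_LOOP:
        # The engagement proceeds unless the operator vetoes within the window.
        try:
            return operator_input.get(timeout=veto_window_s)
        except queue.Empty:
            return True  # operator silence counts as consent
    return True  # OUT_OF_THE_LOOP: no human involvement at all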