
Lethal Autonomous Robotic Systems and Automation Bias

Published on June 11, 2015        Author: Heko Scheltema

Lethal Autonomous Robotic Systems (LARS) are machines capable of initiating a lethal attack on individuals or other targets. Based on its programming, a LARS can determine whether an individual is a valid target and whether engaging that target would be a proportionate action, and it can act on its own assessment. Such sophisticated systems long belonged to the realm of science fiction, but today they are not merely a possibility but a reality. For example, Samsung has developed the SGR-A1, which is currently deployed in the Korean demilitarised zone, although for now that device leaves the final decision to engage to a human.

The debate on the use of such systems is heating up (see, for instance, the various reports by Human Rights Watch, the Oxford Martin Policy Paper, or discussions on the topic in relation to the CCW). These systems have been criticised from moral, political and legal perspectives. Leaving aside the moral and political objections, the development of a LARS is extremely problematic from the perspective of international humanitarian law. In particular, questions have been raised about the ability of such systems to distinguish between civilians and combatants and to compute the proportionality of an attack. Furthermore, there are complex questions of responsibility that have not yet been fully answered.

In response to these problems, the US has issued a directive stating that robotic systems of this type will not be operated in a fully autonomous mode, but will always function with a ‘human in the loop’. This statement is apparently intended to defuse at least the legal criticisms of the deployment of LARS, and possibly the moral and political ones as well.

Human in the loop

It could be argued, however, that deploying a LARS with a human in the loop is just as problematic as deploying a fully autonomous version. Although the decision to engage a target will always be overseen by a human being, I will argue that it is not a given that this oversight will influence the functioning of the system sufficiently to safeguard against the problems associated with fully autonomous operation.

Firstly, the term ‘human in the loop’ is not very specific: there are a variety of ways in which a system can operate with a human in the loop.