Remote Attack and the Law

Written by

Dr William Boothby

Dr Bill Boothby, the former Deputy Director of Legal Services for the Royal Air Force, published through OUP his doctoral thesis on Weapons and the Law of Armed Conflict in 2009; he has now published his second book, again through OUP, on The Law of Targeting.

This post looks at three modern forms of distance attack – by autonomous unmanned platforms, by cyber means and in outer space – and asks whether they challenge, or are challenged by, contemporary law. It concludes that in any such challenge the law is likely to prevail, and suggests the extent to which, and the conditions on which, such novel and increasingly controversial technologies may indeed prove to be legally compliant.

On 29 November 2011, The Guardian, discussing US drone strikes in Pakistan, asserted that the US military makes deadly mistakes all the time. Al Jazeera has reported that during the period May 2011 to March 2012 about 500 people, many of them civilians, were killed in US drone strikes aimed at pushing Al Qaeda out of the Arabian Peninsula. And yet CNN recently reported New America Foundation research showing a markedly reduced civilian proportion of casualties in US drone strikes in Pakistan (from about 50 percent in 2008 to close to zero), which the researchers attribute inter alia to a presidential directive to tighten up target selection, the use of smaller munitions, longer loiter times over targets and congressional oversight.

So is new technology challenging the law, or is it the other way round?

There is nothing new about the idea of fighting at a distance. The heroic Homeric tradition of the phalanx and of the hoplite fighting at close quarters was already being called into question in ancient Greek times by the use of the bow, artillery and catapults, and remote attack has continued to develop in succeeding centuries and millennia, spurred on by the evident military advantage such methods yield. But the Homeric objections have persisted, for example during the Kosovo conflict in the form of objections to NATO's 15,000-foot bombardment policy.

And yet since World War II the capacity to deliver ordnance from the air with precision has developed apace – the statistics are startling in terms of the reduced number of sorties required to deliver a bomb from high altitude to within a given distance of a hypothetical target. So, and forgive a degree of over-simplification, the lay assumption that the closer the pilot is to the target the better has been trumped by technological innovation.

Is there anything qualitatively different about future developments in the realm of remote attack?

Let’s start with unmanned platform attack. If the US targeting of Qaed Senyan al-Harthi in Yemen in 2002 (see A Dworkin, The Yemen Strike, 14 November 2002) was the start of the modern era in such operations, weaponisation of unmanned platforms is now mainstream. The sensor technology mounted on the platforms enables compliance with the precautions obligations contained in article 57 of 1977 Additional Protocol I (AP I). The ‘man in the loop’, though frequently operating at a very considerable distance from the theatre of operations, let alone from the target, is nevertheless fed timely information similar in nature and quality to that fed to the pilot of a manned aircraft approaching the target. For the states that employ such technology, any ethical reservations are clearly not ‘show stoppers’. But, importantly, currently understood targeting law is the yardstick against which such operations are undertaken and subsequently evaluated.

The march of technological development leads inexorably towards autonomy in attack. That would involve computerized weapons control systems applying algorithm-based technology to identify a target as a military objective, with the machine then making the decision to attack it. Constraining the area and time of sensor search and applying rigorous criteria that must be satisfied before the machine will ‘recognize’ the military objective makes it feasible for the article 57 obligation to verify the target as a military objective to be met for certain types of target. More problematic is the article 57 obligation to make evaluative judgments as to the proportionality of the attack and as to whether the chosen target minimizes collateral dangers while obtaining a given military advantage. In some contexts, for example attack in unpopulated deserts or rarely used areas of ocean, the collateral risks can reliably be pre-determined as low. The military advantage in destroying the sort of military items that the machine will recognize will be known in advance, so the proportionality of such an attack can be considered at the sortie planning stage.
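To make this concrete, here is a minimal sketch of how such planner-imposed constraints might be expressed in software. It is purely illustrative: the class names, fields and thresholds are my own assumptions, not a description of any actual weapon control system.

```python
# Illustrative sketch only: an engagement 'gate' for an autonomous platform
# that refuses to attack unless every constraint set by human planners at
# the sortie planning stage is satisfied.
from dataclasses import dataclass


@dataclass
class Contact:
    signature_match: float  # sensor confidence (0-1) that this is the briefed objective
    lat: float
    lon: float
    detected_at: float      # seconds into the mission


@dataclass
class MissionConstraints:
    area: tuple             # (lat_min, lat_max, lon_min, lon_max) search box
    window: tuple           # (t_start, t_end) permitted search period
    min_confidence: float   # recognition threshold treated as 'verification'
    proportionality_cleared: bool  # human proportionality judgment made pre-launch


def may_engage(c: Contact, m: MissionConstraints) -> bool:
    """Default to no attack: every planner-set condition must hold."""
    lat_min, lat_max, lon_min, lon_max = m.area
    in_area = lat_min <= c.lat <= lat_max and lon_min <= c.lon <= lon_max
    in_window = m.window[0] <= c.detected_at <= m.window[1]
    verified = c.signature_match >= m.min_confidence
    return in_area and in_window and verified and m.proportionality_cleared
```

The point of such a design is that the evaluative judgments (the recognition criteria, the search box and time window, the proportionality assessment) are all made by humans before launch; the machine merely enforces them.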

But urban attack or anti-personnel attack using such technology remains practically and legally problematic. Only if technology enables a machine to distinguish reliably between, on the one hand, an able-bodied member of the armed forces and, on the other, a civilian, a wounded soldier who cannot defend himself, or a surrendering soldier, do such attack methods become legally feasible.

Somewhat obviously, if the only autonomous systems actually fielded are those capable of satisfying these principles and rules, and if technologies that fail in that regard do not reach the battlefield, law will have circumscribed technology. If, however, procurement decisions are made based on arguments that the new technology somehow renders the precautions in article 57 unnecessary, we can take it that technology has won that battle. But the process of legal review under article 36 of AP I addresses the lawfulness of new weapons technologies by reference to current legal norms.

The second area of potential conflict between technology and the law that I want to consider is cyber warfare. Clearly, the forthcoming Tallinn Manual on the Law of Cyber Warfare will clarify many of the questions that have been raised in the literature in recent years. From the remote attack perspective, cyber of course takes the process a step further. Not only may the cyber warrior who initiates a cyber attack quite possibly be doing so from a remote location; he will likely also be employing deception techniques which make the attack appear to have come from some source other than the actual attacker. Does this additional aspect of remoteness make a legal difference?

There is no international law requirement that an attacker advertise in advance who or where he or she is, or what type, duration, weight or nature of attack, if any, is planned. Deception is an acknowledged technique in warfare that remains lawful provided it does not amount to perfidy or to the misuse of other indicia specifically prohibited by the law of armed conflict (LOAC).

To comply with existing law, cyber planners and decision makers must have enough information about the cyber linkages between the sending computer and the targeted computer to be able to assess whether the attack will engage the intended target.  They will also need to know enough about the characteristics of the particular cyber attack capability they are using to be satisfied that it is not indiscriminate by nature.  Thirdly, they will need to know enough about the targeted computer system, its dependencies and associated networks, and any other networks expected to be affected by the attack to be able to assess the proportionality of the planned attack.
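Purely by way of illustration, those three knowledge requirements could be imagined as a pre-launch checklist. The field names below are hypothetical, and reducing proportionality to a numeric comparison is a gross simplification of what is in reality an evaluative human judgment.

```python
# Illustrative sketch: a planner-facing pre-attack gate reflecting the three
# knowledge requirements discussed above. Not drawn from any real planning
# tool or doctrine.
from dataclasses import dataclass


@dataclass
class CyberAttackPlan:
    route_mapped: bool               # linkages from sending to targeted computer understood
    weapon_characterised: bool       # capability known not to be indiscriminate by nature
    effects_mapped: bool             # target system, dependencies and affected networks assessed
    expected_incidental_harm: float  # planner's estimate of harm to civilian systems
    anticipated_advantage: float     # planner's estimate of the military advantage


def precautions_satisfied(plan: CyberAttackPlan) -> bool:
    """All three knowledge requirements must be met before the
    proportionality comparison can even be attempted."""
    if not (plan.route_mapped and plan.weapon_characterised and plan.effects_mapped):
        return False
    # Stand-in for the human evaluative judgment: incidental harm must not
    # be excessive in relation to the anticipated military advantage.
    return plan.expected_incidental_harm <= plan.anticipated_advantage
```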

Mapping the targeted system, its dependencies and the intervening linkages in this way is likely to be a challenging task.  Undertaking that mapping in a covert way is likely to be even more difficult.  To maintain operational security by failing to undertake any assessment of the proportionality of the planned attack is likely to render the ensuing attack unlawful, particularly if it in fact breaches the proportionality rule.  So while technology may render implementation of the required precautions technically challenging, the law will require that the rules be obeyed.

Perhaps the challenge for commanders in the cyber warfare field will be to understand what their cyber staffs are up to and what the expected consequences are. But the usual remoteness of the cyber warrior from the scene of a cyber attack and the likely use of deception techniques do not per se challenge the application of the distinction, discrimination and precautions rules any more than in any other kind of attack.

So what about outer space?

Remoteness is an inherent aspect of the conduct of military operations in outer space, and a number of legal issues arise. On 11 January 2007 the Chinese targeted and destroyed their Fengyun-1C weather satellite by using an SC-19 interceptor to collide with it head on and at high speed at an altitude of 860 km (see D A Koplow, ASAT-isfaction: Customary International Law and the Regulation of Anti-Satellite Weapons, 30 Mich J Int’l L 1187 (2008-9) at 1211). This test reportedly created 2,600 pieces of trackable debris and perhaps 150,000 smaller but hazardous fragments, all in a swarm ranging from 200 to 2,300 km in altitude. Reportedly, an item of debris 1 cm in size travelling at orbital velocities of up to 30,000 km per hour can generate an impact equivalent to a one-ton safe falling from a five-storey building. So, from this we can deduce that kinetic attack on an enemy satellite in high orbit is liable to create a cloud of debris that will pose dangers for other space users for as long as the debris remains, which is liable to be a very long time. This immediately raises concerns under the principles of distinction and discrimination.
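The reported safe-from-a-building comparison is roughly consistent with a back-of-envelope kinetic energy check. The figures below are my own illustrative assumptions (a roughly 3 g aluminium fragment and a 15 m drop), not numbers taken from the Koplow piece:

```latex
% 1 cm aluminium fragment (~3 g) at ~8,300 m/s (about 30,000 km/h):
E_{\text{debris}} = \tfrac{1}{2} m v^{2}
  \approx \tfrac{1}{2}\,(0.003\ \text{kg})(8{,}300\ \text{m/s})^{2}
  \approx 1.0 \times 10^{5}\ \text{J}

% One-ton safe falling from a five-storey building (~15 m):
E_{\text{safe}} = M g h
  \approx (1{,}000\ \text{kg})(9.8\ \text{m/s}^{2})(15\ \text{m})
  \approx 1.5 \times 10^{5}\ \text{J}
```

Both figures are of the order of 10⁵ joules, so the analogy holds to within a small factor.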

If the international community finds such effective denial of the use of parts of outer space increasingly unacceptable, military attacks in outer space will be required to generate less debris. Laser technology or cyber methods may provide some of the answer here. If state practice develops in this way, states will be adjusting their behavior in order to comply with the established legal principles of distinction and discrimination.

So the remoteness element in these methods of attack does not of itself render the attack unlawful.  It is the effect that this remoteness has on the ability of planners and decision makers to undertake required precautions and to obtain information to support a sensible evaluation of the lawfulness of the planned attack that lies at the root of the problem.

Let me now say a few words about liability for error in remote attack.

When something goes wrong, responsibility can arise at the political, media, international law and domestic law levels and may take the form of individual, including command, or national responsibility.

Early media reports may be based on flawed information, speculation and assumption, and may fix a public perception of responsibility that may later be hard to dispel with more reliable data. That implies the need for early disclosure by governments of factual data, including imagery, to inform the public perception in a timely way.

Judgments after the event must always be based on the information that was available when the decision was made – for a UAV controller, for example, the vital issue will be whether the decision to attack was reasonable in the circumstances as they were presented to him. So if the relevant equipment was operating properly, the focus for potential blame will tend to shift to the operator of the platform, whereas if the attack decision was attributable to faulty data feeds, for example due to enemy cyber action, the system failure is likely to exonerate the controller from responsibility for the attack.

There is no war crime of failing to take precautions in attack, so war crimes issues only arise when there has, for example, been direct targeting of civilians or civilian objects undertaken with intent and knowledge, or a decision, also with intent and knowledge, to undertake an attack that may be expected to cause clearly excessive collateral damage in relation to the overall anticipated military advantage.

Simple error is not the basis of a war crime and there is no liability at law for armed forces action that lawfully causes death, injury, damage or destruction to an opposing party to the conflict.  The fact that civilians are killed or injured in a military attack, whether employing UAV, cyber or space-based platforms, does not render the attack unlawful.  The law recognizes that civilian deaths and injuries are foreseeable as a result of military operations, and the obligation is on the parties to the conflict to do what they can to minimize them.  That rule also applies in relation to remote attack operations.

Liability to compensate arises under article 3 of Hague Convention IV, 1907 and article 91 of AP I. If a remote attack operation is undertaken without taking the precautions required by article 57, and if civilians or civilian objects are as a result the object of the attack or the attack breaches the proportionality rule, legal liability to compensate might arise if the case so demands.

Military personnel whose negligence causes attacks to target civilians, for example, will be subject to their military discipline code, in the case of the UK the Armed Forces Act 2006. If a civilian’s negligence has a similar effect, a charge of gross negligence manslaughter would require fairly extreme circumstances, and disciplinary action would likely be governed by the individual’s employment contract. But civilians lack combatant immunity, so direct involvement by them in remote attacks will render them liable to prosecution, whether or not they comply with the law of armed conflict. If the enemy takes them prisoner, they will not have prisoner of war status and will likely be tried for their violent acts. So in remote attack the challenge for states is to ensure that, during armed conflicts, the trigger pullers are in the armed forces.

Imagine that a platform autonomously decides to make civilians or civilian objects the object of attack. Would that prima facie constitute a breach of articles 51(2) and 52(1) of AP I and thus give rise to liability under article 91? In deciding whether the case demands that compensation be paid, we need to take into account the design of the controlling software, the data fed into the mission control equipment, the settings applied to the algorithm-based technology, and any other information which would demonstrate what the persons planning and commanding the mission intended the machine should attack. On this view, the ‘object’ of an autonomous attack consists of the object(s) and/or person(s) that the target recognition equipment was programmed to engage. Perhaps, therefore, liability to compensate will only be established under article 91 if it can be shown that those planners and commanders had the prohibited persons or objects as their object of attack.

But what happens if the designers of the software, or of the guidance systems, were responsible for the erroneous attack?

If an individual were deliberately to configure the autonomous target acquisition software with the intention that the platform would target civilians and/or civilian objects, that would amount to a war crime, just as using non-autonomous capabilities with similar intent and outcome would. If the software developer is negligent, my guess is that the focus will shift to whether the platform was adequately tested by the procuring state before being fielded.

My conclusion is that the established principles and rules as to distinction, discrimination, precautions and liability in attack are just as applicable to modern and foreseeable remote attack methods as they are to more traditional targeting techniques.  The challenge probably lies in obtaining the information required to support a reliable targeting decision and, after the event, in identifying where exactly any failure occurred and why.
