What role artificial intelligence could play in evaluating the compliance of military operations with international humanitarian law: A case study of the conduct of hostilities in Ukraine


On the 23rd of January 2024, the Russian Federation launched a “massive” airstrike on the cities of Kyiv and Kharkiv, firing 41 missiles, 21 of which were shot down by air defences. These strikes exemplify a pattern. On the 1st of January 2024, the Russian Federation launched one of the biggest air attacks on a number of Ukrainian cities, including Kyiv. Prior to this, on the 28th of May 2023, BBC News reported that the Russian Federation had launched 54 drones at the city of Kyiv overnight, 52 of which were shot down by air defence systems. The Kyiv City Military Authority reported that the air alert for this attack lasted for more than five hours. As a result of this military operation, a number of buildings caught fire in a historical neighbourhood where a famous monastery is located. Individuals living in Kyiv, one of the most heavily bombed Ukrainian cities in 2023, reported that the regular aerial bombardments took a considerable psychological toll. The president of the UK Air and Space Power Association, Greg Bagwell, believes that the purpose of the Russian strikes on Kyiv is more about symbolism than about achieving military gains. In order to evaluate whether a particular aerial bombardment of Kyiv, or a series of such bombardments, breached the prohibition in Art 51(2) of the Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I) 1977 (AP I 1977), one needs to know what military objectives were in the area. Article 51(2) of AP I 1977 prohibits “acts or threats of violence the primary purpose of which is to spread terror among the civilian population.”

It is put forward that artificial intelligence (AI) can be a tool which members of the international legal community and civil society can use to supplement their own analysis of whether a military commander launched an attack, or a series of attacks, with the aim of spreading terror among the civilian population in a situation such as the air bombardment of Kyiv. AI can be used to aid the assessment of the degree to which the conduct of a particular military operation deviates from how a “reasonable” commander (para 50) would have planned and carried out the operation in question. AI can aid human decision-makers in this assessment, but it is important that it does not substitute for the traditional approach to evaluation. The international legal community and civil society can then use this information to put pressure on states to comply with international humanitarian law (IHL).

Context for exploring the use of AI as a tool for compliance

The efforts to harness AI as a tool for promoting compliance with IHL and accountability are ongoing. In 2017, researchers at Swansea University and a number of human rights groups began to collect potential evidence relating to the commission of war crimes from social media posts. The rationale was that an investigator could combine videos and photos posted on social media with other evidence in order to reach a conclusion about whether someone had committed a war crime. If, for instance, there was a video showing a second strike killing rescuers, if the exact point of impact of the bomb could be determined, and if satellite imagery of the area was available, then the investigator could determine whether the individual launching the attack intended to kill the rescuers. More recently, Adam Harvey used synthetic data to program AI to detect images containing evidence that the attacker had used a prohibited weapon. Taken together, the scraping of the internet for digital evidence and the use of AI make it possible to scan many recordings far more quickly than a human being could, in order to detect whether they contain evidence of war crimes. CNA has written about how one can harness AI to reduce harm to civilians. It proposes that military commanders can use AI to compare live imagery of the battlefield area with the imagery the commander relied on when conducting an initial evaluation of the degree of harm the attack was likely to inflict on civilians. The AI can then alert the commander and the person carrying out the attack if it detects that civilians have entered the area in question. These are not the only possible applications of AI which can improve the monitoring of compliance with IHL. Since AI can take information in different formats as inputs, and since it can detect patterns in data, it is possible to use AI as a tool to assist human beings in analysing how a “reasonable” commander is likely to have made decisions relating to the planning and conduct of a particular military operation or series of operations.
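
To make the comparison of planning-stage imagery with live imagery more concrete, the following is a minimal illustrative sketch in Python of how such an alerting step could be structured. It is not the system CNA describes: the `StrikeAreaMonitor` class, the `detector` callable (standing in for a hypothetical computer-vision model trained to recognise civilians) and the stub data are all assumptions made purely for illustration.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# A detected person, expressed as a bounding box (x, y, width, height) in image coordinates.
Detection = Tuple[int, int, int, int]

@dataclass
class StrikeAreaMonitor:
    """Compares the imagery a commander relied on when planning an attack with live
    imagery of the same area, and raises an alert if additional civilians appear."""
    detector: Callable[[bytes], List[Detection]]  # hypothetical civilian-detection model
    baseline_count: int = 0

    def set_baseline(self, planning_image: bytes) -> None:
        # Civilians visible when the initial assessment of likely harm was made.
        self.baseline_count = len(self.detector(planning_image))

    def check_live_feed(self, live_image: bytes) -> bool:
        # True means: alert the commander and the person carrying out the attack.
        return len(self.detector(live_image)) > self.baseline_count


if __name__ == "__main__":
    # A stub detector standing in for a real model, so the sketch runs end to end.
    def stub_detector(image: bytes) -> List[Detection]:
        return [(0, 0, 10, 20)] if image == b"live-frame-with-civilian" else []

    monitor = StrikeAreaMonitor(detector=stub_detector)
    monitor.set_baseline(b"planning-image-of-empty-area")
    print(monitor.check_live_feed(b"live-frame-with-civilian"))  # True -> alert
```

The point of the sketch is the structure of the check rather than the detection itself: the hard problems lie in the reliability of the underlying model and the quality of the imagery, which the sketch deliberately leaves out.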

How one can use AI to aid the analysis of a military operation’s compliance with IHL

The military operations which the Russian Air Force carried out against Kyiv illustrate how one could use AI as a tool to help analyse the degree to which the conduct of a particular military operation deviated from how a “reasonable” commander would have planned it. Moreover, one could use AI to help determine whether launching successive intensive airstrikes at Ukrainian cities, such as Kyiv, amounts to committing acts of violence whose primary purpose is to spread terror among the civilian population, contrary to Art 51(2) AP I 1977. To see why AI can help assess the degree of compliance in this regard, it is first necessary to consider the law on the issue.

The International Committee of the Red Cross concluded that the prohibition in Art 51(2) AP I 1977 has customary international law status. Switzerland’s Basic Military Manual states that “it is prohibited to commit acts of violence or to threaten violence with the primary aim of spreading terror among the civilian population” (Art 27(2) and commentary at 70 DB). The Special Rapporteur of the UN Commission on Human Rights observed, in relation to the situation in the former Yugoslavia, that the Serb forces used a tactic of terrorising civilians by carrying out regular bombardments of cities (paras 17 and 20).

In evaluating compliance with Art 51(2) AP I 1977, one may have regard to whether the attacker failed to comply with other norms of IHL. In the Preamble of Resolution 53/164 on the situation of human rights in Kosovo, the UN General Assembly expressed grave concern about the systematic terrorisation of ethnic Albanians. The General Assembly inferred that this was the case from various acts, including indiscriminate shelling. Another prohibition relevant to the situation at hand can be found in Art 57(2)(a)(iii) AP I 1977, which prohibits launching an attack which may be expected to cause incidental harm to civilians that would be excessive in relation to the concrete and direct military advantage anticipated. This rule has customary international law status. The greater the degree to which an attack deviates from compliance with the principle of proportionality, the more likely it is that the commander intended to terrorise the civilian population. One could use AI to supplement human analysis by estimating the degree to which the military operation matches how a “reasonable” commander would have estimated the harm to civilians.

When programming AI, one could feed many battlefield scenarios into the system, pre-specify how particular variables affect the accuracy of a strike, and input information regarding how different factors affect the degree of harm to civilians that is likely to result. One could input information such as the type of weapon used, the weapon’s accuracy radius, the angle at which the munition engaged the object at the point of impact, and the type of civilian infrastructure in the area. At the stage of evaluating a particular military operation, one could use information such as satellite imagery and interviews with civilians in the area as inputs. The AI could then calculate the degree of harm a “reasonable” commander could have anticipated inflicting, by using computer modelling of the battlefield environment and data about the conduct of past military operations. The armed forces of various countries have been using software to model the degree of harm which an attack is likely to inflict on civilians (p 344). Moreover, one could estimate whether a “reasonable” commander would have used fewer drones, munitions, bombs or other means of warfare to destroy the military objectives in question using air strikes. One could then gather information from various sources about whether the attacker successfully destroyed any military objectives in the area. Subsequently, one could discuss whether, in the circumstances ruling at the time, the attack was likely to be disproportionate. If the attacker used more weapons than necessary to destroy the target on multiple occasions, launched multiple attacks in areas with no military objectives, or launched multiple disproportionate attacks, any of these factors can point towards the conclusion that the military commander launched the attacks with the intention to terrorise the civilian population.
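
By way of illustration only, the sketch below shows in Python how the two estimates discussed above, the harm a “reasonable” commander could have anticipated and the deviation between the means actually used and those arguably needed, might be combined. Every figure and name in it (the placeholder effect radii, `StrikeScenario`, the crude area-times-density formula) is an assumption made for the example; a real system would rest on validated collateral damage estimation models of the kind armed forces already use.

```python
import math
from dataclasses import dataclass

# Rough illustrative effect radii in metres by weapon type; placeholder values, not real data.
WEAPON_EFFECT_RADIUS_M = {"cruise_missile": 150.0, "glide_bomb": 200.0, "loitering_munition": 30.0}

@dataclass
class StrikeScenario:
    weapon_type: str
    munitions_used: int
    accuracy_radius_m: float           # accuracy radius of the weapon
    civilian_density_per_km2: float    # estimated from satellite imagery and interviews
    military_objectives_in_area: int   # objectives identified from available sources

def anticipated_civilian_harm(s: StrikeScenario) -> float:
    """Crude estimate of the number of civilians within the combined effect area of the strike."""
    radius_m = WEAPON_EFFECT_RADIUS_M.get(s.weapon_type, 100.0) + s.accuracy_radius_m
    affected_km2 = s.munitions_used * math.pi * (radius_m / 1000.0) ** 2
    return affected_km2 * s.civilian_density_per_km2

def deviation_from_benchmark(s: StrikeScenario, munitions_needed_per_objective: int = 1) -> float:
    """Ratio of munitions actually used to a benchmark of what was arguably needed.
    A ratio far above 1, or munitions expended where no objective was identified,
    flags the strike for closer human review; it does not by itself establish intent."""
    needed = s.military_objectives_in_area * munitions_needed_per_objective
    return float("inf") if needed == 0 else s.munitions_used / needed

if __name__ == "__main__":
    strike = StrikeScenario("cruise_missile", munitions_used=41, accuracy_radius_m=50.0,
                            civilian_density_per_km2=3500.0, military_objectives_in_area=2)
    print(f"Anticipated civilians in affected area: {anticipated_civilian_harm(strike):.0f}")
    print(f"Deviation from benchmark: {deviation_from_benchmark(strike):.1f}")
```

Even in this toy form, the output is a prompt for further inquiry rather than a finding: the density figure, the benchmark and the effect radii are exactly the kinds of inputs over which reasonable commanders may disagree.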

It is necessary to use AI with care, to acknowledge its limitations, and to employ AI as a tool rather than as a substitute for the traditional analysis. First, AI produces an output based on analysing information about past cases which it treats as being similar to the incident at hand (p 24; 107), rather than based on analysing information about the particular incident itself (p 105). This means that one should acknowledge the limitations of AI and use its outputs with care, especially if one employs such outputs as a basis for charging someone with a war crime. Given that it is possible to feed many battlefield scenarios into AI, to use synthetic data, and to program AI to adjust its predictions based on variables such as the weather, the outputs of AI can put the international community on notice of the need for further inquiry. The advantage of using synthetic data to program AI is that it makes it possible to generate many permutations of a particular scenario, varying factors ranging from how many civilians are in the area to the weather conditions.
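
As a purely illustrative sketch of the permutation point, the snippet below enumerates combinations of a few scenario parameters and samples from them. The parameters and their ranges are assumptions chosen for the example; actual synthetic-data pipelines, such as the one Adam Harvey describes, generate rendered imagery rather than simple parameter grids.

```python
import itertools
import random

# Illustrative parameter ranges for generating synthetic battlefield scenarios (assumed values).
CIVILIANS_IN_AREA = [0, 50, 200, 1000]
WEATHER = ["clear", "overcast", "rain", "snow"]
TIME_OF_DAY = ["day", "night"]
WEAPON_TYPE = ["cruise_missile", "glide_bomb", "loitering_munition"]

def generate_synthetic_scenarios(sample_size: int = 10, seed: int = 42) -> list:
    """Enumerates every permutation of the parameters above and draws a random sample,
    which could then be used to label synthetic imagery or to test a model's predictions."""
    all_scenarios = [
        {"civilians_in_area": c, "weather": w, "time_of_day": t, "weapon_type": wt}
        for c, w, t, wt in itertools.product(CIVILIANS_IN_AREA, WEATHER, TIME_OF_DAY, WEAPON_TYPE)
    ]
    random.seed(seed)
    return random.sample(all_scenarios, min(sample_size, len(all_scenarios)))

if __name__ == "__main__":
    for scenario in generate_synthetic_scenarios(sample_size=3):
        print(scenario)
```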

Second, AI outputs should always be treated as approximations and should supplement the traditional analysis by a human being. Applying the principle of proportionality involves evaluative standards and the exercise of discretion (p 298). As the ICTY Prosecutor acknowledged, while commanders will in “many cases” reach agreement over how a “reasonable commander” would have exercised discretion in the course of applying the principle of proportionality, commanders with “different doctrinal backgrounds, differing degrees of combat experience or national military histories” may disagree “in close cases” (para 50). It is important that, in using AI, one does not change the nature of the analysis which IHL requires by attaching a mathematical value to human lives (pp 268-269). Rather, a human being can use AI’s assessment of how many individuals a military operation was likely to harm to supplement the traditional analysis. While more research needs to be carried out in order to create an AI system such as the one described here, the present discussion shows that individuals could employ AI as a tool to help inform their analysis of what further information needs to be gathered to establish whether a military operation complied with IHL.
