Agenda item 5(a):
An exploration of the potential challenges posed by emerging technologies in the area of LAWS to International Humanitarian Law
Delivered by Charlie Trumbull
1st session of the 2021 Group of Governmental Experts (GGE) on emerging technologies in the area of lethal autonomous weapons systems (LAWS)
August 4, 2021
Thank you, Mr. Chair.
With your indulgence, I would like to make an intervention on agenda item 5(a).
Guiding Principle (a) reflects the foundational premise that IHL applies to all weapons systems, and the GGE’s 2019 report contains significant conclusions on IHL. Of course, much more work can be done on this agenda item. Understanding how IHL applies to the use of emerging technologies in the area of LAWS is critical to effectively implementing the other guiding principles, including guiding principles (b), (c), (d), (e), and (h).
Mr. Chair, the GGE should build on its successful work on IHL by further clarifying IHL requirements applicable to the use of emerging technologies in the area of LAWS. In particular, we have recommended that the GGE conclude that:
- Consistent with IHL, autonomous functions may be used to effectuate more accurately and reliably a commander or operator’s intent to strike a specific target or target group.
- The addition of autonomous functions, such as the automation of target selection and engagement, to weapon systems can make weapons more precise and accurate in striking military objectives by allowing weapons or munitions to “home in” on targets selected by a human operator.
- If the addition of autonomous functions to a weapon system makes it inherently indiscriminate, i.e., incapable of being used consistent with the principles of distinction and proportionality, then any use of that weapon system would be unlawful.
- The addition of autonomous functions to a weapon system can strengthen the implementation of IHL when these functions can be used to reduce the likelihood of harm to civilians and civilian objects.
A number of colleagues highlighted similar points yesterday, so we would suggest that these could be fruitful areas for consensus recommendations or conclusions.
We have also proposed that the GGE build on the 2019 report, which recognized the importance of precautions, by elaborating on the types of precautions that States have employed in using weapon systems with autonomous functions. On page 4 of our national commentary, we proposed the following:
Feasible precautions must be taken in the use of weapon systems that autonomously select and engage targets to reduce the expected harm to civilians and civilian objects. Such precautions may include:
- Warnings (e.g., to potential civilian air traffic or notices to mariners);
- Monitoring the operation of the weapon system; and
- Activation or employment of self-destruct, self-deactivation, or self-neutralization mechanisms (e.g., use of rounds that self-destruct in flight or torpedoes that sink to the bottom if they miss their targets).
Reaching more granular understandings of IHL requirements, such as these, would strengthen the normative and operational framework. For example, such understandings would improve the ability to conduct legal reviews of weapons, to train personnel to comply with IHL requirements, and to apply principles of State and individual responsibility.
We plan to discuss the issue of human-machine interaction in greater detail under the appropriate agenda item later this week, but let me note that, in our view, IHL does not establish a requirement for “human control” as such. Rather, IHL seeks, inter alia, to ensure that the use of weapons is consistent with the fundamental principles and requirements of distinction, proportionality, and precautions. Introducing new and vague requirements, such as “human control,” could, we believe, confuse rather than clarify, especially if these proposals are inconsistent with long-standing, accepted practice in using many common weapon systems with autonomous functions.
The application of IHL to the use of emerging technologies in the area of LAWS is a critical topic, and as we noted in our opening statement, we would recommend focusing on it as one of four aspects of the normative and operational framework that our mandate asks us to explore and develop. I would also like to respond to some of the perspectives raised in yesterday’s discussion.
First, we have heard a number of delegations question whether autonomous weapons could be capable of distinguishing between civilians and combatants, distinguishing between combatants and persons placed hors de combat, or making the assessments required by the principle of proportionality. Yet this GGE has repeatedly recognized that IHL does not impose obligations on weapon systems, which, as objects, cannot assume obligations. Accordingly, IHL does not require a weapon to distinguish between military objectives and persons and objects that are protected from being made the object of attack, or to assess whether an attack is expected to cause excessive death or injury to civilians and damage to civilian objects. Rather, IHL imposes obligations on persons. It is the person who makes the decision governed by IHL, such as the military commander who directs the attack, who is obliged to meet the applicable IHL requirements. In this sense, we don’t view autonomy in weapon systems as replacing humans in making the judgments required by IHL. So, for example, the critical consideration is not whether a weapon system itself can distinguish between military objectives and protected persons and objects, or whether a weapon system can make proportionality assessments as a human would, but rather whether such weapon systems can, under the circumstances, be used by a human operator or commander consistent with IHL requirements.
Second, the ICRC and other delegations have noted that certain uses of autonomy could pose risks for compliance with IHL. We agree that virtually any advance in technology can present a risk if used in an irresponsible or unlawful manner. However, the fact that a technology poses risks does not mean that it should be prohibited. As the Ambassador from India and other speakers reminded us yesterday, technology can also be used to achieve benefits, such as greater precision and reduced risk of civilian casualties. We should not demonize technology. Rather, we should carefully weigh the risks of emerging technologies against their benefits in the specific circumstances, for example by using risk assessments and mitigation measures, as the GGE has recognized in Guiding Principle (g). It provides:
- Risk assessments and mitigation measures should be part of the design, development, testing and deployment cycle of emerging technologies in any weapons systems;
Third, as the French Ambassador noted, the proposal to prohibit “unpredictable” weapons also warrants much greater discussion before the GGE could reach consensus on it. In line with the comments of the German Ambassador yesterday, it may be more useful to focus on practices for ensuring that a weapon system does what commanders or operators intend. Indeed, a core purpose of the U.S. Department of Defense’s Directive 3000.09, “Autonomy in Weapons Systems,” is to “[e]stablish[] guidelines designed to minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended engagements.” We have shared this practice in the interest of transparency and encourage other States to share their practice as well.
Lastly, we do not necessarily regard reliance on autonomous functions in weapon systems as a delegation of life-and-death decisions to algorithms.
- Sensors and software are also used in many safety-critical applications, including in normal civilian life. Just because the functioning of software or algorithms can have consequences for human life does not mean that the software is a moral agent or that there is no human responsible for these consequences.
- In addition, using software to select and engage a target doesn’t necessarily mean that a person has not exercised appropriate levels of human judgment to ensure compliance with IHL. Autonomous functions have been used in weapon systems consistent with IHL for many years, including to select and engage targets. In existing practice, decisions about life and death have been made through military decision-making processes at the strategic, operational, and tactical levels, including the targeting process.
- Moreover, using software or autonomous functions does not mean that a person cannot be held accountable for wrongdoing. Established legal principles of accountability continue to apply when persons use emerging technologies in the area of LAWS. The U.S. delegation has proposed a number of conclusions relating to human responsibility to emphasize this point, which we plan to discuss in more detail later.