U.S. Statement at the GGE on LAWS During the Discussion of Agenda Item 5(d)
August 5, 2021

“Reviewing Potential Military Applications of Emerging Technologies in the Areas of Lethal Autonomous Weapons Systems” (LAWS)

Group of Governmental Experts (GGE) on Emerging Technologies in the Area of LAWS

Delivered by Michael Colin Sullivan

1st session of the 2021 Group of Governmental Experts (GGE) on emerging technologies in the area of lethal autonomous weapons systems (LAWS)

This presentation draws on a working paper the United States submitted in 2019, which reflects U.S. practice in the development and use of autonomous functions in weapon systems.

This presentation will address:

(1) application of IHL requirements in three general scenarios for the use of autonomous functions in weapon systems;

(2) steps that States can take to help implement IHL requirements; and

(3) the potential for emerging technologies in the area of LAWS to strengthen implementation of IHL.

We believe that the issues addressed in this presentation will support the GGE’s work on the legal, technological, and military aspects of emerging technologies in the area of LAWS, including its work on the normative and operational framework.  Even though we are making this presentation during agenda item 5(d), “Review of potential military applications of related technologies in the context of the Group’s work,” the issues addressed in this presentation are relevant to other agenda items, including agenda items 5(a), 5(b), and 5(c).

For example:

  • How IHL requirements apply can depend on how autonomous functions in weapon systems are used. As a result, we hope that the three general use scenarios I will discuss in my presentation can inform work under agenda item 5(a), relating to IHL.
  • These three use scenarios can also inform our work under agenda item 5(b), relating to characteristics, because one of the relevant characteristics that many delegations seem to have identified is the human element – how people are using the weapon system.
  • Similarly, how the weapon system is used can affect the appropriate modalities and practices for human-machine interaction, informing our work under agenda item 5(c).

Additionally, in discussions this week, some delegations have argued that the use of autonomous functions would necessarily result in a loss of control or even a delegation of decision-making to machines.  However, in our view, the appropriate use of autonomy can improve the degree of control that human beings exercise over the use of force.

Although a wide range of IHL requirements could be implicated by the use of autonomous weapon systems, this presentation focuses on IHL issues presented by the use of autonomous functions in weapon systems to select and engage targets.

The following IHL requirements are of particular relevance:

(a) Distinction: “Combatants may make military objectives the object of attack, but may not direct attacks against civilians, civilian objects, or other protected persons and objects.”

(b) Proportionality: “Combatants must refrain from attacks in which the expected loss of life or injury to civilians, and damage to civilian objects incidental to the attack, would be excessive in relation to the concrete and direct military advantage expected to be gained.”

(c) Precautions: “Combatants must take feasible precautions in planning and conducting attacks to reduce the risk of harm to civilians and other persons and objects protected from being made the object of attack.”

It may be useful to consider that:

  • these requirements impose duties on humans, and not on machines;
  • they are implemented in military operations through responsible commands; and
  • not every duty will be implemented by every human within the command.

We’d also note that these requirements address “attacks,” rather than the firing or activation of weapon systems as such.  For example, the single firing of a weapon system might only be one part of an “attack,” and the mere activation of a weapon system might not constitute an “attack” at all.

The first general scenario is fairly simple.  A weapon system’s autonomous function could be used to effectuate a commander’s or operator’s intent to strike a specific target or target group more accurately and reliably.

For example, the High-Speed Anti-Radiation Missile, otherwise known as HARM or AGM-88, pictured here, is a tactical air-to-surface anti-radiation missile that has been in service since 1985.  Currently, HARM is in service with nine countries around the world, including NATO members as well as users in the Indo-Pacific, European, Middle Eastern, and African regions.

The system is used in the following way.  The operator identifies an enemy surface-to-air missile system and fires a missile at it.  Rather than only being guided by the operator’s aiming of the missile at the target, the missile also has sensors and computers that provide it the capability to recognize enemy surface-to-air missile systems and, after being fired, the missile automatically identifies, acquires, and guides itself to the target that the operator intended to strike.

I would like to point out here that the weapon system is, after activation, selecting and engaging a target without the intervention of a human operator.  However, the human operator has chosen a specific target or target group before firing the weapon.  Thus, the weapon is classified as a “semi-autonomous weapon system” under U.S. military definitions.
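To make that division of roles concrete, here is a minimal sketch, in Python, of the kind of logic just described.  All names and values (EmitterTrack, signature classes, bearings) are invented for illustration; this is not a description of HARM’s actual software.  The structural point is that the human operator fixes the target group before launch, and the post-launch autonomy only finds and homes on emitters within that selection.

```python
from dataclasses import dataclass

@dataclass
class EmitterTrack:
    """A radar emitter detected by the seeker (hypothetical structure)."""
    emitter_id: str
    signature: str      # measured waveform class, e.g. a radar type
    bearing_deg: float

def guidance_step(tracks: list[EmitterTrack],
                  selected_signatures: set[str]) -> EmitterTrack | None:
    """One post-launch cycle: home only on operator-selected emitter types.

    The operator's pre-launch selection bounds what may be engaged; the
    autonomy lies in finding and homing on that selection, not in
    choosing new targets.
    """
    for track in tracks:
        if track.signature in selected_signatures:
            return track          # steer toward the matched emitter
    return None                   # no valid target: engage nothing

# The operator selects one signature class before launch.
selected = {"SAM_RADAR_TYPE_A"}           # human choice, made pre-launch
detections = [
    EmitterTrack("e1", "CIVIL_ATC_RADAR", 10.0),   # not selected: ignored
    EmitterTrack("e2", "SAM_RADAR_TYPE_A", 42.0),  # selected: homed on
]
target = guidance_step(detections, selected)
print("homing on:", target.emitter_id if target else "none")  # -> e2
```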

In this scenario, the analysis of IHL requirements proceeds almost identically as in a case of the use of a so-called “dumb” weapon without the autonomous function because the weapon is being used no differently.  The addition of autonomous functions, however, is intended to enhance the effectiveness of the weapon system by making it more accurate and precise in striking military objectives.

In this kind of scenario, the addition of autonomous functions would render the use of the missile illegal if the autonomous functions actually made the missile inherently indiscriminate, that is, incapable of being used consistent with the principles of distinction and proportionality.  On the other hand, it would be reasonable and consistent with IHL to rely on the autonomous function in the missile to identify, acquire, and guide itself to the target or target group, to the degree that the autonomous function performed accurately and consistently in selecting and engaging the correct targets.  For example, if testing indicated that addition of the autonomous function to an already lawful missile system served only to improve its accuracy in striking military objectives, then it clearly would be appropriate to rely on the autonomous function.

This first scenario illustrates that if weapons systems with autonomy in targeting functions are used in the same way as weapon systems lacking such capabilities, they do not seem to present new issues of IHL compliance.  Moreover, when such a reliable autonomous function is available, the use of such a weapon could also be deemed a feasible precaution to reduce the risk of civilian casualties.

Second, a weapon system’s autonomous functions could inform a commander or operator’s decision-making about what targets he or she intends to strike.  For example, the computers and sensors on the weapon system could generate an assessment of a potential target that the operator would consider along with other relevant information, such as the operational context in which the weapon is deployed, in deciding whether to engage a target.  These types of computers and sensors also could, in principle, be distinct from the weapon system used to engage a target.  For example, counter-battery radar systems are used to identify the location from which incoming rockets, artillery, and mortars were launched.  This location can be used to direct counter-battery fire by an artillery system.  These systems have been used for many decades.  Current examples include the U.S. AN/TPQ-53 Counterfire Radar System, in use since 2010, as well as the Counter Battery Radar (otherwise known as COBRA), developed by other countries and placed in service in 2005.
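As a rough illustration of the computation a counter-battery radar performs, the sketch below back-extrapolates a drag-free ballistic arc from two radar fixes to estimate the launch point.  It is a toy model with invented inputs, not the algorithm of any fielded radar, which must also account for drag, wind, and measurement noise.

```python
import math

G = 9.81  # gravity, m/s^2 (drag ignored in this toy model)

def estimate_launch_point(t1, x1, y1, t2, x2, y2):
    """Estimate launch time and ground position from two radar fixes.

    Assumes a drag-free ballistic arc observed at (t1, x1, y1) and
    (t2, x2, y2), where x is horizontal range and y is height.
    """
    vx = (x2 - x1) / (t2 - t1)                       # horizontal speed
    # Solve y_i = y0 + vy0*t_i - 0.5*G*t_i^2 for vy0 and y0.
    vy0 = ((y2 - y1) + 0.5 * G * (t2**2 - t1**2)) / (t2 - t1)
    y0 = y1 - vy0 * t1 + 0.5 * G * t1**2
    # Launch occurs when height is zero: 0.5*G*t^2 - vy0*t - y0 = 0.
    disc = math.sqrt(vy0**2 + 2 * G * y0)
    t_launch = (vy0 - disc) / G                      # earlier quadratic root
    x_launch = x1 + vx * (t_launch - t1)
    return t_launch, x_launch

# Synthetic shell: launched at t=0, x=0 with vx=50 m/s, vy=100 m/s,
# observed at t=3 s and t=5 s along its arc.
t, x = estimate_launch_point(3.0, 150.0, 255.855, 5.0, 250.0, 377.375)
print(f"estimated launch: t={t:.2f} s, x={x:.1f} m")  # -> t=0.00 s, x=0.0 m
```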

The United States discussed the AN/TPQ-53 Counterfire Radar System at the 2019 GGE, and that intervention is available as a link in our submission to the Chair on U.S. practice.

This scenario presents a general issue of when it is permissible or appropriate to rely on autonomous functions to aid in decision-making in armed conflict.

Armed conflict, and combat operations in particular, takes place in a difficult decision-making environment – often referred to as the “fog of war.”  The facts may be difficult to discern due to the efforts by the adversary to deceive, as well as the stress and chaos accompanying combat operations, including the constant threat of attack by the adversary.  Recognizing that information during armed conflict may be imperfect or lacking, commanders and other decision-makers must make a good faith assessment of the information that is available to them at the time when conducting military operations.

Whether it is permissible under IHL to rely on an automated assessment to inform decisions about whether a target is a military objective would depend on whether such reliance was consistent with the exercise of due regard to reduce the risk of incidental harm to the civilian population and other persons and objects that may not be made the object of attack.  This would depend, inter alia, on:

  • an understanding of how accurately and consistently the machine performs in not mischaracterizing civilian objects as military objectives;
  • the operator giving the automated assessment appropriate weight relative to other information that would be probative of whether the target, in fact, was a military objective; and
  • the urgency to make a decision.

For example, if the automated assessment has a very low rate of “false positives” in which it mischaracterizes civilian objects as military objectives, the operational context corroborated the automated assessment, and the context involved combat operations, then it would seem to be reasonable to rely on the assessment to conclude that the target was a military objective and, provided other IHL requirements were met, to strike the target.  On the other hand, if the system performed with a significant rate of “false positives,” there was no need for a rapid decision, and the contextual factors contradicted the automated assessment, then it would not seem to be reasonable for a person to rely on the automated assessment to conduct a strike immediately, rather than first seeking further information.
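The interplay of those factors can be sketched schematically.  The toy function below encodes only the two poles just described, with invented thresholds; it is not a legal test, and the weighing it caricatures remains a human judgment.

```python
def reliance_judgment(false_positive_rate: float,
                      context_corroborates: bool,
                      decision_is_urgent: bool) -> str:
    """Toy encoding of the two poles described above (thresholds invented).

    It only illustrates how a low error rate and corroborating context
    push toward reliance, while a high error rate, contradicting context,
    and no urgency push toward seeking further information first.
    """
    if false_positive_rate <= 0.01 and context_corroborates:
        return "reliance may be reasonable, subject to other IHL requirements"
    if (false_positive_rate > 0.10 and not context_corroborates
            and not decision_is_urgent):
        return "seek further information before striking"
    return "weigh the assessment against the other available information"

print(reliance_judgment(0.005, True, True))    # the first pole
print(reliance_judgment(0.25, False, False))   # the second pole
```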

I want to highlight in this scenario the importance of considering human-machine interaction more broadly and not just “human control.”  In this scenario, there is full control over the weapon system, but it would be inappropriate if human beings negligently deferred to the automated assessments and did not exercise appropriate judgment in the circumstances.

Third, a weapon system’s autonomous function could be used by a commander or operator to select and engage specific targets that the commander or operator did not know of when he or she activated the weapon system.  For example, a commander might assess that there is a general risk of enemy missile or rocket attacks against a given location or against a given unit or platform, but the commander might not know of a specific incoming missile or rocket attack.  In order to protect that location, unit, or platform, the commander might direct the activation of a weapon system, such as an active protection system, that would select and engage incoming projectiles automatically if such an attack occurs.  One example is the Counter-Rocket, Artillery, Mortar, or “C-RAM,” Intercept Land-Based Phalanx Weapon System, which detects and intercepts indirect fire.  It has been credited with hundreds of intercepts of rockets and mortars fired at friendly military units with no fratricides or civilian casualties.
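The engagement gate of such a system can be sketched as follows.  The names, fields, and the 500-meter defended radius are invented for illustration; the structural point is that the system is designed to engage only tracks classified as incoming projectiles threatening the defended area, and nothing else.

```python
from dataclasses import dataclass

@dataclass
class Track:
    classification: str        # e.g. "PROJECTILE", "AIRCRAFT", "BIRD"
    inbound: bool              # closing on the defended area?
    predicted_impact_m: float  # predicted impact distance from the site

DEFENDED_RADIUS_M = 500.0      # invented value, for illustration only

def should_intercept(track: Track) -> bool:
    """Engage only incoming projectiles predicted to strike the site.

    Aircraft, birds, and outbound or off-axis objects fail the gate by
    design, which is how activation of the system stays directed at
    incoming munitions only.
    """
    return (track.classification == "PROJECTILE"
            and track.inbound
            and track.predicted_impact_m <= DEFENDED_RADIUS_M)

print(should_intercept(Track("PROJECTILE", True, 120.0)))  # -> True
print(should_intercept(Track("AIRCRAFT", True, 80.0)))     # -> False
```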

In this scenario, although the commander or operator would not expressly intend to strike a specific target or target group when activating the weapon system, the weapon system could nonetheless be operated consistent with the IHL requirements of distinction, proportionality, and precaution.

The commander or operator could act consistently with the principle of distinction by intending to make only potential targets constituting military objectives the object of attack, provided that the autonomous functions in the weapon system perform with sufficient reliability to ensure that force can be directed against such targets.

The commander or operator could act consistently with the principle of proportionality by assessing that the risk of civilian casualties from the activation of the weapon would not be excessive in relation to the military advantage expected to be gained.  An assessment of the risk of civilian casualties could be informed by a variety of factors, including any precautions taken to reduce that risk.  For example, if the weapon’s autonomous function performed accurately and reliably to fire only against incoming projectiles and used rounds that self-destructed in flight to reduce the potential for the round to cause harm if it missed the target, then the commander might reasonably be able to make such an assessment.  Warnings to potential civilian air traffic and monitoring the operation of the weapon system could also be important precautions to take that would reduce the risk of civilian casualties and ensure that the activation of the system would not be excessive.
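A toy tally can show why each of the precautions named above matters.  The figures and the independence assumption below are purely illustrative; a real proportionality assessment is a contextual human judgment, not a formula.

```python
def residual_risk(p_miss: float,
                  p_survives_self_destruct: float,
                  p_airspace_occupied: float) -> float:
    """Each precaution named above multiplies down the residual risk
    (independence is assumed here purely for simplicity):

    - accurate fire control lowers p_miss;
    - self-destructing rounds lower p_survives_self_destruct;
    - warnings to civilian air traffic lower p_airspace_occupied.
    """
    return p_miss * p_survives_self_destruct * p_airspace_occupied

# Invented figures: 2% miss rate, 1% self-destruct failure rate,
# 5% chance a stray round's path is occupied.
print(residual_risk(0.02, 0.01, 0.05))  # ≈ 1e-05, about one in 100,000
```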

I appreciate you listening to our presentation of these three use scenarios and how IHL applies to them.  Our 2019 working paper discusses these scenarios in detail, and we also have a paper submitted this year that provides concrete proposals for the GGE’s consideration on how IHL applies to these scenarios.

Considering these scenarios helps highlight the importance of practical measures to implement IHL requirements in respect of autonomous functions in weapon systems.  These practical measures include: 1) rigorous testing to assess system performance and reliability; 2) establishing doctrine, training, and procedures to ensure that weapons are used in accordance with how they have been designed, tested, and reviewed; and 3) the legal review of weapons prior to their use.

Rigorous testing to assess weapon system performance and reliability supports compliance with IHL.  For example, in the first scenario, if the autonomous function performs erratically, randomly engaging civilian objects rather than the intended military targets, then it raises concerns that the weapon would be prohibited as inherently indiscriminate.  Similarly, in the second scenario, the degree to which the autonomous function misclassifies civilian objects as military objectives would be a significant factor in whether it would be reasonable to rely on the autonomous function’s assessment in conducting a strike.  States have every incentive to develop reliable systems; rigorous testing of systems reflects a strong convergence of military and humanitarian interests.
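A minimal sketch of the kind of measurement such testing produces: a harness that computes false-positive and false-negative rates over labeled trials.  The data structure and labels are invented; real test programs would use far larger and operationally representative trial sets.

```python
def misclassification_rates(results):
    """Compute false-positive and false-negative rates from labeled trials.

    `results` is a list of (truth, predicted) pairs where each element is
    "military" or "civilian".  Rates measured over a large, representative
    trial set are the kind of evidence that informs whether reliance on
    an autonomous function is reasonable.
    """
    fp = sum(1 for truth, pred in results
             if truth == "civilian" and pred == "military")
    fn = sum(1 for truth, pred in results
             if truth == "military" and pred == "civilian")
    civ = sum(1 for truth, _ in results if truth == "civilian")
    mil = sum(1 for truth, _ in results if truth == "military")
    return (fp / civ if civ else 0.0, fn / mil if mil else 0.0)

# Example: three trials, one civilian object misclassified as military.
trials = [("civilian", "military"), ("military", "military"),
          ("civilian", "civilian")]
fpr, fnr = misclassification_rates(trials)
print(f"false-positive rate {fpr:.0%}, false-negative rate {fnr:.0%}")
# -> false-positive rate 50%, false-negative rate 0%
```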

Establishing doctrine, training, and procedures for the weapon system is another important mechanism that helps ensure that the weapon is used consistent with IHL.  How IHL considerations are applied could depend on how the weapon is to be used.  If a weapon system were developed with a particular concept of employment in mind (for example, scenario 1 with homing munitions), it might create unanticipated problems if the weapon were used in ways not contemplated by those who developed and tested the weapon system.

The legal review of weapons prior to their use enables the State developing or acquiring the weapons to consider relevant IHL issues, including precautions to reduce the risk of civilian casualties.  The legal review of the weapon also affords an opportunity to ensure that designers and developers of the weapon system and others tasked with ensuring the reliability of the weapon system have applied their expertise.  Similarly, the legal review of the weapon also provides a mechanism for reviewing and considering additional doctrine, training, and procedures that would help ensure the weapon is used consistent with IHL.

One of the guiding principles for the GGE’s work is that “[c]onsideration should be given to the use of emerging technologies in the area of lethal autonomous weapons systems in upholding compliance with IHL and other applicable international legal obligations.”  Thus, it is important to consider how emerging technologies in the area of LAWS can strengthen implementation of IHL requirements.

As the United States has noted in its working paper on humanitarian benefits of emerging technologies in the area of lethal autonomous weapon systems submitted at the August 2018 GGE meeting, new advancements in autonomy in weapon systems hold great promise for strengthening the implementation of IHL.  For example, as I’ve previously mentioned during this presentation, autonomous functions could be used to make weapons more precise and increase the accuracy of human decision-making in stressful and time-critical situations.

In addition, emerging technologies in the area of LAWS could have benefits that extend beyond simply reducing the risk of civilian casualties in military operations.

For example, emerging technologies in the area of LAWS could strengthen efforts to ensure accountability over the use of force by having system logs that automatically record the operation of the weapon system.  This kind of recording could facilitate investigations of both the weapon system’s performance and use.
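A minimal sketch of such a log, assuming a simple append-only file of timestamped JSON records; the field names are invented for illustration.

```python
import json
import time

def log_engagement_event(log_path: str, event: dict) -> None:
    """Append one structured, timestamped record of system operation.

    An append-only log like this could support later review of both the
    weapon system's performance and how it was used.
    """
    record = {"timestamp_utc": time.time(), **event}
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_engagement_event("engagements.jsonl", {
    "event": "intercept",
    "track_id": "T-0042",
    "operator_mode": "supervised",
    "outcome": "target_destroyed",
})
```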

Automated systems also could identify incidents meriting further review or investigation.  By way of comparison, some banks, credit card companies, and other financial institutions use automated systems to identify suspicious activity and potentially fraudulent transactions.  Weapons systems with autonomous functions could similarly be programmed with reporting mechanisms to highlight unusual uses meriting further review.
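By analogy with those fraud screens, a simple outlier rule could flag periods of unusual activity for human review.  The z-score rule and the 2.5-sigma threshold below are illustrative choices, not a fielded design.

```python
from statistics import mean, stdev

def flag_unusual(rates_per_hour: list[float],
                 threshold_sigma: float = 2.5) -> list[int]:
    """Flag hours whose engagement rate is a statistical outlier.

    Like a bank's fraud screen, deviations from the system's own
    historical pattern are surfaced for human review.
    """
    mu, sigma = mean(rates_per_hour), stdev(rates_per_hour)
    if sigma == 0:
        return []
    return [i for i, r in enumerate(rates_per_hour)
            if abs(r - mu) / sigma > threshold_sigma]

# Hour 5 saw a burst of activity far outside the usual pattern:
print(flag_unusual([2, 3, 2, 4, 3, 40, 2, 3, 2, 3]))  # -> [5]
```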

Lastly, automated tracking systems could assist in the tracking of unexploded ordnance and fulfilling associated responsibilities under the CCW Protocol V on Explosive Remnants of War. For example, a weapon system that automatically tracked its own fires could identify and record the location where its ordnance did not explode as intended, thereby facilitating the clearance of explosive remnants of war.
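A minimal sketch of such a record, assuming each round carries an identifier and an impact grid reference and that detonations can be confirmed; the names and grid references are invented.

```python
def unexploded_ordnance(fired_events: dict[str, str],
                        detonations: set[str]) -> dict[str, str]:
    """Return fire events with no matching detonation confirmation.

    `fired_events` maps round_id -> impact grid reference; `detonations`
    is the set of round_ids confirmed to have functioned.  The remainder
    is a candidate list of unexploded-ordnance locations to record and
    hand over for clearance, per CCW Protocol V responsibilities.
    """
    return {rid: grid for rid, grid in fired_events.items()
            if rid not in detonations}

fired = {"r1": "38S MB 123 456", "r2": "38S MB 124 457",
         "r3": "38S MB 125 458"}
confirmed = {"r1", "r3"}
print(unexploded_ordnance(fired, confirmed))  # -> {'r2': '38S MB 124 457'}
```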

To summarize:

(1) Existing IHL, including the requirements of distinction, proportionality, and precaution, provides a comprehensive framework to govern the use of autonomy in weapon systems.

(2) Internal procedures for review and testing, including the legal review of weapons, are essential good practices for implementing IHL requirements; and

(3) Emerging technologies in the area of LAWS could strengthen the implementation of IHL by, for example,

  • reducing the risk of civilian casualties,
  • allowing for more informed decision-making,
  • facilitating the investigation or reporting of incidents involving potential violations,
  • enhancing the ability to implement corrective actions, and
  • automatically generating information on unexploded ordnance.

Thank you.