U.S. Statement at the GGE on LAWS – Agenda Item 5(c)
August 4, 2021

Agenda item 5(c)

Further consideration of the human element in the use of lethal force and aspects of human-machine interaction in the development, deployment and use of emerging technologies in the area of LAWS

Delivered by Amanda Wall

1st session of the 2021 Group of Governmental Experts (GGE)
on emerging technologies in the area of lethal autonomous weapons systems (LAWS)

Geneva, August 4, 2021

We have appreciated the emerging consensus for further work in the area of human-machine interaction.  This is an area in which further common understandings can and should be built.  Guiding Principle (c) is an excellent foundation on which to build these additional common understandings.  This guiding principle recognizes that human-machine interaction should ensure IHL compliance, and also recognizes the need to consider human-machine interaction comprehensively, across the life cycle of the weapon system.  Therefore, in our view, a positive next step for the GGE in this area would be to elaborate on good practices in human-machine interaction that can strengthen compliance with IHL.  As our UK colleagues very clearly and helpfully noted this morning, IHL is specially adapted to regulating the use of weapons in armed conflict, and our discussion of human-machine interaction should therefore, as reflected in Guiding Principle (c), be for the purpose of strengthening implementation of IHL.

The United States proposed a new conclusion on human-machine interaction for the GGE’s consideration, along these lines.  It begins by stating that:

“Weapons systems based on emerging technologies in the area of LAWS should effectuate the intent of commanders and operators to comply with IHL, in particular, by avoiding unintended engagements and minimizing harm to civilians and civilian objects.”

This conclusion is drawn from real-world practice in human-machine interaction and also recognizes that IHL imposes requirements on human beings.  Our proposal then goes on to elaborate three categories of measures across the lifecycle of the weapon:

  • Weapons systems based on emerging technologies in the area of LAWS should be engineered to perform as anticipated. This should include verification and validation, as well as testing and evaluation, before fielding systems.
  • Relevant personnel should properly understand weapons systems based on emerging technologies in the area of LAWS. Training, doctrine, and tactics, techniques, and procedures should be established for the weapon system.  Operators should be certified by relevant authorities as having been trained to operate the weapon system in accordance with applicable rules.  And,
  • User interfaces for weapons systems based on emerging technologies in the area of LAWS should be clear in order for operators to make informed and appropriate decisions in engaging targets. In particular, interfaces between people and machines for autonomous and semi-autonomous weapon systems should: (i) be readily understandable to trained operators; (ii) provide traceable feedback on system status; and (iii) provide clear procedures for trained operators to activate and deactivate system functions.

We are interested in the views of GGE participants on these proposed new conclusions.  We believe that the GGE could support States in implementing IHL by endorsing these good practices.  We also believe that these good practices provide a basis for further discussion and intergovernmental exchanges.  For example, one of the Department of Defense’s principles for the ethical use of artificial intelligence is such a good practice in ensuring that relevant personnel properly understand the weapon system, if AI capabilities are used in an autonomous or semi-autonomous weapon system.  This DoD AI ethical principle is called “Traceable” and provides that:

The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.

Additionally, to follow up on a point that a variety of delegations have mentioned, including our colleagues from the UK and France this morning, human responsibility is a critical aspect of the normative and operational framework for the use of emerging technologies in the area of LAWS.  We therefore believe it would be productive for the GGE to address how well-established international legal principles of State and individual responsibility apply to States and persons who use weapon systems with autonomous functions.   The United States has proposed eight new conclusions on human responsibility for the GGE’s consideration.

  1. Under principles of State responsibility, every internationally wrongful act of a State, including such acts involving the use of emerging technologies in the area of LAWS, entails the international responsibility of that State.
  2. A State remains responsible for all acts committed by persons forming part of its armed forces, including any such use of emerging technologies in the area of LAWS, in accordance with applicable international law.
  3. An individual, including a designer, developer, an official authorizing acquisition or deployment, a commander, or a system operator, is responsible for his or her decisions governed by IHL with regard to emerging technologies in the area of LAWS.
  4. Under applicable international and domestic law, an individual remains responsible for his or her conduct in violation of IHL, including any such violations involving emerging technologies in the area of LAWS. The use of machines, including emerging technologies in the area of LAWS, does not provide a basis for excluding legal responsibility.
  5. The responsibilities of any particular individual in implementing a State's or a party to a conflict's obligations under IHL may depend on that person's role in the organization or military operations, including whether that individual has the authority to make the decisions and judgments necessary to the performance of that duty under IHL.
  6. Under IHL, a decision, including decisions involving emerging technologies in the area of LAWS, must be judged based on the information available to the decision-maker at the time and not on the basis of information that subsequently becomes available.
  7. Unintended harm to civilians and other persons protected by IHL from accidents or equipment malfunctions, including those involving emerging technologies in the area of LAWS, is not a violation of IHL as such. And,
  8. States and parties to a conflict have affirmative obligations with respect to the protection of civilians and other classes of persons under IHL, which continue to apply when emerging technologies in the area of LAWS are used. These obligations are to be assessed in light of the general practice of States, including common standards of the military profession in conducting operations.

We look forward to discussing these and other proposals with other delegations.  In particular, we appreciated the intervention of our colleague from the Netherlands, which highlighted specific examples for consideration by the group.

I also wanted to thank the Indian Ambassador for considering the U.S. Department of Defense definition of an autonomous weapon system, and I wanted to try to address the question that you posed.

By way of clarification, the DoD directive setting out this definition was developed after a survey of existing U.S. practice in using autonomy in weapon systems.  The U.S. DoD definition of autonomous weapon systems was intended to include existing weapon systems like the Counter-Rocket, Artillery, and Mortar system and the AEGIS weapon system.  The U.S. DoD Directive does not ban autonomous weapon systems, nor does it reflect a view that autonomous weapon systems are prohibited by IHL.

In addition, I would like to make another clarification.  Under U.S. military definitions, an autonomous weapon is one “that, once activated, can select and engage targets without further intervention by a human operator.”   However, the operation of the weapon system, after activation, is only the “tip of the iceberg” in terms of human-machine interaction.

For example, as our French colleagues highlighted before lunch, human involvement occurs from the early developmental stages of a weapon system through its employment.  In this regard, I would note the U.S. military practice that I mentioned previously, including requirements at different stages of the weapon design, development, and deployment process intended to ensure that the use of autonomy in weapon systems effectuates human intentions, such as engineering weapon systems to perform reliably, training personnel to understand the systems, and establishing clear human-machine interfaces.

Furthermore, as our Dutch colleagues addressed yesterday, human involvement occurs throughout the targeting cycle, which UNIDIR has also helpfully discussed in its exercise earlier this year.  This targeting cycle is an existing decision-making framework into which the operation of LAWS would fit alongside other weapons.  This targeting cycle includes consideration of the operational context, including commander's guidance, military objectives, targets, environment, and weapons capabilities.  Human-machine interaction does not simply relate to manual operation or direct manipulation of a weapon system.  Rather, human-machine interaction is a much broader concept that includes the targeting cycle and the life-cycle of the weapon.  This is a point that my colleagues from the UK, Switzerland, and the Netherlands just highlighted very usefully.

Lastly, I think your question highlights the problems with trying to advance our understanding by adopting novel and ambiguous standards, like “meaningful human control”.  Introducing vague concepts can confuse more than clarify.  In our view, we should proceed by seeking to clarify the application of existing IHL requirements, while also seeking to identify good practices that support the implementation of IHL.