U.S. Delegation Statement on Human-Machine Interaction
At the Meeting of the Group of Governmental Experts of the High Contracting Parties to the CCW on Lethal Autonomous Weapons Systems
Delivered by Karl Chang
Associate General Counsel, U.S. Department of Defense
Geneva, August 28, 2018
Thank you, Mr. Chairman. We would like to make three overarching points.
First, in the U.S. view, the key issue for human-machine interaction in emerging technologies in the area of LAWS is to ensure that machines help effectuate the intent of commanders and the operators of weapons systems. This is done by taking practical steps to reduce the risk of unintended engagements and to enable personnel to exercise appropriate levels of human judgment over the use of force.
This approach supports compliance with the law of war. Weapons that do what commanders and operators intend can effectuate their intentions to conduct operations in compliance with the law of war and to minimize harm to civilians and civilian objects.
Human-machine interaction is an issue that the U.S. Department of Defense has sought to study and address proactively, in a transparent way, through the issuance of DoD Directive 3000.09, titled Autonomy in Weapon Systems.
We recommend that States focus discussions on ensuring that weapon systems with autonomous functions operate as intended under appropriate levels of human judgment, reducing the risks of unintended engagements or accidents. This is a clear and practical objective that can have real-world benefits by, for example, facilitating the exchange of best practices in autonomy and weapon system safety.
Our second point is that we do not think that human or “manual” control is the correct issue to focus on. The concept of “human control” is subject to divergent interpretations that can hinder meaningful discussion.
Many delegations are assuming that autonomy means less control or that the development of an autonomous function in a weapon system entails a delegation of decision-making to a machine.
As we explain in our working paper, we do not think that these assumptions are accurate. As a factual matter, we think that the use of autonomy in weapon systems can improve the degree of control over the use of force. Moreover, whether a decision is “delegated” in some sense to a machine has more to do with how the weapon is used rather than whether the weapon system itself has an autonomous function.
A weapon with a function for selecting and engaging targets can be used without delegating any decision-making to the machine, in the sense of substituting the human’s decision with the machine’s decision.
For example, the artillery system described by our Swedish colleagues this morning is more accurately described as enabling additional decisions to be made, rather than substituting for human decisions. As our Swedish colleague noted, the commander could lawfully use regular artillery without autonomous capabilities, provided that such use was in accordance with IHL. However, the additional autonomous or “smart” capabilities make the weapon more precise and reduce the risk of incidental harm.
From the U.S. perspective, there is nothing intrinsically valuable about manually operating a weapon system as opposed to operating it with an autonomous function.
For example, existing international humanitarian law instruments, such as the CCW and its Protocols, do not seek to enhance “human control” as such. Rather, these instruments seek, inter alia, to ensure the use of weapons consistent with the fundamental principles of distinction and proportionality, and the obligation to take feasible precautions for the protection of the civilian population. Although control over weapon systems can be a useful means in implementing these principles, “control” is not, and should not be, an end in itself.
Our third point is that there is a convergence between the military interest in more effective and efficient weaponry and the humanitarian interest in reducing the harmful incidental effects of war.
Fundamentally, professional militaries have an interest in the efficient use of force.
All law-abiding militaries have strong interests in avoiding civilian casualties and unnecessary destruction. These are inefficient and counterproductive.
Thus, better military technology is likely to mean better compliance with IHL.
Technological innovation in weapon systems by responsible States is likely to result in weapons that are more precise and cause less incidental suffering.
We note that “smart” weaponry has often been used to reduce the risk of civilian casualties. New types of weapon systems, such as the Single Sortie Detect to Engage System, on which my colleague presented, can be used to protect civilians.
We also note that autonomy is a function that has been employed in weapons for decades. Thus, the technical capability to make autonomous weapons that would fire indiscriminately has existed for some time. However, such weapons have largely not been developed because such weapons would not be militarily useful.
Because advances in autonomous technologies could support both military and humanitarian interests, the United States believes that we need to be very careful in addressing issues of emerging technologies. We must neither stigmatize new technologies nor seek to set new international standards. Instead, States should ensure the responsible use of emerging technologies in military operations by implementing holistic, proactive review processes that are guided by the fundamental principles of the law of war.
For its part, the United States is committed to the responsible use of emerging technologies in the area of LAWS. The U.S. military has long been a leader in adhering to the law of war in military operations. DoD remains committed to the legal and ethical use of new technologies, including the use of autonomy in weapon systems. In its use of new technologies, the U.S. military will always be guided by the fundamental principles of the law of war, and will consider the possibility of humanitarian applications, such as reducing the risk of civilian casualties.