Convention on Certain Conventional Weapons Group of Governmental Experts on emerging technologies in the area of LAWS
Consideration of the human element in the use of lethal force
Geneva, March 26, 2019
Karl Chang (as prepared)
Thank you, Mr. Chair.
As we explained in our intervention on this issue last year and in our 2018 working paper on human-machine interaction, in our view, the key issue for human-machine interaction in emerging technologies in the area of LAWS is ensuring that machines help effectuate the intention of commanders and the operators of weapon systems. In particular, practical measures should be taken to reduce the risk of unintended engagements and to ensure that personnel exercise appropriate levels of human judgment over the use of force. As we have noted previously, the term “human control” is subject to divergent interpretations and, in our view, obscures rather than clarifies the genuine challenges in this area.
Many delegations assume that autonomy means less control, or that the development of an autonomous function in a weapon system entails a delegation of decision-making to a machine.
As we explained in our 2018 working paper, we do not think these assumptions are accurate. As a factual matter, the use of autonomy in weapon systems can improve the precision and accuracy of the use of force. Moreover, whether a decision is “delegated” in some sense to a machine has more to do with how the weapon is used than with whether the weapon system itself has an autonomous function.
In our working paper for this year, we discuss how different uses of autonomous functions in weapon systems can change the analysis of applicable IHL requirements.
What type and degree of human involvement is “appropriate” at a given time can differ across weapon systems, domains of warfare, types of warfare, operational contexts, and even across different functions within a single weapon system. Some functions might be better performed by a computer than by a human being, while other functions should be performed by humans.
In some cases, less human involvement at the moment force is deployed might be more appropriate. For example, certain defensive weapon systems, such as the Phalanx Close-In Weapon System, the AEGIS Weapon System, and the Patriot Air and Missile Defense System, have autonomous functions that assist in targeting incoming missiles and other projectiles. The machine can strike incoming projectiles with much greater speed and accuracy than a human gunner could achieve manually. As weapons engineers improve the effectiveness of autonomous functions, more situations will likely arise in which the use of autonomous functions is more appropriate than manual control.
Regarding the Chair’s first sub-question, on human supervision: we are skeptical that it is possible to describe in the abstract the form and degree of human supervision that may be deemed sufficient for compliance with IHL.
There is always a point during the operation of any weapon system after which it is no longer possible to engage in “supervision,” such as by intervening to abort the weapon system’s operation. This is true of weapon systems both with and without autonomous functions.
Monitoring the operation of a weapon system, including the ability to intervene and abort during its operation, can be an important precaution that contributes to the protection of the civilian population. For example, U.S. forces in recent operations have sometimes been able to use a technique, described as “shift cold,” to divert missiles when they observe an unexpected presence of civilians.
Of course, many weapon systems do not have this capability. Indeed, so-called “dumb” weapons without autonomous functions are less likely to have the capability to be “supervised” during their operation. Moreover, whether it is feasible to include such a mechanism on a weapon system, and to use it in particular circumstances, will depend on many factors, including military and humanitarian considerations.
Regarding the Chair’s second sub-question, concerning predictability and reliability: under DoD policy, autonomous and semi-autonomous weapon systems go through “rigorous hardware and software verification and validation (V&V) and realistic system developmental and operational test and evaluation (T&E).” Although rigorous testing and sound development of weapons are not required by the law of war as such, these good practices can support the implementation of law of war requirements. Rigorous and realistic testing standards and procedures help ensure that commanders and national security policymakers have a reasonable expectation of the likely effects of employing the weapon in different operational contexts. In addition, such practices can help reduce the risk of unintended combat engagements, such as weapon malfunctions that could inadvertently cause harm to civilians.
Those are the good practices that DoD employs to help implement IHL requirements. As far as IHL requirements are concerned, we would make two comments. First, under IHL, the predictability and reliability of autonomous functions in weapon systems are important insofar as they relate to the risk of civilian casualties. For example, if the weapon system is unpredictable to the adversary but poses no risk to civilians, then that is not a problematic system from the perspective of IHL.
Second, the degree of predictability and reliability of an autonomous function in a weapon system that is required by IHL in order to reduce the risk to civilians must be considered in light of all the circumstances, including the operating environment as well as the other precautions that can be taken to reduce the risk to civilians and civilian objects. Ultimately, what is important is that States, commanders, and combatants exercise due regard to reduce the risk of incidental harm to the civilian population and to other persons and objects that may not be made the object of attack. This standard of due regard must be assessed against the general practice of States and the common standards of the military profession in conducting operations.
Regarding the Chair’s third sub-question, on the various factors relevant to IHL compliance:
One of the main purposes of IHL, including one of the objects and purposes of the CCW, is the protection of civilians against the harmful effects of hostilities. Factors such as the environment of deployment, specific constraints on the time of operation, or the scope of employment over an area can be relevant in assessing the likelihood of civilian casualties from an intended use of a weapon system with autonomous functions. For example, if the environment of deployment is largely devoid of civilians, such as a remote area, the likelihood of civilian casualties from the deployment of the weapon will be correspondingly low. Constraints on the time of operation can likewise assist in the analysis of the risk of civilian casualties. For example, if civilians are temporarily evacuated from an area, a weapon system could be deployed to that area for the duration of the evacuation, with a corresponding confidence that the risk of civilian casualties would be lessened to the degree the evacuation had been effective. Similarly, self-deactivation, self-destruct, or self-neutralization mechanisms can help ensure that civilians are not harmed after the operation of the weapon system. Lastly, the scope of movement over an area is also relevant in allowing commanders to better assess the potential risk of civilian casualties. In addition, the area in which a system would have effects can be of particular relevance when a military commander is targeting an area of land that constitutes a military objective.
Regarding the Chair’s fourth sub-question, on IHL-compliant human-machine interaction: in our view, IHL does not address human-machine interaction as such. Rather, good practices for human-machine interaction support IHL compliance by reducing the risk of civilian casualties and other unintended engagements. Weapons with autonomous functions can be used in a manner consistent with IHL.
The U.S. Department of Defense has outlined what it believes to be good practices in human-machine interaction in its internal policy, Directive 3000.09, which we have described in detail in our previous working papers. For example, this policy requires the interface between people and machines for autonomous and semi-autonomous weapon systems to: (1) be readily understandable to trained operators; (2) provide traceable feedback on system status; and (3) provide clear procedures for trained operators to activate and deactivate system functions. These interface requirements assist operators in making accurate judgments regarding the use of force.