Convention on Certain Conventional Weapons (CCW)
Group of Governmental Experts on Lethal Autonomous Weapon Systems (LAWS)
Geneva, April 13, 2018
As delivered by Ian R. McKay
Thank you, Mr. Chairman.
We have heard references to the possibility of legally binding instruments or political declarations as possible policy options. We again emphasize the need to develop a shared understanding of the risks and benefits of this technology before deciding on a specific policy response. We remain convinced that it is premature to embark on negotiating any particular legal or political instrument in 2019.
We welcome suggestions for realistic options to pursue, and found the working paper from Argentina to be particularly constructive in this regard. Although we have heard the message from some States that weapons reviews are not sufficient, we think further consideration of weapons reviews by this group, without prejudice to other elements of the group’s work, would be worthwhile. The United States remains interested in sharing practices on weapons reviews, with a view to helping more States understand how such reviews improve the implementation of existing legal obligations, and to giving States that already conduct such reviews an opportunity to learn about the good practices of others. Although the United States views the review of the legality of weapons as a best practice for implementing customary and treaty law relating to all weapons and their use in armed conflict, weapons that use autonomy in target selection and engagement seem unique in the degree to which they allow targeting issues to be considered during the weapon’s development. For example, if it is possible to program how a weapon will function in a potential combat situation, it may be appropriate to consider the IHL implications of that programming. In particular, it may be appropriate for weapons designers and engineers to consider measures to reduce the likelihood that use of the weapon will cause civilian casualties.
With regard to improving the implementation of IHL, the United States also believes that this GGE’s focus should be on better understanding how autonomy in weapon systems can be used in compliance with IHL, or even to improve the implementation of IHL. Advances in technology can help promote greater compliance with IHL and reduce harm to civilians and civilian objects. After all, societies often use machines to help humans perform certain functions better than a human can. Warfare is no different.
For example, as delegations saw during the briefing on C-RAM, C-RAM was designed to fulfill an important military need, which provided an opportunity to improve the implementation of IHL. Using C-RAM allows U.S. forces to rely less on other, less than optimal methods of trying to defend U.S. and friendly forces and civilians from insurgent rocket and mortar attacks. With C-RAM, military personnel were better able to implement the principle of distinction and to greatly reduce risks to the civilian population. IHL is such a robust and coherent legal framework because it often reflects this convergence of military and humanitarian interests.
IHL should continue to be central to our collective work and the aims of this GGE. For example, during yesterday’s panel discussion, the Russian scientist posed two important questions regarding the implementation of IHL and the human dignity of innocent civilians who are subjected to the risks of war. I am not able to quote the Russian scientist’s questions exactly, but the questions he raised are ones the GGE should reflect upon more closely, especially with regard to the ways in which control and judgment really matter in the use of force with weapons that use autonomy. Those questions were: “From an IHL perspective, is it better that you have a human err and innocent civilians die? Or is it better to have an autonomous system perform the same targeting function more precisely and with less chance of error?”
These are important questions from a humanitarian perspective. We recognize that issues about the implementation of IHL and appropriate levels of human judgment over the use of force are certainly more complex than just these two questions, but they are also more complex than what is reflected in a phrase like “meaningful human control.” As a general matter, weapons that use autonomy vary greatly depending upon their intended use and context. There is no “one-size-fits-all” standard for the correct level of human control or judgment to be exercised over the use of force, or at particular so-called “touch points.” What may be “appropriate” for some weapons or situations may not be “appropriate” for others. Although appropriate levels of human judgment should be retained over the use of force, issues about control are complex. Different types of control during different parts of a weapon’s life cycle will matter for its appropriate use. And in some circumstances, such as to ensure that the weapon works correctly for the circumstances in which it was designed to operate, it will be more appropriate to have less physical control over the actual weapon: to avoid human misuse or hacking, to reduce risks of errors, or simply because of military necessity. Weapons with autonomy should be designed to minimize the probability and consequences of failures that could lead to unintended engagements.