CCW: U.S. Statement on Characterization of the Systems under Consideration

U.S. Statement on Characterization of the systems under consideration in order to promote a common understanding on concepts and characteristics relevant to the objectives and purposes of the CCW

Delivered by Matt McCormack
Geneva, April 10, 2018

-As Delivered-

Second Intervention

Thank you, Chair, for giving our delegation the floor once again.

Earlier today, we explained why the United States continues to believe that it is unnecessary for the GGE to adopt a specific working definition of LAWS, but we also emphasized that we support identifying general characteristics of systems that should be under the GGE’s consideration.

During this intervention, we would like to provide some specific recommendations related to identifying such general characteristics of systems that could promote our understanding of the relevant concepts or issues in these GGE discussions.

In this regard, we would offer for consideration the approach adopted by the U.S. Department of Defense in its internal policy related to incorporating autonomy in weapon systems. Although the GGE’s purpose in characterizing systems may be different than the purposes for which the U.S. Department of Defense definitions were created, we believe the approach we adopted can help the GGE identify relevant characteristics for systems that should be under the GGE’s consideration.

The U.S. Department of Defense policy uses definitions that generally focus on what we believe to be the most important issues posed by the use of autonomy – the fact that people who employ these weapons would, after their activation, be relying upon the weapons’ capabilities either to “engage individual targets or specific target groups selected by the human operator” or to “select and engage targets without further intervention by a human operator.” The important characteristic of these systems is that the people who employ them rely on the weapon’s capabilities either to engage targets selected by the operator or both to select and engage targets.

We will not repeat the specific definitions today, but, for reference, those definitions are reproduced in the U.S. working paper from November 2017. We’d also note that some of those definitions have also been reproduced in the summary chart produced by the Chair.

Upon reviewing these definitions, you will see that the definitions in U.S. Department of Defense policy do not rest upon articulating specific levels of autonomy or types of machine reasoning. We would recommend the same approach for the GGE.

Our sense is that characterizing systems through technological categories or seeking to define concepts like “artificial intelligence” would be especially ill-advised because there are already diverse taxonomies along these lines and because scientists and engineers continue to advance these technologies.

Such characteristics could soon be rendered obsolete by technological developments.

Similarly, seeking to characterize systems based on the sophistication of the machine intelligence would incorrectly focus on the machine, rather than on what is important for the law – how human beings are using the weapon and what they expect it to do.

U.S. Department of Defense policy does not use definitions to decide in the abstract what may be good or bad uses of autonomy. The GGE should also not create characterizations based upon abstract conceptions of autonomy.  The responsible use of autonomy in weapon systems necessarily depends upon its specific, intended application, rather than its application in the abstract.

In this regard, the U.S. Department of Defense policy requires that certain types of new weapon systems receive an additional case-by-case, senior-level review and approval before formal development and again before fielding.

A weapon system can only be properly assessed in light of the operational context in which it is intended to be used. Much like with weapons that do not use autonomy, controls such as temporal, spatial, and warhead/effector constraints could enable legal and ethical use of weapon systems that use autonomy.  Assessing weapons in the abstract ignores most of the relevant considerations about their legal or ethical use.

We also found that autonomy in weapon systems can improve the implementation of IHL and reduce civilian casualties, which illustrates a fundamental feature of IHL – IHL often reflects the convergence of military and humanitarian interests. Case-by-case review, rather than considering weapons in the abstract, allows for such issues to be assessed most accurately.

We have also not based our definitions upon trying to categorize weapons as either “offensive” or “defensive.” The GGE’s characterizations should also avoid such distinctions.

Military operations are sometimes characterized as “offensive” or “defensive” based on the nature of the operation in question, rather than the capabilities employed in support of those operations. Weapons, however, are not inherently “offensive” or “defensive.”

For example, weapons that are commonly thought of as “defensive” weapons are capable of being used to protect forces invading another country. And, conversely, weapons that are commonly thought of as “offensive” weapons are capable of being used by the forces defending their territory.

Most importantly for proper characterizations, whether people who employ these weapons can rely upon these weapon systems to select and engage lawful targets does not depend on characterizations such as “offensive” or “defensive” – even if it were possible to agree upon which weapons would be considered as such.

Similarly, we have also not based our definitions on trying to categorize weapons by their design purpose – for example, as either “anti-personnel” or “anti-material.” We also don’t view “lethality” as a particularly helpful qualifier in this regard.

Frequently, weapons that are designed to target objects, such as enemy aircraft or tanks, are implicitly designed to target the military personnel who operate them.

Also, many of the improvements that can be made in the implementation of IHL and in reducing civilian casualties could relate to anti-personnel applications.

These issues aside, we do not think that such characterizations are helpful for understanding concepts and issues related to autonomy. Whether people who employ weapons that incorporate autonomy can rely upon these weapon systems to select and engage lawful targets does not depend upon characterizations such as “anti-personnel,” “lethality” or “lethal,” or “anti-material.”

Thank you.