Meeting of the Group of Governmental Experts on Lethal Autonomous Weapons Systems.
U.S. Delegation Statement on Human-Machine Interaction
Delivered by Charlie Trumbull
Attorney Advisor, U.S. Department of State
Geneva, August 28, 2018
Thank you, Mr. Chair.
We appreciate your summary of the discussion on characteristics, which gave us much to think about. We would now like to add our own key take-aways and reflections on this agenda item.
First, we think there is general agreement that advances in technology present both opportunities and challenges, and therefore this technology should not be stigmatized.
Second, delegations have presented a variety of characterizations of LAWS, which demonstrated how difficult it is to define specific characteristics with any precision.
Third, there seems to be agreement that it is not necessary to arrive at a specific definition of LAWS in order to continue discussions on how IHL applies to emerging technologies. Although reaching consensus on characterization may be important for work in other areas, it is not a prerequisite for further consideration of IHL implementation, such as weapon reviews.
Fourth, as the Chair mentioned, we have observed a shift in focus from the technical aspects of weapon systems to the type of decisions that the weapon system would be engineered to make. In other words, what functions are autonomous features performing in the weapon system? We agree that this focus on the type of decisions that machines would be making, or the purpose of the autonomous functions in the weapon system, is important, but we want to flag two points in this regard.
1. We disagree that there is an inverse correlation between the degree of autonomy and the degree of human control. As we have noted in our working paper and our interventions, autonomy can increase the control over the effects of munitions after deployment and better effectuate the commander’s intent. Therefore, we recommend that the accountability approach to characteristics that the Chair has proposed not be framed in terms of whether the decision is “handed over to machines,” or “related to the loss of human control.” Instead, that approach to characteristics should be framed in terms of whether the machines have been engineered with the capability to make certain decisions and the context in which a commander or operator decides to employ them.
2. Similarly, we would note that the use of autonomy to perform certain functions does not necessarily pose accountability concerns, as humans retain responsibility for the employment of a system capable of exerting force. We support further discussion of accountability considerations, but note that accountability issues are not necessarily tied to the different types of functions that autonomy performs in a weapon system.
Mr. Chair, we also want to take this opportunity to address a question you posed yesterday, specifically whether and how a weapon that “selects and engages” targets differs from weapons that “self-initiate attacks.” For the U.S. delegation, “select and engage” refers to a weapon that has autonomous functions that allow it to choose among programmed targets or target sets and direct its force against them. The system might utilize programmed target signatures or characteristics (such as radar signature, acoustic signature, infrared signature, etc.) to identify, home in on, and strike a specific target within a target area or target group that has been selected by a human operator or commander.
For example, the High-Speed Anti-Radiation Missile (HARM) is launched into a target area to suppress known or suspected hostile radars. The key point is that a human operator or commander launches the HARM into a specified geographic area to achieve a specific military effect. In this example, the weapon would have functions that enable it to “select and engage” targets, but it would not be accurate to describe the weapon as “self-initiating attacks.” In the U.S. DoD Directive, the HARM weapon system would be classified as “semi-autonomous” because of how it is used, even though the weapon has the functionality of “selecting and engaging” targets.
Lastly, we would suggest that a key consideration in evaluating autonomous functions in weapon systems is whether those functions further the commander’s intent or undermine it. Given that law-abiding commanders will seek to target only military objectives and to minimize or avoid collateral damage, autonomous capabilities that promote the commander’s intent should enhance compliance with IHL.