Remarks to the Conference on Disarmament on Artificial Intelligence in the Military Domain
As Delivered by U.S. Deputy Permanent Representative Aud-Frances McKernan
August 3, 2023

Thank you, Mr. President.  Let me begin by expressing my appreciation to Germany for devoting time in our agenda to this discussion of responsible use of artificial intelligence in the military domain.

I would also like to express my thanks to UNIDIR and UNODA for their excellent contributions to this discussion as well.

As the past few months have made clear, AI is already here.  It is quickly spreading to different aspects of our daily lives.  And it is beginning to change how militaries operate.

While the full implications of this transformative technology remain uncertain, what is clear is that the impacts will not be confined to particular states or regions.  This general-purpose technology will spread widely.  This is an issue that will affect us all.

I want to focus today on how the United States proposes to work constructively with other responsible states to build consensus around strong and transparent standards and principles for the development and use of AI in the military domain.

Without a careful, responsible approach, some States could rush to deploy systems with unpredictable consequences. At the same time, we should not lose sight of the significant benefits and promise of military AI, including to international security and stability.  AI capabilities could increase accuracy and precision in the use of force, which can help strengthen implementation of protections for civilians as well as civilian objects afforded by international humanitarian law.  AI-enabled decision support could provide commanders with enhanced situational awareness, improving their ability to avoid unintended engagements.

We therefore need to strike a balance that preserves the legitimate and beneficial use of military AI while setting boundaries and working together to articulate a common framework for responsible development and use.

Let me acknowledge the invaluable discussions within the context of the Convention on Certain Conventional Weapons (CCW) Group of Governmental Experts (GGE) on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems (LAWS).

We were pleased to see progress during the May GGE session, resulting in a report that provides a sound basis for future work by the GGE.  We continue to believe that the CCW offers a unique and invaluable forum to discuss these issues, among other reasons because of the military, technical, legal, and policy expertise present within delegations and the focus on international humanitarian law.

We will continue to engage substantively in the work of the GGE, including working with delegations to the November meeting of High Contracting Parties to the CCW to adopt the report and find agreement on a mandate for the LAWS GGE to continue to work in 2024.

Yet the military applications of AI go far beyond autonomous weapons.  AI will impact decision-making, logistics, planning, and communications, among other areas.  Accordingly, we must begin to define in far more granular detail what responsible and irresponsible state behavior looks like in the development and use of this technology.  What are the necessary precautions and principles states should follow in military use of AI?

These twin imperatives motivated the United States to propose the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy in February.

The Declaration lays out foundational principles that we believe militaries should follow in the development and use of AI and autonomy.  These include subjecting systems to rigorous testing and assurance, taking steps to avoid unintended consequences, minimizing unintended biases, and, of course, ensuring AI is used in accordance with States’ obligations under international law.

Again, the Declaration is complementary to, but independent of, the work in the LAWS GGE.  The Declaration focuses on military uses of AI and autonomy broadly, including, but not limited to, weapon systems.

Our aim is not merely to reach consensus on this set of foundational principles, but to work with states to ensure that they are put into practice.  Through expert level exchanges on issues like how to conduct legal reviews or implement best practices to minimize unintended biases, we can help build capacities for responsible AI.

Of course, we do not presume to know all the answers.  Rather we seek to collaborate with states to solve shared challenges in the responsible development and use of AI.  The Political Declaration will serve as a launch pad for sustained dialogue among endorsing states.  And we want this dialogue to be inclusive because we recognize that all states, including those who may not be actively developing military AI capabilities at this time, have a stake in the norms and practices guiding how militaries will use AI in a responsible and stabilizing way.

We also understand that some states think that the challenges presented by AI or autonomous weapons systems can only be addressed through a legally binding instrument.  I want to emphasize that the Declaration does not conflict with or prevent states from pursuing what they may see as other appropriate measures.  In fact, it states explicitly that states will “support other appropriate efforts to ensure that military AI capabilities are used responsibly and lawfully.”

We look forward to continuing these engagements with states and making progress on norms of responsible behavior.  We have an opportunity now to set rules of the road for military use of AI while these capabilities and the practices surrounding their use are still maturing.

Let me conclude by noting that responsible military AI is an issue too broad and multi-faceted to remain confined to any single venue.  We appreciate the work of numerous states to elevate these issues and host international conferences, including the Netherlands, the Republic of Korea, Luxembourg, and Costa Rica.

We should take this opportunity for a dialogue on broader implications of military use of AI that are not already addressed in other fora, including the potential impacts on international stability.

Thank you, Mr. President, for facilitating this discussion.