Assistant Secretary Stewart Remarks to the CD: New Types of WMD and New Systems of Such Weapons
May 25, 2023

Remarks to the Conference on Disarmament
Thematic Discussion Agenda Item 5: New Types of WMD and New Systems of Such Weapons

U.S. Assistant Secretary of State Mallory Stewart

As Delivered

Madame President, thank you for allowing me to take the floor.  On the occasion of the last formal plenary of your presidency, let me extend, on behalf of the United States, our deepest appreciation for your thoughtful and effective leadership during this challenging time.

Madame President, distinguished delegates, as the U.S. Assistant Secretary of State for the Bureau of Arms Control, Verification and Compliance, let me begin by expressing my appreciation for the opportunity to speak to you today under the theme of new types of WMD and new systems of such weapons – a topic that covers a wide range of potential issues and challenges.  I intend to focus my remarks today on U.S. efforts to promote the responsible military use of Artificial Intelligence and autonomy.  We firmly believe that this should not be thought of as just a topic for those States considering adoption of AI for military use, but that all States need to understand the potential positive uses of AI as well as the risks associated with an unprincipled approach.

As Ambassador Turner has laid out in recent sessions, there is an imperative to draw from the full spectrum of risk reduction measures as we work to advance arms control across the CD’s agenda.  Within that context of responsible activities, the transformative impact of emerging and disruptive technologies is particularly relevant.  It is in this regard that I would like to speak today to the complexities and opportunities that artificial intelligence presents for arms control, disarmament, and strategic stability, and to how we can start to address those complexities.

This is not just an academic question — the rapid development, adoption, and application of AI throughout society clearly demonstrate otherwise.  We must understand the challenges AI brings to global security, proactively take advantage of the opportunities its technologies can provide, and address the risks associated with irresponsible use to ensure we are harnessing the benefits of AI safely and effectively.

As a general-purpose enabling technology, AI has transformative potential for societies, economies, and global security, particularly where analyzing large quantities of data is required.

But we must acknowledge that this presents new challenges for arms control and international stability.  Like space and cyber, AI is a capability that increases the complexity of the security environment.

In the military domain, the potential benefits of AI are significant.  AI-enhanced data analysis could optimize logistics processes, improve decision support, and provide commanders with enhanced situational awareness that enables them to avoid unintended engagements and minimize civilian casualties.  AI capabilities could increase accuracy and precision in the use of force, which can also help strengthen implementation of international humanitarian law’s protections for civilians and civilian objects.  AI could advance arms control by helping us solve complex verification challenges and increasing confidence in states’ adherence to their commitments.

Yet unlocking these benefits requires a careful and responsible approach to developing safe and reliable AI capabilities.  If you have ever tried experimenting with programs like ChatGPT, you may have experienced how powerful and yet how brittle these technologies can be.  AI models are very convincing, but they sometimes produce information that is inaccurate, lacks context, or is completely made up, and it is up to the informed user to identify those errors that are presented as facts.

The crux of the issue is not whether states develop or use AI-enabled capabilities, but how they do so and how they can do so responsibly.

Again, responsible militaries can and will use AI capabilities to improve decision-making and situational awareness, helping them avoid unintended engagements.  AI capabilities will increase accuracy and precision in the use of force, which can help protect human life and civilian objects.  Military uses of AI can make the world safer.

At the same time, military operators and decision makers need to understand the strengths, contexts, and limitations of these systems, particularly how and why they might fail.

Due to the fragility of many existing AI systems, States that rush to harness AI without a careful and well-informed approach could deploy systems with unpredictable consequences – whether this is because the systems are poorly designed, inadequately tested, or users do not possess an adequate understanding of the contexts and limitations of those systems.

But collectively, we have an opportunity to shape the way militaries develop and use AI.  We have a unique opportunity to get ahead of the game and use our discussion as the first step towards creating strong norms of responsible behavior surrounding military uses of AI.

Given these complexities, what is responsible use of this technology?  How do we begin to establish rules of the road to help guide development while minimizing instability?

We believe that it makes sense to start with transparency and responsible behaviors, because the rapid pace of technological development requires a flexible approach.

The United States Department of Defense has made its policies on responsible use of AI publicly available through the publication of numerous documents, such as the Responsible Artificial Intelligence Strategy and Implementation Pathway as well as DoD Directive 3000.09, “Autonomy in Weapon Systems,” which was recently updated.  This transparency reflects our commitment to responsible behavior in this area, and we want to encourage other States to adopt transparent and informed approaches as well.

To begin developing an international consensus on what a normative framework could and should look like, the United States has articulated an initial set of strong and transparent principles for military development and use of AI in the “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy” and we are now engaging partners around the world to hear their perspectives on best practices.

The Political Declaration is the beginning of this process and an opportunity to develop a shared understanding of a complex issue through dialogue.

The declaration includes commitments such as to:

  • Adopt, publish, and implement principles for the responsible design, development, deployment, and use of AI capabilities by their military organizations;
  • Minimize unintended bias in military AI capabilities;
  • And ensure compliance with international law, in particular, international humanitarian law.

These commitments will promote respect for international law, security, and stability, and as we work to expand states’ acceptance and understanding of them, we hope these commitments will help us reduce the potential risks of accident and unintended escalation.

One such example of where these commitments can help is in strategic decision-making in the nuclear domain.

We all, of course, must work to ensure that nuclear crises remain exceptionally rare; we recognize, therefore, that data related to such events is sparse.  In such a crisis, AI-enabled systems may underperform when faced with extraordinary circumstances outside the bounds of their training data, or there may be unexpected interactions among AI-enabled systems operating in a system of systems.

The human factor is also important.  A lack of trust in AI, or perhaps even excess trust, could lead to avoidable tactical mistakes.  Alternatively, the speed at which AI processes information could create decision time compression, causing leaders to make hasty decisions with incomplete information due to an increase in the tempo of a conflict.  In any case, miscalculation and unintended escalation could result from a lack of care when applying AI to decision-making.

For these reasons, our Political Declaration reiterates our joint commitment with France and the United Kingdom to “maintain human control and involvement for all actions critical to informing and executing sovereign decisions concerning nuclear weapons employment.”  This commitment was also included in the U.S. 2022 Nuclear Posture Review.  Again, our Political Declaration is a starting point to help manage the risks and harness the benefits of AI in a time of strategic competition, and these issues are relevant to everyone.  That is why we are seeking broad, multiregional input and support for the Political Declaration, to demonstrate that it truly reflects international perceptions of what responsible behaviors are.

As we have said, arms control is most critical when conditions are ripe for miscalculation, escalation, and spiraling arms races, and normative frameworks are a part of the overall arms control toolkit.  The United States is committed to promoting stability predicated on transparency, predictability and broadly accepted responsible behaviors.

Emerging and disruptive technologies bring both opportunities and new complexities to the international security environment, but as our security challenges grow, we must work together to adopt new approaches and methods, including thinking creatively about risk reduction and arms control measures to strengthen global security.

Let me conclude, Madame President, by thanking you and this extraordinary audience for your attention this morning.  The United States looks forward to further discussions with you on this topic.  Thank you.