The United States and China are racing to develop and integrate artificial intelligence (AI) into their economies, societies, and militaries. Will the technology cause the two countries’ already fragile relationship to go awry?
There is deep concern across the international community. In 2023, Tshilidzi Marwala, undersecretary-general of the United Nations, sounded an alarm, warning that militarized AI uses “raise concerns regarding the escalation of conflicts [and] the possibility of autonomous weapons being compromised or misused.” And in 2024, a group of scientists from Harvard and MIT argued that “we are concerned the recent embrace of AWS [autonomous weapons systems] by global militaries is leading to a future where wars are more frequent, with such warfare having negative consequences for global stability even if AWS reduce civilian casualties relative to human soldiers.”

The United States and China share enough concern about the dangers of AI that in 2024 Presidents Biden and Xi Jinping, in an otherwise rare moment of alignment, announced they would ensure human control over any decision to launch nuclear weapons.
Much is still uncertain about how the technology will advance and manifest itself in the two countries’ strategic competition. Here’s what AI pathways to escalation look like, and what the United States and China might do to make the technology less dangerous.
How to think about escalation
International-security scholars broadly identify three pathways to escalation.
In the first, deliberate escalation, states choose to initiate conflict because they believe they hold a first-strike advantage and can achieve their objectives by attacking pre-emptively. Technology can create incentives for deliberate escalation by conferring an asymmetric advantage or by imbuing confidence in first-strike campaigns.
The pathway to deliberate escalation can be especially alluring for states fearful they wouldn’t survive an adversary’s attack. This was, for example, the dominant US nuclear strategy for much of the Cold War, when estimates for conventional warfare in Europe suggested that without US use of nuclear weapons, the Soviets would quickly overrun NATO forces.
For AI, pathways to deliberate escalation are fundamentally a problem of overconfidence and (perhaps misplaced) certainty. When AI gives decision makers confidence that they can launch a pre-emptive war, perhaps because they believe AI has successfully identified an adversary's nuclear arsenal or vital military centers of gravity, the technology might induce a state to deliberately escalate to conflict.
If a state believes that AI could enable an overwhelming blow to its opponent, then deterrence is more likely to fail.

The second pathway to escalation is inadvertent. If deliberate escalation is about how technology creates certainty, inadvertent escalation is about how technology creates uncertainty. Inadvertent escalation occurs when there are misperceptions about intentions, often created or enhanced by the technology.
For example, when a new technology cannot be clearly categorized as offensive or defensive, or when it entangles civilian and military or conventional and nuclear missions, uncertainty arises about what a state intends to do with it.
That uncertainty leads to fear and can create incentives for arms races and instability, even between countries that would otherwise prefer not to go to war.
Here, the implication for AI is that uncertainty surrounding how the technology will be used could induce states to develop more dangerous AI applications. For example, concern that the US quest for generative AI will speed up crisis decision-making could induce China to adopt similar AI technologies. Diffusion of AI decision-making aids within air defense or early warning systems could seem like a natural tit-for-tat response to innovations in automated strike capabilities, such as drone swarms or autonomous missile volleys. But the cycle of arms development could inadvertently spiral into the creation of increasingly dangerous arsenals of AI-enabled weapon systems designed to launch on a data hair trigger.
And this brings us to the final escalation pathway, accidental conflict. Unlike the two other pathways to conflict, which involve intentional (if not always instrumental) violence, accidental escalation occurs when states have not deliberately chosen to initiate conflict. Instead, perhaps because of a technological failure, violence occurs unintentionally.
Pathways to accidental use can almost always be traced to organizational decisions that maximize risk, perhaps because of organizational cultures but also because of increased tensions between nations. Technologies are more likely to lead to accidental escalation when they aren’t tested appropriately, when they are designed with few safety or backup options, or when operators aren’t trained to use the technologies safely.
Pathways between AI and accidental escalation may be the clearest to see. Poorly trained models, biased algorithms, model hallucinations, and rogue models could all lead to violence if AI-enabled weapon systems are not developed, vetted, and adopted appropriately. Further, even appropriately designed AI models may be more likely to err in conflict when their components are under attack and vulnerable to both degradation and manipulation.
A warning from wargames
Through which of these pathways is artificial intelligence most likely to lead to escalation? And what can the United States and China do before crises to decrease the chance that AI leads to war?
First, evidence from wargames suggests that LLMs (large language models) may have a bias toward escalation. Compared with human players in wargames, LLMs' decision making can be superficial and cartoonish: a caricature of human decision making rather than a useful substitute for humans in the loop.
Further, wargames reveal how overconfidence in the technology, paired with concerns about digital vulnerabilities, might induce states to adopt AI technologies prematurely within even the most vital nuclear decisions.
Together, these findings suggest AI could increase incentives for all three forms of escalation: accidental, inadvertent, and deliberate.
What America and China can do
What can the United States and China do to decrease the dangers of AI and escalation?
The first step for addressing overconfidence in AI and the danger of deliberate escalation is to educate users on its limitations—hallucinations, false positives, bias, manipulations. Militaries need to work hand in hand with scientists not just to harness the promise of AI but also to build requirements, testing, and training that decrease the risk of AI implementation. Militaries also should consider building AI specialties within typical ground, naval, air, and space units so that soldiers, sailors, airmen, and guardians can interrogate the technology and troubleshoot its weaknesses in the same ways they do with rifles, airplanes, tanks, or ships.
Decision makers also need training on AI—enough to dispel some of the pixie-dust hype of the new technology and prepare them to carefully use AI-enabled tools as they issue orders, design campaigns, or craft foreign policies.
Solving inadvertent escalation is a matter of decreasing dangerous uncertainty about AI. Scholarship on crisis stability suggests that states should focus on AI for defensive missions like early warning or treaty monitoring—something akin to the Cold War’s “open skies,” which built transparency and trust between the United States and the Soviet Union at the height of nuclear distrust. However, these kinds of confidence-building measures are difficult to implement given the level of distrust between the United States and China.

There are almost no official military talks or engagements between the US military and the People's Liberation Army (PLA); US leaders struggle to directly contact their Chinese counterparts during crises; and Chinese leaders seem to view the US-USSR arms control arrangements as a symbol of the Soviet Union's demise, not as a constructive analogy for stability between the two countries.
Is there a solution for preventing inadvertent or accidental escalation despite the deep distrust between the United States and China? It is precisely when trust is at its lowest that states must make unilateral safety decisions.
The United States’ public announcement of AI safety and testing standards for the military is a good-faith first step that demonstrates how militaries can design and adopt AI technologies that prioritize safety and stability. The PLA, which has not been transparent about its AI safety efforts, would also benefit from similar standards—not only to decrease the risk of inadvertent escalation with the United States but also to safeguard its own force against AI accidents that might lead to fratricide or civilian collateral damage.
Finally, the 2024 Biden-Xi "human control" nuclear statement was an important first step in identifying shared AI risks for both countries. However, no further work has been done to refine the statement or to extend the understanding to other nuclear states. Given the depth of distrust between the United States and China, that follow-on work should be a top priority.
Jacquelyn Schneider is the Hargrove Hoover Fellow at the Hoover Institution, the director of the Hoover Wargaming and Crisis Simulation Initiative, and an affiliate with Stanford University’s Center for International Security and Cooperation. She participates in Hoover’s Global Policy and Strategy Initiative and its Semiconductors and the Security of the United States and Taiwan research project.
