“It did not take atomic weapons to make man want peace. But the atomic bomb was the turn of the screw. The atomic bomb made the prospect of future war unendurable. It has led up those last few steps to the mountain pass; and beyond there is a different country.”
— Robert Oppenheimer, commencement address at the University of Pennsylvania, February 28, 1946.
In the annals of transformative technologies, the detonation of the first atomic bomb in New Mexico’s desert in July 1945 stands as a world-changing moment. Today, many query whether the emergence of artificial superintelligence (ASI), an artificial intelligence that surpasses humans in every cognitive field of endeavor, might herald a similarly profound inflection point in global affairs. Experts and government officials regularly draw analogies between the birth of nuclear weapons and the potential dawn of ASI, suggesting that the lessons of the atomic age—from the Manhattan Project to the brief period of US nuclear monopoly to the arcane intricacies of Cold War deterrence theory—might help illuminate the promises and perils of ASI. But how appropriate and useful is this analogy? More importantly, if one is to reach beyond this imperfect parallel, what might the growing use and integration of advanced AI mean for nuclear deterrence?
An Imperfect Analogy
If one starts from the premise that no single analogy is ever perfect, but that the process of analogical reasoning is itself a natural, and deeply human, way of thinking through labyrinthine problem sets, then, yes, there is partial utility in resorting to this parallel. The emergence of nuclear weapons on the world stage was a paradigm shift—one that resulted from remarkable breakthroughs in scientific ingenuity, but also embodied a new, and terrifying, form of existential danger to mankind. In so doing, it abruptly compelled strategists and decision-makers to reconceptualize many, if not all, of their core assumptions regarding the nature of coexistence, competition, and conflict. As the renowned strategist Bernard Brodie famously observed in 1946, “Thus far the chief purpose of our military establishment has been to win wars. From now on its chief purpose must be to avert them.”
Indeed, the threat of mutual annihilation and species-wide extinction underlined the pressing need for revised concepts of deterrence and stability, ones better suited for the atomic age, even as morally conflicted geniuses, such as Robert Oppenheimer or Albert Einstein, cautioned that “the unleashed power of the atom had changed everything save our modes of thinking,” thus raising concerns that the fate of the United States was ultimately “to drift toward unparalleled catastrophe.” One only has to hear the increasingly lugubrious warnings of AI luminaries, such as Geoffrey Hinton or Yoshua Bengio, on the existential threats tied to the technologies they themselves helped spawn to be immediately catapulted back into this earlier, more angst-ridden period in human history.
In reality, the materialization of an ASI, or of markedly more evolved and broad forms of AI, could be even more Janus-faced in nature, unleashing waves of scientific innovation, as well as catastrophic dangers and geopolitical shocks. Just as Winston Churchill warned, in a final speech to the House of Commons, of the perils of humanity’s technical prowess outstripping its ethical maturity, future leaders may find themselves grappling with an even more taxing set of technology-fueled risks and moral dilemmas. The risks tied to runaway AI, for instance, may come to loom over policymakers in a manner even more suffocating than the threat of nuclear annihilation during the Cold War.
Worse still, both threats may become interlaced, as AI-enabled systems become ever more integrated with the architecture of nuclear command, control, and communications (NC3). As of now, there is no body of thought in the field of strategic studies that quite rivals that of nuclear deterrence theory in its intellectual richness, diversity, and complexity when it comes to grappling with issues such as mutual vulnerability, escalation management, technological disruption, and counterproliferation. Policymakers and strategists wrestling with questions of stability tied to AI will therefore continue to find much of value in Cold War–era discussions of issues ranging from the stability-instability paradox to mutually assured destruction to conceptualizations of limited nuclear war, flexible targeting, and graduated deterrence—provided that they engage with this literature in a discerning and discriminating fashion.
Exercising such discernment is critical, if only because there are also notable differences between both technological revolutions, and therefore limits to the nuclear analogy. One of the first, and most obvious, relates to threshold observability. It is often assumed that great-power actors will have a clear sense of the scale, scope, and speed of their peer competitors’ advancements in the field of AI and will therefore be able to identify the critical threshold prior to the advent of superintelligence.
In reality, however, both recent developments (the shock over the cost-efficient breakthroughs of Chinese companies such as DeepSeek) and the history of great-power competition (the 1957 Sputnik moment) suggest that it can be difficult to gauge the exact state of an adversary’s technological development, or accurately predict a paradigm shift. It was much easier, for instance, for Washington to detect that the Soviet Union had covertly detonated a nuclear device in the hinterlands of Kazakhstan—British and American weather aircraft equipped with specially designed paper filters vacuumed up radioactive samples in the days and weeks following the explosion. Even then, however, “some high-level doubters” within the US national security establishment “did not believe, or did not want to believe, that America’s atomic monopoly had come to an end.” They had to be plied with corroborating intelligence material before finally coming to terms with the new strategic dispensation.
There is unlikely to be an equivalent to WB-29 paper filters when the first ASI is deployed. Neither party is likely to have intelligence able to fully gauge when the threshold moment has arrived, or when a recursive improvement loop has properly materialized. Moreover, there will most likely be a delayed and iterative process of acceptance when it comes to the recognition of ASI, and it may take time for us to collectively realize that we have ASI (or artificial general intelligence) “as opposed to arriving at this realization in a single moment,” especially “if it doesn’t meet our anthropomorphic conceptions of intelligence and utility.”
There is also debate over the imminence of ASI, and over the extent and amplitude of its future effects on geopolitics, with assessments and predictions—most notably over the question of whether performance will continue to scale with compute—seeming to shift on an almost monthly basis. It is certainly possible that the advent of ASI will confer a first-mover advantage through the emergence of a wonder weapon—a capability that can override cyber defenses, disable enemy command-and-control systems, and generally overwhelm, outwit, and outperform any enemy system. In such an eventuality, according to a recent report, “a nation with sole possession of superintelligence might be as overwhelming as the Conquistadors to the Aztecs.” ASI might also turbocharge scientific innovation, or vastly enhance industrial productivity, unleashing new magnitudes of economic growth and transforming the international distribution of power in its progenitor’s favor. In such a scenario, the analogy of the nuclear revolution—in its momentousness—appears compelling.
On the other hand, the nuclear analogy may risk exaggerating the immediacy of ASI’s effects. With the twin explosions over Hiroshima and Nagasaki, the atomic bomb’s seismic impact was both instantaneous and visible to all. Geopolitical shifts thereafter followed swiftly, from Japan’s surrender to the acceleration of the Cold War and the formation of countervailing alliance structures to the articulation of extended nuclear deterrence.
In contrast, even a powerful AI might not create an impact overnight. As scholars such as Jeffrey Ding have noted, history suggests that transformative technologies with broader applications (electricity, the internet) take years, even decades, to diffuse, and a state’s ability to absorb such technologies at scale is often more critical than its ability to innovate. AI usage—which appears to differ sharply by age, gender, and education levels—may therefore not automatically trigger a cascade of world-changing events without a profusion of complementary innovations and adaptations.
Then there is another key difference with the nuclear age: the alignment problem, that is, the challenge of ensuring a superintelligent agent’s values and actions remain durably compatible with a state’s objectives. There is no equivalent conundrum in the nuclear domain. Even one of the greatest Cold War fears (the inadvertent triggering of a nuclear doomsday device) was that posed by a rudimentary object, rather than an actor potentially vested with its own will. Thus, the core dynamics of control differ: Nuclear security ultimately boils down to controlling people (through chains of command, locks, treaties), whereas AI safety is also about controlling an intelligent system that might outfox its creators, or warp under adversarial pressure.
Advanced AI Will Accelerate and Distort—But Not Erase—Deterrence Dynamics
This injects a whole new host of challenges, and brings us to our next question—how might AI, in all its various evolutionary pathways, reshape our current understanding of nuclear deterrence? Those grappling with this complex set of issues often point to the effects the networked integration of a suite of AI-enabled technologies could have on the hider-finder competition—the longstanding contest between efforts to conceal nuclear forces and the opposing attempts to detect and target them. Through staggering advances in sensor integration, machine learning, and data processing, superintelligent AI could revolutionize different aspects of remote sensing—from image recognition to predictive modeling and continuous, automated surveillance.
A sufficiently advanced form of AI could also leverage newly emergent forms of detection technologies to devastating effect, zeroing in on infinitesimal undersea temperature disturbances, minute subatomic particles (neutrinos) ejected from missile exhausts, or evanescent wake vortices. By making concealable assets such as nuclear-powered ballistic missile submarines visible, or dispersible assets such as rail- or road-mobile intercontinental ballistic missiles more immediately fixable, the emergence of hyper-advanced AI could erase one of the structural pillars of mutually assured destruction, the survivability of second-strike capabilities, and usher in a new age of nuclear instability.
In reality, however, the advent of superintelligent AI is likely to correspond to a new, albeit accelerated, phase in the age-old hider-finder competition—a contest that, since the dawn of the nuclear age, has been characterized by a dynamic process of technological adaptation and counteradaptation. Indeed, advances in AI could come to benefit the hider as much as the finder—most notably by supercharging CCD (concealment, camouflage, and deception) capabilities. From the Trojan horse of Homeric legend to the Quaker guns of the Civil War to the ingenious inflatable tanks and sonic-deception devices employed by the US Ghost Army during World War II, competing militaries have long engaged in elaborate deception and information warfare campaigns in a bid to confound, redirect, or demoralize their opponents.
More recently, over the course of the ongoing war in Ukraine, both Russian and Ukrainian troops have deployed large numbers of decoys mimicking the visual, radar, and thermal signatures of military assets, with the aim of compelling one another’s forces to expend ammunition on false targets. Both warring parties have also waged aggressive cyber deception and information warfare campaigns for purposes of tactical deception or broader political disruption, with Russia displaying a particular proclivity for the use of deepfakes.
On an increasingly AI-permeated battlefield, these techniques will only grow in scale, scope, and sophistication. In addition to boosting electronic jamming and destructive cyber warfare efforts, superintelligent AI could generate a multilayered illusion or “fog-of-war machine,” splicing fake imagery into a satellite feed, feeding malicious inputs into automated detection systems, or conjuring up exquisitely crafted deepfakes designed to perfectly mimic key individuals in a country’s nuclear chain of command. Embodied AI, in the form of large numbers of autonomous underwater vehicles with the ability to dynamically adjust their signatures to mimic the behavior of a nuclear-powered ballistic missile submarine, for instance, could also play an important role in physically shielding second-strike capabilities from adversaries.
The process of acceleration will result from the heightened velocity of technological change and diffusion. Even though, as during the Cold War, there will most likely be a jagged quality to this iterative cycle of adaptation and counteradaptation, truly decisive lead time advantages in the hider-finder competition may thus prove both fleeting and hard to assess in real time. Perceived escalation pathways could appear ever less predictable and linear—potentially opening up “wormholes” in the “fabric of deterrence,” to employ one vivid metaphor from a well-known nuclear strategist and former US government official, “through which competing states could inadvertently enter.”
Meanwhile, the distortionary effects of AI on deterrence stability will flow from the growing reliance on—and weaponization of—deception and confusion. As Edward Geist has aptly noted, even though “a deception-dominant world appears incompatible with strategic stability as Westerners have typically conceived of it . . . it is far from obvious what its consequences may be.” Managing this state of pervasive uncertainty will thus constitute one of the fundamental deterrence challenges of the twenty-first century. On the one hand, chronic uncertainty over the nature of the nuclear balance may well prompt destabilizing behavior in the form of arms racing and increasingly aggressive scouting and intelligence-gathering efforts. On the other, however, this very uncertainty may serve to dampen preemptive pressures and reduce first-strike incentives. More broadly, anything that impedes, even if ever so slightly, confidence that nuclear weapons are always available for authorized launch could prove destabilizing. And in general, military intelligence professionals may find it increasingly arduous to adapt their information gathering and analysis to an ever more complex, protean, and clouded environment. Finally, when it comes to US declaratory policy, government officials may wish to temper sweeping assertions about the possibility and desirability of developing the equivalent of AI “Wunderwaffen” for fear of exacerbating a two-peer nuclear competition that the United States no longer necessarily has the financial or industrial wherewithal to dominate.
Different States Will Erect Different AI NC3 Architectures
Much will also depend on the degree to which AI and automation are integrated into a country’s NC3 architecture. Nuclear powers such as China, India, France, and the United States already possess radically different force designs and declaratory postures. Over time, the role each nuclear power accords to AI in its deterrent—its AI NC3 model—may come to prove as critical to policymakers as a country’s codified attitudes toward nuclear use. To what degree will semi- or full automation be integrated into the myriad different, often interdependent, systems that comprise the United States’ or China’s NC3, from strategic warning to decision support or adaptive targeting? The US NC3 architecture is estimated to comprise more than 250 individual ground, space, undersea, and airborne systems, operating at varying levels of automation. To what extent will increasingly AI-enabled conventional systems—such as advanced missile defense and data analytics systems—be entangled with core components of the nuclear deterrence architecture? With regard to the United States, the answers remain vague. In his March 2025 testimony before the Senate Armed Services Committee, General Anthony Cotton, head of United States Strategic Command, simply noted that AI and machine learning would be incorporated in a “tailored” fashion for purposes of “improved situational awareness and timely response,” and to “enable and accelerate human decision-making.”
Moreover, as Stanford University’s Herbert Lin has observed, the terms widely used by numerous countries in their public commitments on the military use of AI—such as keeping humans in the loop—suffer from a frustrating lack of definitional clarity and technical specificity, and it remains frequently unclear whether this “also includes the assessment of early-warning information that may be used in nuclear decision-making or in making recommendations for targeting.”
How might such an increased reliance on AI for information gathering and processing lead to more insidious kinds of risks to crisis stability? While some analysts have noted that “if properly implemented,” AI could reduce nuclear risk by “improving early warning and detection,” it is also worth noting that, without carefully designed buffers, human override mechanisms, and time-stretching policies, AI’s speed could outpace human judgment, raising the specter of inadvertent escalation initiated at algorithmic speed. As early as 1960, cyberneticists were emphasizing how stress and velocity could destabilize control hierarchies, warning that human control might be “nullified by the slowness of our actions.”
Both Washington and Beijing have committed, on more than one occasion, to maintaining tight human control over nuclear decision-making. Russia, on the other hand, has adopted a more ambiguous posture. Moscow has thus reportedly maintained its semiautomated Soviet-era Perimeter system, which was designed to automatically bypass layers of normal command authority and delegate nuclear authority downward in response to a nuclear strike disabling leadership communication networks. Some US analysts have recently argued that the United States should adopt an AI-enabled “dead hand” system of its own—both to assure retaliation and to render US nuclear signaling and threats more credible. Only Russia, however, has so far expressed a willingness to deploy fully autonomous nuclear-armed systems, in the form of the recently much-discussed Poseidon (or Kanyon), a nuclear-powered and nuclear-armed unmanned underwater vehicle under development.
This destabilizing development could be construed, in part, as a direct manifestation of Russia’s growing appetite for nuclear brinkmanship and residual fears over the credibility of its deterrent. The question is the degree to which adversaries concerned over enhanced US investments in missile defense—most notably under the aegis of ambitious new initiatives such as Golden Dome—might come to view automation as a necessary component of their retaliatory capabilities. North Korea, for example, seems to have suggested that it might be interested in incorporating elements of full automation into its nuclear posture, stating in 2022 that “in case the command and control system over the state nuclear forces is placed in danger owing to an attack by hostile forces, a nuclear strike shall be launched automatically and immediately.”
The undersea domain, in particular, is likely to be perceived by nuclear-armed adversaries as the environment most conducive to leveraging AI and autonomy, given its uniquely challenging operational characteristics, especially with regard to the maintenance of reliable real-time communications. Chinese analysts have pointed to the potential utility of large shoals of long-range unmanned underwater vehicles in a counterforce role, and more specifically when it comes to continuously tracking or prosecuting US nuclear-powered ballistic missile submarines transiting through certain key maritime chokepoints.
There is another major risk to nuclear stability that warrants greater scrutiny: the degree to which adversarial manipulation or tampering with superintelligent AI may inadvertently degrade or disable its alignment architecture. Deceptive tactics such as data poisoning, sensor spoofing, or signal manipulation could have dangerous and cascading effects on AI behavior. If alignment features are part and parcel of an ASI’s perception models, a continuous flood of corrupted inputs could either erode or disable such features entirely. As experts have noted, AI alignment is not a static safeguard but a dynamic process, which could buckle under adversarial conditions. Something of a perverse feedback loop could emerge, whereby the more stress is applied to an AI NC3 system, the more it will begin to self-optimize toward greater autonomy and defensive behavior.
This raises an important and potentially troubling puzzle for nuclear strategists: if data poisoning, signal tampering, and digital deception become ubiquitous features of the future hider-finder competition, how can one ensure that the continuous deployment of such tactics does not greatly exacerbate the risks of a runaway ASI being inadvertently unleashed due to the collateral destruction of its guardrails? A shared realization of such risks might eventually prompt nuclear powers to either tacitly decide, or openly state, that certain forms of meddling with nuclear-adjacent or nuclear-enmeshed ASIs are off limits, for fear of catastrophic alignment collapse.
Finally, it is also worth considering how the proliferation of advanced AI to nonnuclear states—including technologically developed nonnuclear US allies—may impact nuclear stability and escalation. For example, how might the diffusion of more exquisite ASI-enabled tracking and targeting systems to the Republic of Korea, a country that already quietly subscribes to a conventional counterforce strategy vis-à-vis its quarrelsome northern neighbor, affect crisis stability on the Korean peninsula?
The Nonstate and Private Actor Dimension
Another key difference with the early nuclear age is the problem posed by private sector and open-source development. The scientists of the Manhattan Project, or of its later Soviet equivalent, Institut A, toiled in monastic, state-imposed isolation. Recent advances in AI, on the other hand (including China’s DeepSeek), are in large part attributable to the open sourcing of many foundation AI models. Until recently, US AI executives and companies appear to have had wildly divergent—and fluctuating—attitudes as to whether it is more commercially judicious to favor proprietary, closed-source development or to pursue a more open-source release strategy. The current administration of President Donald Trump, for its part, has clearly stated that it is supportive of open-source and open-weight development, largely in order to accelerate the global diffusion and adoption of American, rather than Chinese, AI technology. The issue of whether the scientific benefits of an open-source approach are worth the risks is one of the most consequential of our time, as is the question of whether the United States, China, and other emergent AI powers might be willing to work together to prevent the proliferation of advanced AI model weights to malign entities.
Meanwhile, commentators have drawn attention to the growing role played by the US private sector in defense innovation, and to the fact that, notwithstanding the economic benefits of such a model, there are sizeable national security risks tied to becoming overly dependent on a handful of corporate actors. Concerns have grown in some quarters over the potential diffusion of decision-making power away from the state, and the possibility of crucial developments in military technology unfolding beyond the control, or even knowledge, of the US government. This knowledge gap could jeopardize arms control efforts. Indeed, during the Cold War, arms control arrangements were predicated on a shared willingness to accept greater transparency, and negotiated on the basis of a granular understanding of the composition of each party’s respective nuclear arsenals.
Moreover, private AI developers, governed by mercenary incentives, may seek to privilege innovation over alignment, or to evade export controls by proliferating sensitive technology, thus undermining an increasingly tenuous techno-military balance. In times of crisis, private actors might also freelance, whether for ideological or self-serving reasons, in ways detrimental to US national security interests: refusing to share a key technology with the US government, or engaging privately in acts of aggressive cyber espionage against foreign state-affiliated rivals that inadvertently catalyze conflict.
Finally, US private sector AI leadership also means that vital infrastructure and talent are outside secure government facilities, rendering them more vulnerable to espionage, sabotage, and disruption. Both the United States and China have engaged in extensive cyber espionage campaigns to steal AI-related data and intellectual property, and Russia has embarked on a campaign of industrial sabotage across the European continent. If sensitive military AI algorithms or chip designs reside on corporate networks, an adversary might hack those networks as a way to hobble an opponent’s future capabilities. Such actions, if discovered, carry escalatory implications: Hacking a private company’s servers or poisoning its data in peacetime could be seen as a serious provocation.
So could the foreign targeting of leading US AI scientists—whether through espionage, coercion, or assassination. It remains unclear whether this uncomfortable situation—whereby AI labs developing potentially transformational military technologies are still operating like commercial tech companies—is sustainable in the long run. At some stage, suggests one expert, a “tiered risk governance framework” may become necessary, one that “distinguishes between levels of danger and scales regulatory demands accordingly,” with relatively low-risk models remaining more or less unregulated, but higher-risk models necessitating “something closer to military-grade governance.”
In the absence, however, of a US declaratory policy that declares certain privately owned companies off limits, or that openly stipulates that certain core elements of compute infrastructure, for instance, would fall under its nuclear umbrella in times of war, the United States’ private sector dominance in the military-technological sector may come to constitute a dangerous seam in its deterrent—one that more statist adversarial societies might be all too ready to exploit. Some of these challenges are not wholly novel in nature. Contrary to the conventional wisdom touted by foreign policy pundits such as Ian Bremmer, the global order has not been solely “defined by states since the Peace of Westphalia.” From the transnational economic and military networks of the medieval knightly orders to the powerful mercenary companies of the Renaissance to the empire-building shareholders of the East India Company, potent nonstate actors have regularly risen to openly challenge the nation-state’s monopoly of violence and order building. In some cases, these nonstate rivals have been tolerated, while in others they have been co-opted, absorbed, or even exterminated.
In times of existential struggle, nation-states have requisitioned or nationalized private economic assets; President Abraham Lincoln nationalized the telegraph network and requisitioned railways during the Civil War, and the US government took control of the Merchant Marine during World War II under the aegis of the newly established War Shipping Administration. Similarly, in the event of a great-power conflict with China, the United States could conceivably invoke the 1950 Defense Production Act or other emergency powers to compel private AI companies to support its military needs.
What is unprecedented is the way in which the prevalence of the private sector in AI may complicate nuclear deterrence, as the dual-use technologies these companies produce become ever more interwoven with America’s evolving NC3 architecture. Indeed, we currently possess no adequate historical analogy or frame of reference as to how to manage this complex intermingling. During the Cold War, the slow intellectual maturation of deterrence theory unfolded against the stabilizing backdrop of unitary state actors exerting tight control over physically measurable nuclear arsenals. In the twenty-first century, devising a framework for nuclear stability will need to account both for the rapid diffusion of dual-use technologies and for the presence of nonstate actors with divergent goals and degrees of independence from state authority. This may require, as Colin Kahl and Jim Mitre have argued, that the United States devise tailored forms of public-private partnerships or security compacts. For example, in exchange for the US government being given much greater insight into and faster access to privately developed frontier models, AI companies could be provided with intelligence on foreign adversaries, favorable access to cloud-computing resources, or Department of Energy–run federal sites with the kind of infrastructure (in the form of nuclear power plants) that large-scale AI data centers require.
This constitutes another interesting parallel with the nuclear age: the custodial role played by the Department of Energy (and its predecessor the Atomic Energy Commission), with the government providing support in terms of access to land and resources for nuclear projects, while ensuring security through regulation and oversight. Indeed, as the nuclear and AI industries enter into a more symbiotic relationship, the department’s Office of Nuclear Energy is poised to play a critical role in shaping the trajectory of public-private AI partnerships in the United States.
Finally, the role played by the US private sector in the development and maintenance of a strategic dual-use technology may necessitate the public articulation of a series of red lines and deterrence thresholds, whereby some private AI laboratories are declared off limits in the event of conflict. The Department of Homeland Security’s current list of sixteen critical infrastructure sectors may also need to be expanded to include certain precategorized strategically sensitive frontier AI laboratories and data centers.
In 1946, Frederick Dunn, a scholar of international relations, commented on the perverse duality of the dawning nuclear age: “Like all physical forces, [nuclear power] was morally indifferent and could just as easily serve evil purposes as good. Unless some means can be found for separating out and controlling its powers of annihilation, the scientists’ most striking victory of all time threatened on balance to become the heaviest blow ever struck against humanity.” Like the nuclear revolution, ASI presents both extraordinary potential for scientific and societal advancement and profound existential risks. Just as nuclear weapons necessitated a radical rethinking of international relations, arms control, and deterrence theory, the putative emergence of ASI similarly challenges traditional concepts of power, control, and stability in the modern world.
As strategists and policymakers grapple with these challenges, it is vital that they both draw insights from the atomic age and recognize the unique dynamics of AI. In particular, the lead role played by private industry will require not only new forms of public-private partnerships, but also a rethink of core aspects of US approaches to deterrence, escalation management, and homeland defense. There is an urgent need to think through the risks and opportunities tied to the growing enmeshment of AI with nuclear command-and-control architecture. While AI may profoundly affect various aspects of the nuclear competition, it is unlikely to fundamentally revolutionize its core characteristics. The balance between detection and concealment may not necessarily skew automatically in favor of exquisite counterforce—and advanced AI seems more likely to accelerate and distort extant deterrence dynamics than wholly eliminate them. As in Ukraine, there will be a saw-toothed aspect to these iterative cycles of competition, with AI sometimes favoring the finder, at other times favoring the hider. One thing is certain, however: This is an area of study that will only become more essential over time, one that will require the sustained attention of the defense policy world’s finest minds. Absent such attention, as that most visionary of early science fiction writers, Aldous Huxley, warned, “technological progress” may only provide us with “more efficient means for going backward.”
Iskander Rehman is an independent defense and foreign policy analyst and a senior visiting editor at War on the Rocks. Research for this essay was conducted while Dr. Rehman was a senior political scientist at the RAND Corporation. The author is grateful for the detailed feedback provided by Jim Mitre, Matan Chorev, Joel Predd, Rebecca Hersman, Edward Geist, and Ankit Panda on an earlier version of this work.
The views expressed are those of the author and do not reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.
