Modern warfare generates mountains of data from satellites, drones, and other sensors. The sheer volume makes identifying meaningful patterns extremely difficult, and the problem demands a deliberate conceptual solution. Enter the strategic centaur.

Consistent with the third offset strategy, the strategic centaur concept aims to reconcile human intuition and cognitive computing at the strategic level of war, within the dynamic and unpredictable nature of modern conflict. Western approaches to warfare currently lack a sufficient concept for AI integration at the strategic level. Without a guiding framework for strategic integration, AI adoption risks operational and tactical missteps that will only accelerate human carnage. Adapting military organizational processes, planning culture, and senior education to harness the cognitive power of AI is essential. The result? Faster, more accurate strategic decisions and, ultimately, strengthened warfighter lethality.

This discussion is not a philosophical exercise. It is a call to action based on observed human-machine (hybrid) performance, real-world experience in both warfighting and strategic planning from the corps through the combatant command level, and over a decade of research on AI as an emerging technology.

The Strategic Centaur

Future warfare will see humans augmented by machines with access to curated, authoritative, and timely data operating as strategic centaurs. Paul Scharre, senior fellow at the Center for a New American Security, introduces the concept of centaur (a being with the head and torso of a human and the body of a horse) warfighting, arguing that “the best systems will combine human and machine intelligence to create hybrid cognitive architectures that leverage the advantages of each.” Scharre points to chess as a helpful analogy: cooperation between humans and machines produces a better game than either could have played alone. Minotaur (a creature with the head of a bull and the body of a human) warfighting, on the other hand, describes machine-led control over human activity. Minotaur relationships are commonplace in Amazon’s fulfillment warehouses, where humans follow the direction of complex algorithms that determine which goods must be shipped and to which addresses. In this relationship, as Robert Sparrow and Adam Henschke explain, “human beings are thus reduced to being the hands of the machine.”

Physical and cognitive human-machine integration are battlefield realities now and will only grow in application. At the tactical level, for example, the US Army’s Integrated Visual Augmentation System provides real-time tactical data through mixed-reality goggles. Complex algorithms and data processing will naturally push this integration toward a minotaur dynamic. Minotaur warfighting, wherein the machine determines the target and presents limited criteria for human intervention, will erode the centaur model’s human control in time-constrained environments. Notably, in centaur chess, when the player has between thirty and sixty seconds to make a move, the “human does not add any value,” and the centaur model breaks down. There will be a natural evolution toward, and affinity for, minotaur warfighting because the speed of decision-making provides a tactical advantage and human involvement comes to be seen as a limiting factor. At the tactical and operational levels, these may be appropriate evolutions. At the strategic level, however, a strategic minotaur (a generative AI general officer, in a sense) presents moral and ethical complexities that data alone cannot fully measure. The requirement for strategic centaurs is therefore unavoidable in the near term. They are already here, and humans must begin training for hybrid integration.
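The drift can be made concrete with a minimal sketch in code. The toy decision loop below is a deliberately simplified illustration, not a model of any fielded system: every name is invented, and the thirty-second floor is an assumption borrowed only from the centaur chess observation above.

```python
# Toy sketch: how a shrinking time budget squeezes the human out of a
# centaur pairing. All names and values here are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Recommendation:
    action: str
    machine_confidence: float  # 0.0-1.0, produced by the machine analytic


# Below roughly thirty seconds, centaur chess suggests the human
# "does not add any value."
HUMAN_VALUE_FLOOR_SECONDS = 30


def decide(rec: Recommendation, seconds_available: float) -> str:
    """Report who actually makes the call under a given time budget."""
    if seconds_available >= HUMAN_VALUE_FLOOR_SECONDS:
        # Centaur mode: the human has time to interrogate the machine's output.
        return f"human decides (machine advises: {rec.action})"
    # Minotaur drift: no time for meaningful review, so the machine's
    # recommendation is executed with, at most, perfunctory human assent.
    return f"machine decides: {rec.action}"


if __name__ == "__main__":
    rec = Recommendation(action="engage", machine_confidence=0.92)
    for budget in (120, 45, 10):
        print(f"{budget:>3}s budget -> {decide(rec, budget)}")
```

The point of the sketch is structural: nothing in the loop forbids human control, but as the time budget shrinks the human branch is simply never reached.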

Hybrid Intelligence and the Strategic Level of War

Joint doctrine consolidates national policy, global strategy, and theater strategy. In practice, this means that the secretary of defense translates national policy into strategic objectives, while combatant commanders and theater commanders align the elements of military power within their geographic areas to support those objectives through campaign plans and long-range planning guidance. Army doctrine, by contrast, bifurcates the strategic level of war into national strategic and theater strategic processes. This nuance is useful for applying military power in the land domain, providing a mechanism to clearly understand warfighting responsibilities alongside the other obligations inherent to command of a theater. Administrative obligations and responsibilities are confusing during transitions from competition through conflict, yet essential to applying combat power. V Corps’s experience in the transition from competition to crisis in 2022, for example, highlighted these tensions. These legal nuances detail the authorities and responsibilities that empower commanders to make decisions.

Hybrid intelligence will have effects across the competition continuum, arguably with the greatest impact at the theater strategic level. In competition, hybrid intelligence at the corps and theater levels will enable rapid cognition through machine advising to (1) prepare command authorities, (2) achieve positions of advantage to deter by denial or prevail in conflict, and (3) advise on conflict initiation to achieve strategic effects. When competing against adversaries with access to similar forms of AI, the decision to initiate war is crucial because it can determine who captures the initial strategic and operational initiative. Given AI’s potential to increase the tempo of war at the operational and tactical levels, the state that gains the initiative will have the potential to exploit that advantage to decisively shape conflict outcomes.

AI and the Operational and Tactical Employment of Forces

Strategic decisions in an AI-enabled, high-tempo battlespace will drive outsized operational and tactical impacts. Once a strategic pathway is determined, the force that can plan, prepare, and conduct sustained tactical actions across all domains to accomplish a strategic effect faster than its adversary will hold a dominant position in aligning incentives to terminate conflict. This makes physical actions, whether at the squad or the brigade level, potentially strategic. Precise coordination, machine-augmented analysis, and real-time adjustments increase the probability of achieving decisive effects. But this process begins with cognition. The side that achieves decision dominance owns the tempo and dictates the terms of the fight. Rapid cognition allows commanders to forecast, anticipate, and align tactical actions with strategic objectives along critical paths. Near-instantaneous collection and transmission of data will overwhelm human ability to process it alone and will thus require human-machine integration to achieve understanding faster than the adversary. The risk is notable: if data is not trusted or is susceptible to corruption, rapid decision-making will only produce rapid failure. However, nations capable of developing processes to train machines and humans to integrate information via trusted, enterprise-level solutions will have a marked advantage in making informed and accurate strategic decisions.

By way of historical example, Marc Bloch, a French historian who served in World War I and on the French 1st Army staff early in World War II, recorded many strategic conversations throughout May 1940, preceding the fall of France in June of that year. His book, Strange Defeat, records these conversations and his analysis. His insights remain prescient, and more foreboding in a hyper-accelerated information space. Commenting on the strategic failures of French leadership, he specifically notes an inelasticity of mind and lack of imagination, as well as the German “triumph of intellect,” as antecedent factors in Germany’s watershed operational success. Laggard and inelastic thinking at the strategic level compounded French inefficiencies, leading to defeat. Imagine these same dilemmas orchestrated by adversaries acting at electronic speed: falling behind cognitively, particularly at the strategic level, risks operational and tactical defeat. The player who falls behind in the AI-enabled decision-dominance game may never catch up.

Human-machine integration is the centerpiece of this process. While AI accelerates analysis and prediction, human judgment provides the nuance, creativity, and strategic foresight essential for decision dominance. Machines enhance speed and precision, but humans ensure the right decisions are made, creating path dependencies before rounds are fired that align potential tactical actions with strategic goals in an increasingly fast-paced battlespace.

Today’s strategic leaders must develop instincts about their AI systems—recognizing when their machines detect patterns or risks others might miss. AI processes data faster than any human but lacks the ability to understand, adapt, or make ethical judgments. Humans remain indispensable for integrating the data-driven insights of AI into the broader context of strategy and morality or when acting in complex environments containing unknown problems with unknown solutions. Those able to train humans and machines within and across command echelons with curated data will achieve leap-ahead offsets over competitors. Within the next ten years, developing the strategic centaur will prove critical in out-maneuvering competitors.

The Vampire Fallacy and the Strategic Centaur

Those who have served in the military for twenty years or more will recognize the danger of over-promising technological superiority as a guarantee of operational supremacy. The belief that battlefield omniscience gained through superior technology and sensor proliferation can overwhelm an adversary via precision attacks recurs cyclically in Western approaches to warfare. Retired Lieutenant General H. R. McMaster termed this the “vampire fallacy” because it just won’t die. In societies where casualty aversion remains a virtue, approaches to warfare that trade nonhuman systems for human ones will always be preferable, and this preference has bred overconfidence in technology’s ability to achieve decisive results. The last cycle to promote the belief that fewer, technologically sophisticated systems could achieve greater strategic effects than massed manpower was termed “effects-based operations” (EBO). Evolving out of the First Gulf War in the 1990s, EBO began to lose its central place in US doctrine in 2003 and was formally removed in August 2008.

So, how is the adoption of AI different from EBO? How does the strategic centaur framework mitigate the risk of falling into a similar trap? By pairing human judgment with machine precision, the strategic centaur counterbalances a dangerous human overconfidence in machine supremacy while simultaneously placing machine restraints on human fury. Israeli operations in Gaza provide a useful example. An Israeli AI system called Lavender has tracked the names of nearly every individual in Gaza and cross-referenced each identity with other information feeds to determine the probability of an individual’s participation in a terrorist group. If Lavender determined with 90 percent probability that an individual was a member of a terrorist group, it sent the target package to a human analyst, who spent an average of twenty seconds on each package, for final review. As expected, the review was perfunctory, often limited to ensuring only that the target was a man. As reported by the Washington Post, Lavender’s AI targeting increased the lethality of Israeli strikes, supplanting strategic oversight and individual reasoning with technological prowess. The consequences may have produced desired operational effects, but strategic calculations concerning meaningful war termination and subsequent peacebuilding were likely absent, and are now potentially more difficult. Furthermore, some suspect the focus on AI weakened Israeli intelligence capabilities in ways that are only now being rectified. This debate, with ethics at its core, illustrates the importance of the strategic centaur concept.
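The structural problem becomes clearer when the reported pipeline is sketched in code. The Python fragment below is emphatically not Lavender’s implementation; the record fields, function names, and logic are hypothetical, assembled only from the details reported above: a 90 percent probability gate, a twenty-second average review, and a check that the target was a man.

```python
# Hypothetical sketch of a threshold-gated review pipeline, built only from
# the details reported above. Nothing here reflects any real system's code.

PROBABILITY_THRESHOLD = 0.90  # reported gate for forwarding a target package
REVIEW_BUDGET_SECONDS = 20    # reported average analyst time per package


def machine_nominates(records: list[dict]) -> list[dict]:
    """Machine stage: forward every record at or above the threshold."""
    return [r for r in records
            if r["membership_probability"] >= PROBABILITY_THRESHOLD]


def human_reviews(package: dict, seconds_spent: float) -> bool:
    """Human stage: with ~20 seconds, review collapses to one crude check."""
    if seconds_spent <= REVIEW_BUDGET_SECONDS:
        return package["sex"] == "male"  # the perfunctory check reported above
    # A larger time budget would allow genuine multi-source review instead.
    raise NotImplementedError("deliberate review never occurs in this pipeline")


if __name__ == "__main__":
    population = [
        {"id": 1, "membership_probability": 0.95, "sex": "male"},
        {"id": 2, "membership_probability": 0.40, "sex": "male"},
        {"id": 3, "membership_probability": 0.91, "sex": "female"},
    ]
    for pkg in machine_nominates(population):
        verdict = "approved" if human_reviews(pkg, seconds_spent=20) else "rejected"
        print(f"record {pkg['id']}: {verdict}")
```

Whatever the real system looks like, the sketch makes the centaur-minotaur distinction concrete: a human stage exists, but its time budget determines whether it exercises judgment or merely ratifies the machine’s output.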

As discussed, the tendency toward minotaur relationships at the operational and tactical levels is expected, particularly when emotional fury erodes operational discipline. The strategic centaur seeks to combine the best of humans and machines, leveraging the advantages of each and guarding against the vulnerabilities of both. By integrating machine-driven insights into senior leaders’ decision-making processes, the strategic centaur acts as a bulwark against the innate tendency to outsource difficult decisions to machines in the name of efficiency. It ensures that humans remain central to the decision-making process, leveraging AI to enhance cognition and tempo without surrendering control to it. This integration expands cognition, enabling military leaders to better understand the strategic effects of operational actions, and avoids the unintended consequences of overreliance on automation.

Strategic leaders must recognize two dangers as this technological inflection reshapes how militaries fight. First, algorithmic speed and data processing will naturally pull armies toward minotaur warfare. If the machine can do it better, why would humans get in the way? This pull must be understood and resisted at the strategic level. In peripheral, proxy, and irregular conflict, undisciplined AI integration will seek to apply technological superiority, maneuver in the information space, and deliver overwhelming lethal effects without an adequate understanding of termination criteria. Similarly, between peer competitors in large-scale combat operations, the lack of a strategic concept for AI integration will produce a noble effort to expose the adversary’s inevitable flank and to maneuver or strike with precision and unmanned mass across all domains. However, the precise massing that AI enables will be countered by adversarial AI whose battlefield omniscience negates the ability to maneuver unseen into positions of advantage. The strategic centaur framework seeks to integrate human judgment with machine precision and speed, ensuring that AI-enabled warfare remains grounded in ethical and strategic considerations. In both large-scale combat operations and irregular warfare, this hybrid approach enables militaries to exploit the advantages of AI while avoiding the overconfidence and missteps that have plagued past attempts to integrate technology without a proper concept for employment.

Second, AI’s weaponization will enhance the first mover’s potential to seize the initiative and leverage tempo and speed for decisive advantage. Increasing the tempo of warfare, however, increases the risk of unintended escalation. The perception of a first-mover advantage may predispose military leaders to early uses of violence or to threatening force postures that foster an unbalanced security dilemma. For this reason, Carl von Clausewitz’s military genius possesses the “gifts” of both intellect and temperament. Courage, born of physical exertion and suffering and combined with novel ingenuity, may be uniquely human, beyond machine augmentation, and the best defense against a purely algorithmic adversary, or against our own hubris. Warfare will become a violent and rapid illustration of human value judgments, sacrifice, and ethics. The United States armed forces must adopt AI at pace because their adversaries will. But they must do so with eyes wide open to the fallacy that war can be sanitized of either its violence or its humanity.

William J. Barry, PhD, is the professor of emerging technology in the Center for Strategic Leadership at the US Army War College.

Colonel Chase Metcalf is an assistant professor in the Department of Military Strategy, Planning, and Operations at the Army War College and deputy director for the Ukraine War Integrated Research Project.

Lieutenant Colonel Aaron “Blair” Wilcox is an assistant professor and deputy director in the Strategic Landpower and Futures Group at the US Army War College.

The views expressed are those of the authors and do not reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.