Some of military history’s most decisive victories came not from perfect planning but from bold risks whose chances of success were so slim that no modern algorithm would ever recommend them. Still, we remember commanders who achieved the seemingly impossible in apparent defiance of overwhelming odds. Call it the “hold my beer” moment: Miltiades at Marathon, Chamberlain’s bayonet charge at Little Round Top, or Eisenhower launching D-Day through a narrow weather window. These were calculated risks that defied the odds, surprised the enemy, and changed the course of history.

Today, as people increasingly rely on artificial intelligence and decision-support systems to guide their choices, they build courses of action on ever larger bodies of data. But machines and large language models are designed to favor options with higher statistical probabilities of success over Clausewitzian calculations of chance, moral forces, and human instinct that seize fleeting opportunities. The risk for military commanders is that, in the name of harnessing AI, we might lose the willingness to make bold, high-risk decisions in the moment, especially when AI recommends otherwise. If we entrust war to the machine’s logic, we may win battles of efficiency but lose the wars of will. For all its remarkable capabilities, artificial intelligence lacks the human will to dare.

The military now has the potential to use AI tools such as large language models (LLMs) to integrate intelligence quickly and effectively and to model courses of action for commanders. An LLM-embedded military decision-making process can identify, analyze, and integrate amounts of data that far surpass the capacity of any human planning staff. Indeed, thinking and writing without AI tools is like shooting a rifle without a scope. While an AI-enabled staff can develop plans and courses of action with a speed and breadth of understanding far beyond what experienced officers staring at a map can achieve, we wonder whether there is an accompanying risk to audacity. But the real question is not whether AI will replace audacity. It is whether militaries will design, integrate, and culturally absorb these tools in ways that preserve, rather than undermine, the human capacity for bold judgment under uncertainty.

Who Dares Wins

AI can never calculate the primal horror, fear, and chaos of battle. Combat consistently demands decisions under extreme uncertainty and cognitive stress. Actions such as participating in a room-clearing stack, advancing across exposed terrain, or closing with an adversary require decisive action when calculation alone is insufficient. Carl von Clausewitz reminds us that boldness is the very steel that gives the sword its edge and brilliance.

Audacity is not the same as recklessness. Risk calculus has always mattered in military decision-making. Clausewitz famously emphasized chance, friction, and moral forces—factors that are hard to calculate but cannot be ignored. Many of history’s audacious battlefield decisions were anything but impulsive. The Normandy invasion was preceded by years of planning, intelligence collection, and long-term deception operations. Eisenhower’s fateful decision to go on June 6, 1944, was bold precisely because it rested on an informed appreciation of uncertainty, not ignorance of it. The gamble lay not in disregarding analysis, but in accepting its limits.

Marathon: Betting Everything on Shock

While there is no sure way to win a battle when outnumbered, there is one guaranteed way to lose: Do nothing. In 490 BCE, a small force of ten thousand Athenians and their allies faced a Persian army of over twenty thousand at the Battle of Marathon. Any gambler would have bet on the Persians in this fight, and perhaps AI would have too. They had numerical superiority, advantageous terrain, and momentum at their backs, having just won a victory over another Greek city-state, Eretria, whose population they enslaved. The Persians expected yet another easy win. At first, the Athenians stayed in their camp on high ground, watching the massive Persian force assemble on the beach in front of their position.

Yet, as the battle commenced, the Athenians chose not to hold the hill and fight on the defensive. Instead, they did the unthinkable: They left the high ground and charged a numerically superior force.

The Athenian charge seized victory from the jaws of defeat and has resonated throughout history, remaining alive in university and military academy classrooms as perhaps one of the most famous gambits to defy conventional wisdom in Western military history. The Greeks quickly closed the distance across the plain of Marathon and collided with the Persians’ lightly armored ranks. The Persians were never allowed to wage the battle they preferred, raining arrows down on the Greeks or committing their cavalry. Instead, the fight came at close quarters, where a human tidal wave of bronze and sinew amplified the existing strength of the phalanx, the advantage of hoplite equipment, and the initiative of the Greek general Miltiades.

On the display of any AI decision tool, the Athenians’ chosen course of action would have been painted red—high risk, low probability, avoid. But on the field of battle, it worked. The sudden shock broke the Persian line, and the Persians fled to their ships. The gamble saved Athens, preserved Greek independence, and indirectly set the stage for the rise of Western democracy.

Little Round Top: Bayonets Against the Odds

Fast forward to July 2, 1863, the second day of the Battle of Gettysburg. On the Union Army’s far left flank, Colonel Joshua Lawrence Chamberlain and the 20th Maine were ordered to hold Little Round Top “at all hazards.” By this point in the Civil War, the 20th Maine was not a full regiment. After months of hard campaigning, sickness, and casualties, Chamberlain’s ranks had been badly depleted—barely 350 men were prepared to defend the rocky hill that anchored the entire Union line. Facing them were wave after wave of Confederate assaults by veterans hardened by years of combat.

As the day dragged on, the 20th Maine fought through heat, exhaustion, smoke, and chaos. The Union soldiers repelled charge after charge, firing until their ammunition nearly ran out. It seemed that the situation was hopeless. Reason—and any modern algorithmic decision-support system—would have recommended withdrawal. Their line was thin, their flank exposed, and their cartridges almost depleted. But Chamberlain understood what machines can’t: the intangible factors of momentum and morale. A retreat here could unravel the entire Union position. He grasped in that moment that the only way to hold was to attack. When the next Confederate charge climbed the slope, Chamberlain gave an audacious order: Fix bayonets.

With a shout, the remnants of the 20th Maine surged forward in a wheeling charge, crashing into the stunned Confederate lines. So unexpected was this counterattack that the commander of the 15th Alabama, Lieutenant Colonel William C. Oates, believed Chamberlain must have received reinforcements. In truth, no reinforcements had come; Chamberlain and the 20th Maine were out of everything except raw human courage. That daring charge broke the Confederate assault, took dozens of prisoners, and protected the Union flank. It was a bold gamble that no algorithm would ever support. Yet that single human decision, made in chaos and courage, helped tip the scales not just of a battle, but of a war that changed America for the better.

D-Day: Through the Weather Window

In the days leading up to the D-Day invasion, Allied commanders studied meteorological charts filled with bad news. The weather over the English Channel was stormy and unpredictable—high winds, low clouds, and heavy seas battered the invasion fleet’s staging areas. Landing 156,000 troops, thousands of vehicles, and mountains of equipment under such conditions seemed impossible. Logic suggested a delay.

The safer call, and indeed one that many staff officers urged, was to wait for a better window. A decision-support system assessing the probabilities would likely have recommended the same. But General Dwight D. Eisenhower understood something no machine could quantify: the intangible costs that hesitation would impose on his armada. Secrecy and the deception campaigns were already stretched to their limits. Each day of delay gave the Germans more time to reinforce beaches, mine approaches, and strengthen defenses. Waiting for perfect conditions could mean missing the only fleeting chance for surprise.

The Germans, for their part, were confident that no invasion was imminent. Their meteorologists, cut off from Atlantic weather data, predicted that the storm would last for days. Field Marshal Erwin Rommel left his headquarters to celebrate his wife’s birthday, convinced that the Channel seas made invasion impossible. So confident were the defenders that the panzer divisions were held under the tight control of the high command and could not move without the approval of Hitler, who, asleep at his headquarters, was not woken until hours after the landings began. But the Allies, aided by Atlantic weather data the Germans lacked, detected a narrow thirty-six-hour break in the tempest. Eisenhower seized it. At 4:15 a.m. on June 5, after long silence and visible strain, he said, “Okay, let’s go.”

The gamble paid off. The storm still raged, but the Germans were unprepared; their defenses were manned at half strength, their armor still held in reserve. A commander overly reliant on AI’s calculations would have waited for clear skies. Eisenhower read the chaos and chose audacity over caution. That single leap through the storm changed the fate of the world.

Clausewitz, Chance, and Moral Forces

Clausewitz described war as a remarkable trinity of violence, chance, and reason, connecting the passions of the people, the realm of chance and probability in which commanders and armies operate, and the political aims of the state. In contrast, AI, by design, will try to tame chance and make reason dominant. It will smooth out the volatility of human emotion, compress uncertainty with better data, and offer courses of action that minimize risk. In doing so, it could fundamentally alter the balance of the trinity. The trouble is that the irrational element—the willingness to accept great risk to win—has often been the spark that turns a stalemate into a victory. Machines will calculate and weigh probabilities, but they cannot recognize the fleeting moment when risk becomes opportunity.

This distinction matters because contemporary debates about AI often collapse judgment and calculation into a false binary. Decision-support systems do not make decisions. They structure information, generate options, and illuminate trade-offs. Whether they encourage caution or enable boldness depends on how commanders use them, and how institutions reward or punish risk.

In some cases, AI may actually enable audacity. Better situational awareness, faster data fusion, and improved logistics forecasting can give commanders the confidence to accept risks they might otherwise avoid. A clearer understanding of adversary vulnerabilities or operational constraints can expand, rather than narrow, the menu of feasible options. Historically, uncertainty has not always bred boldness; it has often produced paralysis. Clausewitz might have recognized AI as a tool, but he would warn against letting it reshape war into a purely rational exercise devoid of passion.

The Risk of Algorithmic Caution

Modern militaries introduced analytical tools, staff processes, and decision aids in part because human judgment is fallible, prone to overconfidence, groupthink, and wishful thinking. AI is the latest iteration of a long effort to discipline those weaknesses. The danger is not that militaries will become too rational; it is automation bias. Humans tend to defer to systems that appear authoritative, especially under time pressure. If decision-support tools consistently privilege probabilistic success, minimized losses, or institutional risk aversion, commanders may find it psychologically and professionally harder to override them, even when circumstances demand it. Over time, this can reshape organizational norms, subtly redefining what reasonable risk looks like.

Decision-support systems will be highly effective at specific tasks: quickly analyzing battlefield data, optimizing logistics and force deployment, and simulating likely enemy reactions. But they will also miss critical factors, underestimating the psychological effects of bold action, overvaluing numerical safety, and failing to grasp what resists quantification, like morale, willpower, and fear. In other words, they will be bad at recognizing and exploiting moments when audacity will be rewarded, not because the probabilities are wrong, but because such moments defy calculation.

The problem, then, is not AI per se, but how militaries encode risk into their tools and cultures. Algorithms are not neutral. They reflect the assumptions, priorities, and incentives of the institutions or personnel that design and deploy them. A force that prizes force protection above mission accomplishment will build different systems than one that rewards initiative and accepts calculated losses. Technology will amplify those preferences, not replace them.
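To make that point concrete, consider a deliberately simplified sketch in which every name and number is invented for illustration, not drawn from any fielded system. A toy course-of-action scorer contains a single risk-aversion weight, and that weight, a design choice made long before any battle, determines whether the bold option ever surfaces as the top recommendation.

```python
from dataclasses import dataclass

@dataclass
class CourseOfAction:
    name: str
    p_success: float        # estimated probability of success
    payoff: float           # operational value if it succeeds (notional units)
    expected_losses: float  # projected friendly losses (notional units)

def score(coa: CourseOfAction, risk_aversion: float) -> float:
    """Toy utility: expected payoff minus a penalty on projected losses.

    The risk_aversion weight is a design choice, not a fact about war; it
    encodes how an institution trades mission payoff against friendly losses.
    """
    return coa.p_success * coa.payoff - risk_aversion * coa.expected_losses

# Hypothetical options loosely echoing the essay's examples.
options = [
    CourseOfAction("hold and defend", p_success=0.55, payoff=40, expected_losses=10),
    CourseOfAction("withdraw", p_success=0.90, payoff=10, expected_losses=2),
    CourseOfAction("audacious charge", p_success=0.25, payoff=120, expected_losses=25),
]

# The same data, ranked for a bolder and then a more protective institution.
for risk_aversion in (0.5, 3.0):
    ranked = sorted(options, key=lambda c: score(c, risk_aversion), reverse=True)
    print(f"risk_aversion={risk_aversion}: top recommendation -> {ranked[0].name}")
```

The data never changes between the two runs; only the institutional weight does. With the weight set low the audacious charge tops the list, and with it set high the same tool counsels withdrawal. That is the sense in which an algorithm encodes the preferences of the force that built it.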

This is where our historical analogies require caveats. It is tempting to contrast heroic gambles that succeeded with hypothetical algorithmic caution that would have prevented them. But history is littered with audacious failures as well as successes: Gallipoli in World War I and Operations Market Garden and Barbarossa in World War II. In each of those gambles, boldness and the commander’s assumptions outran realistic assessment, and it would not have taken AI to see it. Survivorship bias and the martial appreciation for heroism and glory can skew perceptions of the results of audacity.

If commanders become accustomed to deferring to the machine, their risk tolerance may shrink, especially if an institutional postmortem after a failed operation cites the AI’s probabilities that counseled caution as grounds to second-guess the commander’s boldness. Over time, armed forces might drift toward strategies that are more predictable, and predictable opponents are easier to defeat. The solution is not to reject AI. Its ability to gather and process information quickly is a gift no modern commander should overlook. But we must deliberately shape doctrine, training, and command culture so that AI recommendations inform human judgment rather than replace it.

Preserving Audacity in the Age of Algorithms

The deeper issue is command responsibility. No algorithm bears moral or strategic accountability for failure. That burden rests with human commanders and political leaders. If institutions begin to treat AI recommendations as default answers rather than inputs to judgment, responsibility becomes blurred. Decisions can start to feel validated by systems rather than owned by commanders. In such environments, audacity does not disappear, but it may be quietly discouraged by the institution. This dynamic is already visible in fields beyond the military profession. Financial markets, medical diagnostics, and aviation all wrestle with the tension between automation and professional judgment. In each case, the most resilient systems are those that deliberately preserve human override, cultivate skepticism toward automated outputs, and train professionals to understand not just what systems recommend, but why.

For militaries, this implies several practical imperatives. First, AI systems should be designed to surface uncertainty, not obscure it. As a good intelligence professional might do, the AI should highlight confidence intervals, assumptions, and data gaps. Such programmed transparency would reinforce the reality that judgment is still required. Second, military education should explicitly address how to disagree with machines. Teaching officers when and how to override decision aids is as important as teaching them how to use them. Third, organizational incentives matter. If promotion, evaluation, and after-action processes punish deviation from algorithmic recommendations (even when outcomes justify it), commanders will learn to conform. Conversely, if institutions reward informed risk-taking and honest failure, audacity remains possible.
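A minimal sketch of the first imperative might look like the following, assuming a hypothetical recommendation object rather than describing any real decision aid: the tool returns its estimate as a range, with its assumptions and data gaps attached, instead of a single authoritative answer.

```python
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    """A decision-aid output that surfaces uncertainty instead of hiding it."""
    course_of_action: str
    p_success_low: float   # lower bound of the success estimate
    p_success_high: float  # upper bound of the success estimate
    assumptions: list[str] = field(default_factory=list)
    data_gaps: list[str] = field(default_factory=list)

    def brief(self) -> str:
        # Present the interval and caveats the way a good intelligence
        # officer would: the commander still owns the judgment.
        return (
            f"{self.course_of_action}: estimated {self.p_success_low:.0%}-"
            f"{self.p_success_high:.0%} chance of success\n"
            f"  assumptions: {'; '.join(self.assumptions) or 'none stated'}\n"
            f"  data gaps:   {'; '.join(self.data_gaps) or 'none identified'}"
        )

# Hypothetical example in the spirit of the D-Day weather call.
rec = Recommendation(
    course_of_action="launch during the forecast break in the storm",
    p_success_low=0.35,
    p_success_high=0.70,
    assumptions=["forecast break holds for roughly thirty-six hours"],
    data_gaps=["no direct observation of enemy reserve positions"],
)
print(rec.brief())
```

Presenting a spread of 35 to 70 percent with named caveats invites the commander’s judgment in a way that a bare go/no-go output does not.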

Technology cannot compensate for cultures that fear command responsibility or seek to offload it onto algorithms. War remains a profoundly human enterprise, shaped by will, perception, and emotion as much as by calculation, plans, and training. Clausewitz’s friction has not disappeared; it has migrated into new dimensions, including cyber, information, and machine-human interaction. In the AI age, resisting the pull of deference to machine-generated recommendations will require deliberate effort. AI may assist our ability to reason, but it cannot feel the tremor in the chest before a charge, the weight of duty that defies the odds, or the surge of courage. Audacity will not vanish because algorithms exist. It will vanish only if institutions allow judgment to atrophy behind the appearance of optimization. The challenge is not to choose between AI and audacity, but to ensure that one does not quietly crowd out the other.

The history of Marathon, Little Round Top, and D-Day shows us that some victories come only to those willing to take a fateful plunge. AI will change the character of war, but it must not strip away its art. The soul of victory has always belonged to those who dare.

Hold my beer, indeed.

Antonio Salinas is an active duty US Army officer, professor of strategic intelligence at the National Intelligence University, and a PhD student in the Department of History at Georgetown University. Salinas has twenty-seven years of military service in the US Marine Corps and Army, as an infantry officer, an assistant professor in the Department of History at the US Military Academy, and a strategic intelligence officer, with operational experience in Afghanistan and Iraq. He is the author of Siren’s Song: The Allure of War, Boot Camp: The Making of a United States Marine, and Leaving War: From Afghanistan’s Pech Valley to Hadrian’s Wall.

David V. Gioe, PhD, is a visiting professor in the King’s College London Department of War Studies and academic director of the Cambridge Security Initiative, where he co-convenes the International Security and Intelligence program. He previously served as an associate professor of history at the US Military Academy at West Point and as a history fellow with the Army Cyber Institute. He holds a PhD in politics and international studies from the University of Cambridge and is an elected fellow of the Royal Historical Society. He is a Navy veteran.

The views expressed are those of the authors and do not reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.

Image credit: Sgt. Olivia Cowart, US Army