In January 1991, coalition forces dismantled Iraq’s command-and-control network with remarkable speed. That success did not rest on a single breakthrough technology or superior platforms. It rested on something more decisive: shared understanding across organizations, functions, and national boundaries. Leaders and staffs had unified mental models—rooted in doctrine, institutional experience, and an understanding of the problem—enabling disciplined initiative and decentralized execution without constant coordination. The result was operational coherence at speed.
Three decades later, leaders must determine how to maintain shared understanding as artificial intelligence reshapes how organizations sense, decide, and act in joint operations. AI accelerates the collection, analysis, and dissemination of information. But as the joint force integrates AI tools and seeks to leverage these unprecedented advantages, the goal is not adoption alone but ensuring that speed produces coherent action rather than divergence. Data flows continuously. Decisions are pushed closer to the tactical edge. Yet research and operational experience suggest that speed alone does not improve outcomes. AI can accelerate error, amplify disagreement, and reinforce misalignment when trust in the machine outpaces shared understanding among human decision-makers. Leaders must therefore govern these processes and provide frameworks that help their staffs keep shared understanding intact while moving at machine speed.
Speed and Visualization Do Not Equal Shared Understanding
Organizations often assume that dashboards, real-time analytics, and AI-enabled decision aids naturally create shared understanding. This is a misconception. These tools improve visibility. That is not the same thing. Situational awareness—knowing what is happening—is distinct from shared understanding, which includes agreeing on what it means and how to respond. Teams can observe the same data and reach incompatible conclusions because they operate under different assumptions, authorities, incentives, and mental models. AI can accelerate this divergence if not carefully managed. When interpretation differs, machine-speed analytics do not resolve friction; they sometimes amplify it. AI systems shape attention, filter information, and weight inputs. If poorly framed, they can steer users away from the commander’s intent or the original operational problem, quietly degrading shared understanding across echelons.
Many AI investments focus on accelerating existing processes (automating reports, compressing staff cycles, and speeding analysis) without first establishing a common frame of reference. The result is faster execution of misaligned decisions: tempo without coherence. Shared understanding exists when commanders, staffs, and partners interpret information in compatible ways, understand one another’s constraints, and can anticipate actions without continuous direction. Shared understanding is not about controlling decisions; it is what makes decentralization safer and more effective. This is foundational to mission command. It cannot be produced by dashboards or software alone. It must be cultivated deliberately by leaders.
AI adoption therefore requires leaders to be actively engaged in how these tools are integrated and used. Without clear guidance on how AI-generated insights inform decisions, organizations risk layering advanced tools onto unresolved organizational friction. Leaders must teach their teams the importance of maintaining shared understanding when collaborating with AI tools, and shared decision frameworks can help. For example, asking a generative or agentic AI system, “What is the best course of action?” bypasses the critical steps of framing the problem, identifying assumptions, assessing risk, and aligning with commander’s intent. Without disciplined, step-by-step decision processes, AI can unintentionally shape the problem in the wrong direction, undermining shared understanding across the force.
Consider a scenario in which an adversary declares a three-day maritime exclusion zone near a strategically significant island chain in the Indo-Pacific following a political crisis. Commercial shipping diverts; allies request US support. The US president must respond, and the combatant commander must provide options on whether to contest the exclusion zone, signal presence without entry, or remain outside the zone while applying pressure in other domains. Before any analytical tool is employed, the decision must be framed clearly: What political objectives are at stake? What risks of escalation are acceptable? What messages are intended for allies, adversaries, and regional populations? If this framing step is truncated by overreliance on AI-generated analysis, the process of developing shared understanding across the joint force is degraded before courses of action are even considered.
This initial degradation has cascading effects. An AI system may generate technically sound courses of action that are poorly interpreted by joint partners because the underlying strategic problem was not framed collectively. In such cases, recommendations may implicitly reflect the assumptions, priorities, or operational logic of a single service or functional community, rather than a genuinely joint perspective. Moreover, AI systems can inadvertently reinforce bias, assumptions, and service-specific perspectives because their training data may lack the framing, diversity of viewpoints, and operational depth required to support the decision. The result is not faster or better decision-making, but increased friction, reduced coherence across the joint force, and a higher risk that AI-enabled planning amplifies, rather than mitigates, existing seams in interservice integration.
Organizations often assume shared understanding will emerge naturally as information flow improves. In practice, faster flow can expose deeper interpretive fractures. Different services optimize for different metrics under different doctrinal constraints. When AI accelerates analysis without reconciling these differences, it can produce unrealistic courses of action. Leaders must pursue speed alongside shared understanding, and they should use AI to surface, reconcile, and standardize how their organizations understand the problem before acting. This requires intentional design: shared definitions, agreed assumptions, explicit tradeoffs, and clear boundaries for decentralized execution. In a joint, all-domain construct, speed with shared understanding enables coherent action across air, land, sea, cyber, and space, improving the ability to manage decentralized operations in a dynamic environment and to create multiple dilemmas for an adversary.
What Can Help Leaders Integrate AI Tools
Leaders integrating AI into command-and-control and decision processes should ask three questions:
- What assumptions does this system make visible? AI tools are powerful mirrors. Use them to expose where teams disagree about reality, constraints, and risk—not just to produce faster outputs.
- Where does interpretation diverge across the force and between partners? Identify recurring friction in terminology, metrics, authorities, and decision rights. Resolve these deliberately rather than adding more data or automation.
- What decisions can be decentralized safely once understanding is shared? Shared understanding enables disciplined initiative. Without it, decentralization increases operational risk.
AI’s strategic value lies not in automating decisions, but in enabling leaders to align interpretation at scale. When alignment exists, organizations can increase tempo without sacrificing coherence.
In volatile, high-speed environments, advantage does not come solely from processing information faster than an adversary. It also comes from enabling units to act independently yet coherently without waiting for constant direction. Artificial intelligence can support this objective, but only if leaders treat shared understanding as a precondition, not a byproduct. Without that clarity, AI can become a force multiplier for confusion rather than a source of combat advantage. The organizations that succeed will not be those with the fastest AI tools, but those whose leaders understand that machine speed demands shared understanding to preserve unity of effort and operational coherence.
Richard L. Farnell is a US Army officer with operational command experience and strategic-level service, including an assignment in the Pentagon and executive support to senior leaders during crisis. His research and writing focus on strategy, leadership, and the responsible integration of artificial intelligence into plans and operations.
The views expressed are those of the author and do not reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.
Image credit: Staff Sgt. Zachery Jockel, US Army National Guard

