Generative AI is poised to reshape the landscape of military operations, particularly in the battle over truth and narrative that surrounds any modern conflict. By leveraging machine learning to generate convincing synthetic media, adversaries can weaponize content that makes fact and fiction nearly indistinguishable. The death, or disputed death, of combatant leaders is a prime example of the magnitude of the challenge this emerging reality poses.

A leader’s removal from the battlespace, whether through targeted killing, capture, or even natural causes, does not guarantee the end of hostilities, but it can significantly influence the course of a conflict. Among the various dimensions of the information environment, the cognitive dimension is the one most deeply shaped by symbolism and belief. A charismatic leader can galvanize a movement; his or her removal can cause belief to waver.

Decapitation strategies must prioritize leaders whose removal will create lasting disruption—those who are both critical to operations and difficult to replace. Effective targeting focuses on high-value targets and high-payoff targets whose absence materially degrades an adversary’s capabilities. The killing of Osama bin Laden, for example, was strategically more impactful than that of his successor, Ayman al-Zawahiri, due to bin Laden’s symbolic importance and irreplaceability.

Historical examples like the bin Laden raid highlight the importance of public perception in leadership targeting operations. A missile strike on his compound was ruled out due to the potential for denial and misinformation. Although photos of his body existed, they were withheld to avoid exploitation as propaganda. While cheap fakes circulated online, they were easily discredited—something that may not be possible in the future.

As generative AI technology improves, it could erode the effectiveness of leadership decapitation. Synthetic media may allow terrorist organizations to simulate the continued presence of deceased leaders, undermining public belief in their deaths—even in the face of hard evidence like DNA. For example, the Taliban denied the death of its leader from COVID-19 to preserve cohesion and morale. Symbolism plays a critical role for nonstate actors, and generative AI can be used to artificially preserve that symbolic presence.

The case of Anwar al-Awlaki illustrates how generative AI may complicate future confirmation of a target’s death. When al-Awlaki was killed in 2011, his identity was confirmed using facial recognition, but no DNA evidence was made public and the confirmation rested solely on the word of the United States government. In a future saturated with synthetic content, such limited transparency could invite doubt and allow disinformation to take hold. Generative AI will likely raise the evidentiary threshold required to confirm battlefield deaths, making traditional methods of verification insufficient.

The implications of synthetic media manipulation are not limited to terrorist networks; they extend beyond nonstate actors into the realm of authoritarian regimes. In a state such as North Korea, AI could be used to extend the perceived rule of a leader like Kim Jong Un beyond death, preserving regime stability. Proof of life holds strategic value, both domestically and internationally, in maintaining credibility and suppressing dissent.

Recent events demonstrate that even the perception that content has been generated by AI can destabilize governments and challenge the legitimacy of leadership. One real-world warning sign came from Gabon, where a video address by President Ali Bongo, delivered after a prolonged health crisis, was widely believed to have been generated by AI, helping trigger an attempted coup. Although subsequent forensic analysis found no evidence of manipulation, public perception alone nearly led to regime collapse. Imagine the consequences of such an event in a nuclear-armed state.

Events like this highlight the growing need for military institutions to adapt to a rapidly evolving information landscape. Generative AI introduces new urgency for the US military and intelligence community, which must prepare for an information environment in which visual proof cannot be trusted. False narratives fueled by synthetic media can go viral, undermining US operations before a coherent response can be mounted. AI-enabled deception could neutralize the strategic impact of a successful strike.

In an age where seeing is no longer believing, adversaries will exploit generative AI to craft persistent falsehoods that shape global narratives. While tools such as blockchain-based content provenance may help verify authenticity today, future advances in AI may render even these safeguards obsolete. Intelligence agencies may be forced to declassify sensitive information to maintain credibility, but even that may be insufficient without dedicated information operations personnel to identify and correct false narratives.

Generative AI is not just a tactical threat; it is a strategic disruptor that challenges the foundations of belief, perception, and reality in modern warfare.

Lieutenant Colonel Matthew J. Fecteau is an information operations officer and a PhD researcher at King’s College London studying how generative AI will impact combat zones.

The views expressed are those of the author and do not reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.

Image credit: Hamid Mir (adapted by MWI)