US adversaries are weaponizing artificial intelligence to unleash a new wave of psychological warfare. Russia, through its troll factories and bot farms, has adopted a new AI-driven asymmetric warfare strategy, using generative models to amplify disinformation efforts on an unprecedented scale. A striking example was the AI-generated image falsely depicting an explosion at the Pentagon, which caused a rapid, albeit temporary, drop in the US stock market. This incident highlights the catastrophic potential of AI-driven propaganda to destabilize critical systems, making it imperative for the United States to adapt. While the Department of Defense’s AI Adoption Strategy is a step forward, gaps remain in training US forces to fully harness AI for information warfare and to counter these evolving threats, particularly those from Russia and China.
Russia is using AI to enhance its disinformation campaigns, particularly through the evolution of bot accounts that now produce more human-like and persuasive content. Ahead of the November US presidential election, Russian actors have sought to leverage AI to expand the scope and scalability of their influence operations, some of which specifically aim to shape public opinion of candidates, sway US electoral outcomes, undermine public confidence, and sow discord both within the United States and globally. The integration of AI has allowed Russia to monitor the information environment in real time, enabling rapid adaptation of disinformation tactics.
China’s use of AI in psychological warfare has become a key element of its strategy to shape regional and global narratives, amplifying its influence across the world. By leveraging AI to create deepfakes, automate social media bots, and tailor disinformation to specific audiences, China has enhanced its capacity to manipulate public discourse. This strategy extends beyond mere online influence; China’s AI capabilities enable large-scale cyber-enabled operations, as seen in coordinated disinformation campaigns targeting Western audiences. China’s “cognitive domain operations” merge AI with psychological and cyber warfare, aiming to deter US intervention in future conflicts or polarize American society, presenting an ever-growing challenge to global stability.
The Dangers of Doing Nothing
Failing to act against adversarial AI-enhanced information warfare poses significant risks. Russia’s and China’s ability to leverage AI to amplify their propaganda and disinformation campaigns threatens to undermine US and allied efforts across the globe. If left unchallenged, this technological edge could advance adversaries’ efforts to destabilize regions, influence elections, and manipulate public opinion with unprecedented effectiveness. The cost of inaction is high, potentially leading to a strategic imbalance favoring adversaries adept at exploiting AI for malign purposes. Imagine a scenario in which Russian and Chinese AI-driven disinformation campaigns go unchecked. The flood of false narratives could have devastating effects, eroding public trust in democratic institutions and creating confusion and division. In such an environment, the United States’ ability to project influence and reinforce stability in regions across the globe could be severely diminished. The stakes are high, and the need for a proactive response is urgent.
Enhancing Training: Adapting to AI Integration
The training focus for US military personnel, particularly psychological operations soldiers, needs to adapt to the evolving technological landscape. Soldiers must be educated on the AI tools currently available and understand how those tools can assist in analyzing the operational environment, speeding up analysis, generating content, and addressing risk concerns for commanders. Increasing AI literacy is the first step. Soldiers should understand the basics of AI, its capabilities, and its limitations. This foundational knowledge is crucial for effectively integrating AI tools into operations. Training programs should include hands-on experience with AI tools, allowing soldiers to practice using these technologies in realistic scenarios. Education on the ethical implications of AI use in military operations is also essential to ensure compliance with legal and moral standards. Given the rapid pace of AI development, training programs must emphasize continuous learning and adaptation to keep up with new advancements.
AI-Enhanced Psychological Operations
Integrating AI tools into military operations, particularly in the realm of information warfare, offers several key advantages that can enhance US military capabilities and enable psychological operations soldiers to counteract adversarial information campaigns. These advantages include enhanced analysis, speed and efficiency, scalability, and risk mitigation. AI can analyze vast amounts of data from various sources to identify trends, sentiment, and potential threats. This capability allows psychological operations detachments and teams to gain a deeper understanding of the operational environment and develop more precise messaging. Additionally, AI can generate content quickly and efficiently, which is vital in today’s increasingly fast-paced information environment.
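To illustrate what this kind of analysis could look like in practice, consider the minimal sketch below. It uses an off-the-shelf sentiment classifier from the open-source Hugging Face transformers library to produce a coarse read on audience sentiment; the sample posts are invented for illustration, and a real workflow would use a vetted model tuned to the target language and platform.

```python
# A minimal sketch of AI-assisted sentiment monitoring, using the
# default classifier from the Hugging Face transformers library.
# The sample posts are invented placeholders.
from collections import Counter

from transformers import pipeline

# Default sentiment-analysis pipeline; a real deployment would select a
# model suited to the target language and platform.
classifier = pipeline("sentiment-analysis")

posts = [
    "The new policy is a disaster and everyone knows it.",
    "Grateful for the quick response from local authorities.",
    "Nobody trusts the official story anymore.",
]

results = classifier(posts)

# Aggregate the predicted labels into a coarse sentiment summary.
counts = Counter(result["label"] for result in results)
for label, n in counts.items():
    print(f"{label}: {n}/{len(posts)} posts")
```

At scale, the same pattern, classify and aggregate, is what lets a small team turn a stream of thousands of posts into a usable picture of audience sentiment.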
Moreover, a major issue that psychological operations teams face is the expansion of their efforts across the entirety of their deployed areas of responsibility. Army psychological operations capabilities are already in high demand by geographic combatant commands, theater special operations commands, and Department of State embassy country teams. Given the high demand from multiple organizations, detachments are required to break into smaller subunits to cover extensive geographic areas and critical missions, leading to substantial challenges in managing bandwidth and scalability. AI can help overcome these bandwidth and scalability issues by streamlining content production and distribution, allowing smaller teams to support wider mission objectives, cover more ground, and engage with multiple audiences without sacrificing speed or quality. This scalability is essential for countering widespread disinformation campaigns, where timing is often a crucial factor in a message’s effectiveness.
Risk management is also significantly enhanced by AI. AI can assess the potential impact of different messaging strategies, helping commanders understand the risks and benefits associated with various courses of action. By simulating potential outcomes, AI can provide a clearer picture of the operational environment and the likely responses from adversaries and other audiences. Moreover, AI’s risk mitigation capabilities enable teams to derive actionable insights and recommendations, streamlining planning processes to better support their commands. This benefit is predicated on AI implementers being able to clearly communicate the integration process to their commanders and policymakers.
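As a toy example of what simulating outcomes might mean, the sketch below runs a simple Monte Carlo comparison of two hypothetical messaging strategies. The strategy names, probabilities, and scoring weights are all invented placeholders, not doctrinal values; the point is the pattern of estimating expected outcomes under assumed risks, not the numbers themselves.

```python
# A toy Monte Carlo sketch of messaging-risk assessment. All strategies,
# probabilities, and weights below are hypothetical placeholders.
import random

# Assumed per-strategy probabilities:
# (reach target audience, provoke counter-messaging, cause backlash).
STRATEGIES = {
    "direct_rebuttal": (0.70, 0.50, 0.20),
    "indirect_narrative": (0.55, 0.30, 0.10),
}

def simulate(p_reach, p_counter, p_backlash, trials=10_000):
    """Estimate an expected score: +1 if the message reaches its
    audience, -0.5 if it is countered, -1 if it causes backlash."""
    total = 0.0
    for _ in range(trials):
        score = 0.0
        if random.random() < p_reach:
            score += 1.0
        if random.random() < p_counter:
            score -= 0.5
        if random.random() < p_backlash:
            score -= 1.0
        total += score
    return total / trials

for name, probs in STRATEGIES.items():
    print(f"{name}: expected score {simulate(*probs):+.3f}")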
Content generation is another critical area where AI can be beneficial: it can create authentic, realistic material rapidly enough to maximize impact. AI can generate high-quality content at scale, which is crucial for countering adversarial narratives and disseminating US messaging, enabling rapid responses to adversarial propaganda. Natural language processing tools can create persuasive and contextually relevant content that resonates with target audiences. That speed is crucial in the fast-paced information environment, where timely interventions can make a significant difference.
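A minimal sketch of what AI-assisted drafting could look like follows. It assumes the OpenAI Python SDK with an API key in the OPENAI_API_KEY environment variable; the model name, prompt, and claim are illustrative only, and any real output would require human review and approval before use.

```python
# A minimal sketch of drafting counter-messaging with a large language
# model, assuming the OpenAI Python SDK and an API key in the
# OPENAI_API_KEY environment variable. The model name and claim are
# illustrative; output would require human review before any use.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

claim = "A viral post falsely claims local water supplies are unsafe."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; substitute an approved one
    messages=[
        {"role": "system",
         "content": "Draft a short, factual public-information message."},
        {"role": "user",
         "content": f"Counter this false claim for a general audience: {claim}"},
    ],
)

print(response.choices[0].message.content)
```

The value here is turnaround time: a first draft in seconds that a trained soldier refines and clears, rather than a blank page.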
Audience response testing is another area where AI can be invaluable, primarily due to its speed and efficiency. Instead of relying solely on traditional methods, AI can simulate expected audience reactions and engagement metrics based on preloaded audience characteristics, allowing teams to refine messaging strategies before wider dissemination. While this approach may not replace the nuanced, experience-based insights of a psychological operations detachment, it significantly accelerates the process, enabling multiple iterations of a message to be tested and optimized more quickly than manual methods allow, thus improving the likelihood of effective engagement.
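The sketch below illustrates that idea in its simplest form: scoring message variants against preloaded audience profiles to pick a best fit before dissemination. The profiles, message features, and weights are all invented for illustration and stand in for what would, in practice, be a validated audience model.

```python
# A toy sketch of pre-dissemination audience response testing: scoring
# message variants against preloaded audience profiles. All profiles,
# features, and weights are invented placeholders.
AUDIENCE_PROFILES = {
    "urban_youth": {"humor": 0.8, "formality": 0.2, "local_detail": 0.5},
    "rural_adults": {"humor": 0.3, "formality": 0.6, "local_detail": 0.9},
}

MESSAGE_VARIANTS = {
    "variant_a": {"humor": 0.7, "formality": 0.3, "local_detail": 0.4},
    "variant_b": {"humor": 0.2, "formality": 0.7, "local_detail": 0.8},
}

def engagement_score(message, profile):
    """Dot-product match between message features and audience weights."""
    return sum(message[k] * profile[k] for k in profile)

# For each audience, pick the variant with the highest predicted fit.
for audience, profile in AUDIENCE_PROFILES.items():
    best = max(
        MESSAGE_VARIANTS,
        key=lambda v: engagement_score(MESSAGE_VARIANTS[v], profile),
    )
    print(f"{audience}: best fit is {best}")
```

Even a crude model like this lets a team iterate through many variants in minutes, reserving detachment expertise for judging the shortlisted candidates.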
Training Proposal: Developing a Period of Instruction
To effectively integrate AI tools into military operations, a comprehensive training program is essential. This program should include essential instruction blocks covering the fundamentals of AI: basic AI literacy, how AI and large language models work, the capabilities AI can provide, and, crucially, its limitations and the concerns surrounding its use. This foundational knowledge is critical for understanding how AI can be applied in military contexts. Hands-on training should be a significant component of the program. Practical exercises that allow soldiers to use AI tools in simulated scenarios will help them become familiar with the technology and understand how it can be applied in real-world operations. This hands-on approach ensures that soldiers are not just theoretically knowledgeable but practically skilled in using AI tools. Ethical and legal considerations should also be a key part of the training; soldiers must be aware of the potential risks and ensure that their use of AI complies with any strategy documents or policy updates that dictate ethical standards for AI usage.
Continuous learning is essential given the rapid pace of AI development. Ongoing education and training are crucial to ensure that soldiers remain proficient in using AI. This could include regular updates on new AI tools and technologies, as well as refresher courses to keep soldiers informed about the latest developments in AI. Specialized training for psychological operations personnel is also necessary, given their role in challenging adversarial narratives in the information environment. Focused training on how AI can enhance the specific tasks discussed above (information analysis, content generation, and audience engagement) will equip psychological operations teams with the skills they need to effectively integrate AI into their operations. The quality of this specialized training will be greatly enhanced if it includes real-world examples and case studies illustrating the successful practical application of AI, as well as lessons learned from implementation and experimentation struggles.
Policy Updates for End-User Implementation
One of the critical solutions to countering adversaries’ AI advantage is updating US military policies to provide clear boundaries for the use of AI tools. Training on AI is fundamental, but its impact will only be maximized if the right policy framework is in place. Current policies often lack the specificity needed to guide military personnel in the ethical and effective use of AI technologies. By establishing comprehensive guidelines, the US military can empower its members to utilize AI in ways that support US goals and objectives while maintaining adherence to ethical standards. These policy updates should focus on defining acceptable uses of AI in various military operations, establishing protocols for the deployment and oversight of AI tools, and providing a framework for continuous evaluation and adaptation of AI policies as the technology evolves. Clear guidelines will not only enhance operational effectiveness but also ensure that AI use is responsible and ethical.
The risks of ignoring the AI-driven psychological warfare tactics employed by Russia and China are not just theoretical—they are already unfolding. As the operational environment continues to evolve, adversaries will continue to exploit AI to destabilize democratic systems, manipulate public opinion, and undermine US influence on the global stage. The cost of inaction is severe, as AI accelerates the scale and sophistication of disinformation campaigns in ways we are only beginning to grasp. Failure to address these tactics could lead to a strategic imbalance that weakens the United States, leaving us vulnerable to further erosion of trust in our institutions and a diminished ability to project influence and reinforce stability across the globe.
The US military cannot afford to lag behind in this critical dimension of the information environment. To preserve our national security, we must adapt now. This requires not just policy updates, but a comprehensive approach that includes advanced training, strategic AI integration, and rapid deployment of AI-enhanced operations. By embracing AI as an active component of our psychological warfare capabilities, we can outpace our adversaries, address the capacity and bandwidth issues psychological operations forces face across the globe, and be better prepared to safeguard the information environment from adversarial malign influence. This is not a future challenge—it is a present-day battle, and the stakes could not be higher.
Major Austin Coombs has fifteen years of experience in the United States Army, five of which have been as a psychological operations officer in the European theater. He is currently tasked with completing a study sponsored by the Army’s 5th Battalion, Special Warfare Training Group on how to use generative artificial intelligence to increase the efficiency and effectiveness of information operation professionals, staffs, and allied and partner forces. His next assignment will be as a company commander within 6th Psychological Operations Battalion.
The views expressed are those of the author and do not reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.
Image credit: Esercito Italiano, via Wikimedia Commons