The Russian invasion of Ukraine has dramatically shifted the way the US Army envisions the employment of unmanned systems. Unmanned aircraft systems—drones—reportedly account for two-thirds of Russia’s daily losses, surpassing all other weapon systems. First-person-view drones, small aircraft that require a human operator to essentially drive an explosive-laden system into a target, have become popular as cheap and effective weapons. As a direct result, countries around the world are scrambling both to develop systems that can defeat these drones and to adapt their own arsenals to integrate them.
To defeat the small systems most prevalent on the battlefield, forces have turned to jamming: disrupting the electronic control link between the drone and its human operator. Severing this link denies the direct connection the operator needs to guide the system along its terminal path to an intended target, rendering it ineffective.
The prominence of jamming on the battlefield exposes a fundamental problem—the vulnerability of the link between a drone and its human operator. In all but a few systems, human operators are required for flight control, terminal guidance, payload and camera operations, and navigation. To mitigate this vulnerability, autonomous systems have been fielded, though in smaller numbers. Even these systems, however, still require human input in the form of a rigid mission plan, and they cannot modify their tasks in the face of a changing mission without explicit human direction.
This is the challenge, then, for military forces like the US Army that seek to leverage the capabilities of unmanned platforms while reducing their vulnerability to jamming: creating a method to provide intent to autonomous systems. The Army already has a model for this—the doctrinal concept of commander’s intent. This is the method by which a commander gives subordinates a required end state and key tasks, and it becomes especially important when original mission parameters become untenable or communication with a higher headquarters is lost. Providing these machines with a version of commander’s intent, coupled with AI systems to parse and create machine-readable tasks, offers a method to overcome vulnerabilities to jamming with minimal human oversight. While this idea does not necessarily remove the human operator from lethal kill chains, it does allow for more flexible and redundant drone employment in the hostile environments of the future.
A New Type of War
At the beginning of the war in Ukraine in 2022, drones were a part of ground combat in limited numbers. By 2024, both Ukraine and Russia were fielding drones capable of a myriad of tasks, both lethal and nonlethal. The production of hundreds of thousands of these systems has become a top priority for both countries, with Ukraine setting up domestic mobile production shops that specialize in the efficient manufacture of the smallest versions of these systems. As traditional weapons such as manned aircraft and armored vehicles have been attrited or have become too vulnerable to employ, these systems have replaced them as the most lethal weapons of this conflict.
As each side responded by jamming adversary drones, an ever-evolving game of cat and mouse emerged, leading to the introduction of so-called tethered drones. These systems maintain a physical connection between drone and operator, sharply reducing the possibility of electronic attack on the control link. As evidence of their use, recent pictures have emerged of treetops in the contested areas of Ukraine littered with fiber-optic cables from active or previous drone flights. More recently, Ukraine has reportedly fielded semiautonomous drones capable of lethal strike to defeat Russian jamming.
Commander’s Intent: A Model for Mitigating Control Link and Human Vulnerabilities
Both the Ukrainian Armed Forces and Western militaries have identified that rigid orders and inflexible tasks create unnecessary casualties on modern battlefields. In the absence of a direct connection to a higher command, soldiers must be able to create new tasks at lower echelons to achieve the original overarching goal of the higher headquarters. In the US Army and other militaries worldwide, the overall intention for the end of an operation is communicated as the commander’s intent.
In the US Army, commander’s intent “provides a unifying idea that allows decentralized execution within an overarching framework.” To repurpose this concept for unmanned systems, one subcomponent of commander’s intent is the most important: key tasks. Key tasks are “significant activities the force must perform” to achieve a desired end state, the terminal condition of a given mission. Key tasks can include effects on the enemy force or required friendly conditions. These discrete, well-defined tasks are the most easily translated into tangible machine inputs. Other elements of intent are also relevant; some of the commander’s critical information requirements, such as the locations of particular enemy systems or friendly assets, can be preprogrammed before launch.
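What might such machine inputs look like? The Python sketch below is one minimal way key tasks and an end state could be encoded as data a drone carries at launch. It is an illustration only; the task categories, target class names, and TAI identifiers are assumptions invented for this example, not drawn from any fielded system.

```python
from dataclasses import dataclass, field
from enum import Enum


class TaskType(Enum):
    """Illustrative task categories drawn from common drone mission sets."""
    RECONNOITER = "reconnoiter"
    LOCATE = "locate"
    REPORT = "report"


@dataclass
class KeyTask:
    """One discrete, machine-readable key task from the commander's intent."""
    task_type: TaskType
    area_of_interest: str   # e.g., a TAI identifier
    targets: list[str]      # object classes from the high-payoff target list
    priority: int           # execution order if the control link is lost


@dataclass
class CommandersIntent:
    """End state plus the ordered key tasks needed to achieve it."""
    end_state: str
    key_tasks: list[KeyTask] = field(default_factory=list)


# Example: preloaded before launch, executed only if the operator link drops.
intent = CommandersIntent(
    end_state="Enemy air defense systems in TAIs 1-2 located and reported",
    key_tasks=[
        KeyTask(TaskType.RECONNOITER, "TAI-1", ["sa-22", "radar-vehicle"], priority=1),
        KeyTask(TaskType.REPORT, "launch-point", [], priority=2),
    ],
)
```

The point of such a structure is that each field is closed-ended: the machine never interprets free-form guidance in flight, only discrete tasks a human approved before launch.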
Aerial Reconnaissance: A Case Study
Key tasks can take many forms, some of which are not easily translated into discrete tasks for a machine. As a case study, however, we can examine aerial reconnaissance, a common drone mission set, to explore the possibilities of using this information. Consider a fictional scenario in which a ground force is tasked to search for and destroy key enemy systems, identified in earlier planning and listed in order of importance (e.g., a high-payoff target list). These systems are assessed to be operating within the unit’s battlespace, in specific areas called target areas of interest (TAIs). The unit has been assigned the tasks of finding these systems and producing fire missions for subordinate artillery or other fires platforms.
The ground unit intends to use its assigned small aerial drones, each equipped with a camera and potentially a lethal payload. The unit assumes a position near the expected location of the targets and launches the drones. However, due to jamming and the lack of a physical tether, these systems are quickly isolated from their human operators. The operators can no longer issue flight control commands or operate the drones’ attached cameras or weapon systems. Fortunately, the drones are equipped with a method to respond to such a condition: they have been preprogrammed to revert to the mission’s key tasks if their human counterparts are no longer available.
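In software terms, this fallback can be as simple as a watchdog on the operator link. The sketch below continues the data structure from the earlier example; the timeout value and the message interface are assumptions made for illustration.

```python
import time

LINK_TIMEOUT_S = 5.0  # assumed threshold; a real system would tune this value


class LinkLossMonitor:
    """Falls back to preloaded key tasks when the operator link goes silent."""

    def __init__(self, intent):
        self.intent = intent  # a CommandersIntent from the earlier sketch
        self.last_heartbeat = time.monotonic()
        self.autonomous = False

    def on_operator_message(self):
        """Called whenever any packet arrives from the human operator."""
        self.last_heartbeat = time.monotonic()
        self.autonomous = False  # operator regains control on link restore

    def current_tasking(self):
        """Return preloaded key tasks if the link is lost, else defer to the operator."""
        if time.monotonic() - self.last_heartbeat > LINK_TIMEOUT_S:
            self.autonomous = True
        if self.autonomous:
            return sorted(self.intent.key_tasks, key=lambda t: t.priority)
        return None  # live operator commands take precedence
```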
The drones move to their preprogrammed TAIs in sequential order. Each is equipped with a modern aided target acquisition system, using AI techniques like advanced object detection to scan incoming video frames for a particular set of objects, such as the enemy systems identified on the high-payoff target list. If the enemy has also degraded the drones’ ability to locate themselves using GPS, as is often the case in Ukraine, they can do so using onboard inertial measurement units and reverse mapping of their environment (e.g., simultaneous localization and mapping, or SLAM, a common technique for localizing mixed reality devices). This is roughly analogous to the dead reckoning the US military teaches in land navigation.
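The filtering step is simple once a detector exists. In the sketch below, `detect_objects` is a hardcoded stand-in for a real onboard detector (a YOLO-class model, for instance); the class names and confidence threshold are assumptions for illustration.

```python
# Stand-in for a real onboard detector; a fielded system would run a trained
# network (e.g., a YOLO-class model) against each incoming video frame.
def detect_objects(frame):
    """Return (class_name, confidence, bounding_box) tuples for one frame."""
    return [("sa-22", 0.91, (120, 88, 210, 170))]  # hardcoded for illustration


HIGH_PAYOFF_TARGETS = {"sa-22", "radar-vehicle"}  # illustrative class names
CONFIDENCE_FLOOR = 0.8  # assumed threshold to limit misidentification


def scan_frame(frame):
    """Keep only detections that match the high-payoff target list."""
    return [
        (name, conf, box)
        for name, conf, box in detect_objects(frame)
        if name in HIGH_PAYOFF_TARGETS and conf >= CONFIDENCE_FLOOR
    ]


print(scan_frame(frame=None))  # -> [('sa-22', 0.91, (120, 88, 210, 170))]
```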
If, at these TAIs, the drone’s aided target acquisition system identifies an object or position that is classified by the same name as one or more of the items on its high-payoff target list, the system can produce a military grid coordinate based on its estimated location (assuming a GPS-denied environment) and a rough trigonometric transformation. All of this, however, is useless if the target information cannot be translated into a fire mission. The drone now requires a connection to its human counterpart to complete its assigned mission. Using the same inertial navigation from before, the drone can move toward the initial area where jamming severed its control link, and along the vector toward its operator, until such a link is reestablished and the target location can be transmitted. The drone is then retasked with the same intent or a new version as needed.
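The geometry behind that coordinate estimate is straightforward. The sketch below shows a deliberately simplified flat-earth version: the drone’s own position is assumed to come from the inertial/SLAM localization described above, and the conversion of the resulting local offsets into an actual military grid reference (which requires a datum and a grid library) is omitted.

```python
import math


def estimate_target_position(drone_east, drone_north, altitude_agl,
                             heading_deg, depression_deg):
    """Flat-earth estimate of a ground target's local (east, north) position.

    Inputs are assumed to come from onboard sensors: the drone's position in
    a local frame, its height above ground, and the camera's heading and
    depression angle (which must be greater than zero to intersect the ground).
    """
    # Horizontal ground distance from drone to target.
    ground_range = altitude_agl / math.tan(math.radians(depression_deg))
    heading = math.radians(heading_deg)
    target_east = drone_east + ground_range * math.sin(heading)
    target_north = drone_north + ground_range * math.cos(heading)
    return target_east, target_north


# Example: drone 400 m above ground, camera 20 degrees below horizontal, due east.
print(estimate_target_position(0.0, 0.0, 400.0, 90.0, 20.0))
```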
Technical Challenges
The underlying technical capabilities needed to adopt commander’s intent for drones exist, but they are not yet combined in a single military unmanned system. Ukraine has already fielded a lethal, autonomous version of such a system; while its full capability is unknown, open-source reporting shows that a weaponized, drone-borne object-detection capability is already a reality. Encoding written intent as machine-readable tasks is possible as well, and it is necessary to extract the key tasks and end state from a written commander’s intent. Existing research has already shown that natural language processing can extract the semantics of written intent and translate it into its component elements and tasks, though beyond this basic and general research, little work focuses on a military context. Finally, localization and navigation from known points without the aid of GPS satellites is an active field of research, but no published work yet pairs this technique with object detection and aided target acquisition. That pairing is required for autonomous operations in the GPS-denied environments that exist today and will likely persist in the future. Synchronizing this existing work with an eye toward military use cases is still necessary to fully implement intent in these machines.
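To make the natural-language step concrete, the toy sketch below parses a fragment of written intent into task tuples using simple keyword rules. The research cited above uses trained language models rather than keyword matching, and the verb list here is an assumption chosen only to show the shape of the input and output.

```python
import re

# Illustrative verb-to-task mapping; a fielded system would use a trained
# natural language model rather than keyword rules.
TASK_VERBS = {
    "reconnoiter": "RECONNOITER",
    "locate": "LOCATE",
    "report": "REPORT",
    "destroy": "DESTROY",
}


def extract_key_tasks(intent_text):
    """Pull (task, object) pairs from free-text commander's intent."""
    tasks = []
    for sentence in re.split(r"[.;]", intent_text.lower()):
        for verb, task in TASK_VERBS.items():
            if verb in sentence:
                # Treat the rest of the clause as the task's object.
                obj = sentence.split(verb, 1)[1].strip()
                tasks.append((task, obj))
    return tasks


print(extract_key_tasks(
    "Locate enemy air defense systems in TAI 1. Report all grid locations."
))
# -> [('LOCATE', 'enemy air defense systems in tai 1'),
#     ('REPORT', 'all grid locations')]
```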
Ethical Implications
The fictitious drone in the scenario above was not required to use its lethal payload to destroy the target it located. Had such a need arisen, existing policies would govern the use of force by autonomous systems (or, in our case, a semiautonomous system operating in an autonomous mode). In the United States, current policy dictates that these systems be designed to allow their operators to “exercise appropriate levels of human judgment over the use of force.” Using lethal force without explicit human approval remains a matter of debate and concern worldwide, raising questions about the uncertainty of identification, especially in civilian-populated areas. Nonlethal drones, however, or lethal drones undertaking an exclusively nonlethal mission, are far less problematic. In our example, unless the drone’s key task was to destroy the identified target on the high-payoff target list, direct and lethal force was never required. Indirectly, however, the system relayed coordinates, likely resulting in lethal action taken by other units.
While ethical discussions surrounding this selective autonomy are likely to involve slippery slope arguments, countries like Ukraine and Russia have already begun developing and fielding autonomous systems. If more countries follow suit, the question of how to guide and safeguard autonomy will remain. Specific, closed-ended tasks, like those already given as part of the commander’s intent, are an excellent way to mitigate these concerns and keep humans in the kill chain.
While such operation is technically feasible, current unmanned military systems are generally not designed to function without a direct human connection in dynamic environments. As more unmanned platforms with increasingly autonomous capabilities are developed, the challenge shifts from a technological one to one of employment techniques. Commander’s intent, a well-known and tested method of issuing guidance in uncertain and communications-restricted situations, is a promising avenue for solving this employment challenge. By predeploying key tasks and end state in the form of discrete instructions, and by augmenting existing drones with autonomous navigation and aided target acquisition systems, US Army units will be best positioned to leverage drones’ unique capabilities to accomplish missions in the jamming environments of future battlefields.
Lieutenant Colonel Matthew Corbett, PhD, is a military intelligence officer and research team leader at the Army Cyber Institute at West Point, NY. He has extensive experience in tactical intelligence operations, targeting, and human-machine integration.
The views expressed are those of the author and do not reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.
Image credit: Spc. Jennifer Posy, US Army
