What’s the future of warfare? If you’re a science fiction fan (or just like thinking crazy thoughts) you’d say it’s robots, lasers, and hoverboards transporting through space and time to deliver lethal effects on behalf of a government. Or is it? There are often two schools of thought. One side looks at the possibilities based on current technologies and ways of war extrapolated to logical conclusions. The other side usually addresses the current and projected inhibitors to change, adhering more to what is probable than what is possible.
As can be expected of two classically trained military officers, even of different nations, the authors acknowledge and do not disagree with the Clausewitzian thought that the enduring nature of war is a clash of wills between two or more groups of politically motivated human beings. What the authors address in this essay is the ever-shifting character of war—the changing tools and techniques employed on the battlefield to achieve success in that clash of wills.
Perceptions of Future Warfare: The Realm of the Possible
Throughout the history of war and warfare, “Warfare bears the characteristic, even defining, stamp of violence.” This can take the form of the threat of physical destruction or its actual use. Granted, this statement borders on being too deterministic, but the complexities of war mean that sometimes generalizing its character can facilitate a better understanding of its evolution.
What is the next threshold?
With that caveat in mind, war essentially started with tribes killing each other with spears. Then cities were built and humans found novel ways to kill their enemies with swords and arrows. What followed were states that tried to kill their enemies with the aid of guns and explosives. Then the industrial revolution hit. What resulted? War technologies became a business. This business brought tanks, planes, and missiles onto the battlefield. It was at this point that human civilization hit a threshold with the advent of nuclear weapons: a weapon that, arguably to this day, requires resources to create, maintain, and employ that only a state can bring to bear. Now what? What is the new threshold in our current post-industrial age, the so-called fourth Industrial Revolution, where we see the convergence of physical, digital, and biological technologies? Some argue that information is the weapon and cyber technologies will define the next war. If this is the case, then this threshold arguably can be accessed by state and nonstate actors alike, including multinational private companies. It is this new threshold that will aid us in describing the possibilities for the future of conflict.
The future operating environment
Before we dive into what this conflict might look like, we need to understand what the future operating environment might look like. Both of the authors have occupied recent positions in their armies that included the opportunity to see what many militaries consider as the future of conflict. In the case of the Australian Army, the Australian Army Future Land Warfare Report 2014 defined six clear characteristics of the future operating environment:
1. Crowded — Rapid population growth and an aging global population will strain resources, especially within the Pacific region. Megacities will challenge security forces, and the balance between humans and machines in the workplace will challenge social norms.
2. Connected — Militaries will have to plan for backup digital technologies, as they cannot rely on digital networks at all times. Access to rapid prototyping allows technological enhancements to occur anywhere, quickly.
3. Lethal — Nonstate actors will have access to lethal technologies previously used only by state forces. Due to the size and scale of conflicts, dispersed headquarters and decentralized logistics will reduce cyber exposure. Dual-use technologies will challenge detection and identification.
4. Collective — Militaries will need to work with partners and other services as engagement and relationships are just as important as the force itself.
5. Constrained — Budgetary pressures will continue, and the public will continue to want to be informed on military matters.
6. Convergence — The convergence of technologies will challenge forces, and the ability to work as a joint force integrating all facets of national power will be key to success.
These trends, while not in the same words, are echoed in concept documents for most Western armies, including the US Army’s assessment of the changing character of warfare and the US intelligence community’s global trends document.
When assessing the future land concepts of the United States, United Kingdom, Canada, and New Zealand, a common thread started emerging—information. The volume, scale, accessibility, contestability, and weaponization of information are all factors in the future of conflict. Another theme also emerged: the lethality of future conflict. So what makes information a key factor, and what increases the lethality of conflict, given that previous conflicts have both used information and been lethal?
Information
The sheer volume of information available to a military commander making decisions to target, influence, protect, and win a conflict will be both a blessing and a hindrance. Big data will allow the commander to have the information required to make decisions—if the commander and his or her staff can analyze it. And it’s not just about the ability to analyze it, but also to be able to do so faster than the enemy. Therefore, decision-making technologies enabled by artificial intelligence and quantum computing will help a commander decide during conflict, and they will do so fast. It is this tempo that increases the likely lethality of conflict. These decision-making systems will also leverage autonomous and semi-autonomous systems. Most militaries agree that a human will need to be in the loop to make key decisions, but what if we had a human and machine decision-making team?
Imagine a scenario where a commander has been making decisions over a long period of time and the machine, which can read the human’s biological responses, recognizes this lack of sleep and, due to the human’s inability to think clearly, overrides the commander’s decisions. Whom do we trust more, a tired human or a machine? Does this trust change if it involves a decision leading to the loss of life? Some argue that the human in the decision-making loop gives us compassion and ethics behind decisions (not to mention accountability). However, Microsoft has been able to teach AI bots rules such that the bots then learn and evolve around these rules. Could this be translated to ethics? Could a machine therefore be more reliable in making not only legal but ethical decisions in the future? Could machines have feelings, or at least think they have feelings?
Information can also be weaponized, as can be seen daily with fake news. Could machines determine what is fact and what is fake news? Disinformation is being used by the Russians in national politics and in their incursions into Ukraine, while tweets from world leaders can escalate tensions in tricky international situations. It is this connectivity, and the accessibility of information not just to military forces in conflict but to anyone with a smartphone and access to the internet, that means information can be contested, skewed, and used for military advantage at a speed and scale, and in ways, that state-based forces haven’t exploited. It is this information that can have both lethal and nonlethal consequences in future conflict.
Lethality
Weapons have continued to evolve to the point where they have the ability to target within only a few meters. At the moment, this precision is reliant upon knowing where your target is and ensuring your intelligence assets are tracking, identifying, and confirming the target. Humans weigh estimates of the collateral damage, weapons engineers tailor the weapon systems to deliver a desired effect, and only when legal and operational approval has been gained can the target be prosecuted. Conceivably, we could automate this process, utilizing machines to analyze and strike a desired target. However, there are possibilities for even more precision than automation. For instance, we could employ nano-bio weapons; imagine the ability to pick the DNA of your target. A weapon could be created to target that DNA. You could cloud entire cities with this weapon and not kill everyone, just your enemy. It doesn’t get more precise than this. However, what are the ethical issues and challenges with this? Will the Western world’s appetite for chemical and biological weapons change in future conflict as other forces challenge the rules-based global order we seek to uphold?
Precision is not the only technological effect that will increase lethality. The reach of weapon systems will also be a challenge. North Korea’s missile program proves that, even with limited access to technology, one can build missiles with intercontinental reach. Will this force superpowers to overturn past treaties ensuring the non-militarization of space and instead consider its weaponization, allowing for global reach at the push of a button? Or what if you didn’t need to actually put weapons in space, but rather just needed the ability to bump someone’s satellite off course, affecting autonomous or semi-autonomous systems on earth?
It’s not just weapon systems that will provide reach. Technology-enabled logistics will give expeditionary forces reach in ways we haven’t previously fathomed. Automated logistic resupply through drones, missiles, or robots could keep combat forces resupplied for longer while also reducing vulnerable resupply chains. These systems can be linked to individual combat forces to resupply only when required or when triggered by an event, almost removing the human entirely from logistic chains.
Additionally, mass in the form of robots, drones, weapons, and information will make the battlespace cluttered and contested. When we build on these systems with autonomous weapons, the lethality can be increased. However, these weapon systems can have a range of nonlethal and lethal effects, and can be precise or imprecise. When we talk about nonlethal effects, we can imagine the use of the electromagnetic spectrum to establish gates or fences that disable any electronic system that passes through them, or landmines that emit a pulse that disables all networked systems. Additionally, laser-based weapon systems could blind either personnel or optic sighting and guidance systems, or burn and destroy ordnance or materiel. Then there is the use of incapacitating biological and chemical agents that could render the human force ineffective, or have lethal effects.
Mass in the form of imprecise autonomous weapons could see deployable “boxes of missiles.” Essentially, a box can be dropped in an area, digitally camouflaged to its surroundings, and left to lie dormant until triggered. Once triggered, it can deploy multiple rockets or missiles onto a target, providing a mass of firepower to delay or destroy an enemy. Imagine these scattered across an area to defend borders, or to deter movement within an area of operations. This type of mass is also cheaper and therefore easier to produce at scale. Technology proliferation through automated design and additive printing will mean systems that were historically produced by specialized companies with industrial machines can be mass produced using a photo, drawing, or captured weapon and—now here is the scary part—by anyone. This will make many military weapons globally accessible.
The lethality of the individual human is another aspect. Areas of human enhancement—both physical and cognitive—will give human soldiers an advantage unimaginable by current standards. When physical human enhancement is conceptualized, the development of exoskeletons to help soldiers carry loads is commonly envisaged. Soldiers will be able to travel further and carry more, giving forces access to areas that are environmentally difficult for air or land systems to reach, and potentially allowing them to arrive undetected or slip under the detection threshold. There’s also scope for human enhancement through cyborg technology. Could we implant x-ray vision or even enhance regular vision, thereby eliminating one physical discriminator for military service? Or could you replace a biological brain with a computer to help with decision making for soldiers in heavily analytical roles within the force? Could soldiers have limbs replaced with electro-enhanced limbs that provide them strength while ensuring they will not get injured? Now, ethically, does the soldier have a right to choose what enhancements they have, or does the job specify what they need for their period of service? Then there is access to pharmaceuticals to give soldiers mental and physical advantages. These all raise many ethical and legal issues about what we currently believe is acceptable to impose on an individual; for example, we currently accept that mandatory vaccinations or prophylaxis may be administered. In the future of war, could the ethical objections to enhancements be overturned due to the overwhelming advantage they would give a military force seeking to fight and win?
While all these possibilities of a future war are exciting—in the way a science fiction movie pulls in a crowd and convinces them that we’ll have hover tanks in no time—they are also terrifying. Do we really think that as humans we have evolved enough to push what we have heretofore considered ethically impossible to the forefront of war technologies? History has proved that while we can dream of possibilities like robots taking the brunt of conflict, the probable outcome of war is not as futuristic as we may dream.
Reality of Future Warfare: The Realm of the Probable
There is no doubt that, as discussed above, there are powerful technologies—and ingenious applications of them in the physical and informational space—that portend changes to the future of conflict. However, there are inherent trends in the security space that will inhibit anticipated, and even desired, elements of the future, particularly when it comes to conflict. Three trends in particular bear understanding: first, the future will never be as “futuristic” as we dream, yet will be dramatically different than today; second, warfare is evolutionary; and third, warfare is constrained by its inputs.
In many ways, the future of conflict is most effectively provided to us through science fiction—whether of the written or the recorded variety. For those in the military, these include such stalwarts as Robert Heinlein’s Starship Troopers, Orson Scott Card’s Ender’s Game, Joe Haldeman’s The Forever War, and more recently Pete Singer and August Cole’s Ghost Fleet. Each of these weaves together social, technological, and military trends that will influence the future of conflict. However, where such future projections and the realities of war meet, there’s usually a gap—the future is rarely as “futuristic” as we believe it could be. That said, particularly when it comes to technology, change is frequently quick, meaning that while our projections overshoot the mark, future conflict will be different than what we experience today.
A quick example illustrates this. One of the authors grew up enamoured with a series of books by David Drake, beginning with the novel Hammer’s Slammers, published in 1979. In this book, a mercenary tank regiment fights in various conflicts to support honorable causes and, of course, for money. What is interesting is how the tanks were described—they were hover tanks manned by two crewmen, assisted by a basic artificial intelligence system, and with a main gun that shoots a sort of plasma round that can travel hundreds of kilometers.
Now compare this 1979 projection of future armored technology to the main battle tank in use by the US Army that same year—the M-60 “Patton.” This tracked vehicle had a very basic ballistic computer that aided in firing solutions, but otherwise was completely mechanical and could accurately shoot to about 2,000 meters. Today, we have the M1A2 SEP Abrams main battle tank. Almost four decades after Drake imagined hover tanks, this is—and will remain for the foreseeable future—the heavy land asset for the United States . . . and Australia. It has a completely digital system, advanced targeting capability for all its weapons systems, and is capable of accurately hitting targets out to about 4,000 meters. So, this is not the leap-ahead technology envisioned by Drake thirty-nine years ago. However, it is far beyond the technology available in Drake’s own time—so while we can anticipate significant changes that will shape the future of conflict, they will not be at the outer edges of our imagination.
Why is this? Because warfare is evolutionary. The application of technologies and concepts for military operations requires thought, development, experimentation, and ultimately employment. First must come an understanding of what military force will be used for, and how. Then extant technologies and those on the cusp of creation must be tested to see if they meet military requirements. Then, units must be provided the equipment for experimentation, creating an iterative approach to improvement, but also to test warfighting concepts that employ the technology to best effect—technology naturally changes faster than individual and organizational capacity to utilize it. Finally, an equipping plan must provide the capability to the force—not to mention the education and training required for military personnel. So it’s easy to imagine how a technology or idea might affect warfare; it’s much harder to get it into production and implementation.
A great example is the drone. Practically everyone has some sort of remotely controlled platform these days—even private individuals have relatively inexpensive drones they use to race and do tricks. You can see people use them to capture their antics on surfboards or take pictures of them rock climbing (among many other activities). And of course, our militaries continue to develop different ways to employ them. However, while we think of drones as a contemporary and futuristic technology, the first recorded use of an unmanned aerial vehicle to conduct an attack was the “Austrian balloon” in 1849, which was essentially a bomblet attached to a balloon that was released over a target via an attached copper wire and a battery. Of course, there are light years of technological development between the Austrian balloon and today’s remotely piloted aircraft. But the point here is that such technology evolves over time, and more importantly, the innovations that marry technology to military effect are dictated by more than simply a new technology or innovative idea.
And that brings us to our final point. Future conflict is constrained by its inputs. Conflict is inherently a human endeavor—if you’re disposed to Thucydides that means it is driven by fear, honor, or interest. If you’re more partial to Clausewitz, war is an instrument of politics to achieve a desired policy. In both cases, the choices made by policymakers and the tools available at the time of a conflict are entirely driven by the authorities, funding, time, and innovative minds already extant in the system. “Future conflict” is executed through contemporary means.
The current inputs to the system that will drive the character of conflict in the near term include:
- Underinvestment in military forces overall;
- Overinvestment in political rhetoric;
- Reduction in the size of militaries, along with specialization (and increased expense) of military technology;
- Increased expectations of quick wars due to the lethality of the battlefield;
- And a decrease in societal support for war (while there is also a paradoxical increase in the ambivalence toward the use of military forces abroad).
Match these to the increasing trend in the use of area-denial and machine-learning/AI technologies and you have an overall strengthening of the strategic (and operational) defense against the offense. Warfare today and in the future (at least against near-peer or peer adversaries) will again be about geographic positioning, interior lines, and the manufacturing base. Modern precision munitions, niche technological solutions, and first-echelon forces will indeed result in lethality unseen on the battlefield in decades and projected in today’s news reports and defense papers. However, once all those expensive, manufacturing-intensive munitions and technologies have been destroyed or degraded—which will be measured in weeks, not years—we’re back to late twentieth-century warfare; think Korea 1953 or Desert Storm 1991, not Edge of Tomorrow or Star Wars Battlefront.
So, acknowledging that the dynamics of future conflict will inhibit the employment and use of high technology beyond the opening weeks of a war, we must understand that while we may dream big, expecting a future war of fast and lethal robots and automated weapon systems that exploit quantum technologies and process information faster than we can currently fathom, the reality is that these technologies may not endure for the length of the conflict. This means that, in the end, war is still about humans and endurance—in terms of the physical, materiel, and national will. Even in an era where high technology continues to dramatically shape the character of warfare, our men and women—our military institutions—must maintain control of decision making and ably apply our professional ethics to do what we have always done: maneuver and exploit our enemy’s weaknesses to achieve our nations’ interests.
Maj. Jasmin Diab is an officer in the Australian Army and Maj. Nathan K. Finney is an officer in the US Army. The views expressed in this article are those of the authors and do not reflect the official policy or position of any agency of either the Australian or US governments.