Editor’s note: The Army Mad Scientist writing contest—Operational Environment in 2035—contributed to the Army’s understanding of a critical transition to a multi-domain force. Maj. Michael B. Kim’s winning submission, published here, describes the role of artificial intelligence in learning and concept development.
Move thirty-seven perplexed the world. Even the highest-ranking Go players were confused by the move, and some even considered it a mistake. The move was played in March 2016, during game two of the highly anticipated match between AlphaGo, a program designed by Google’s DeepMind, and eighteen-time Go world champion Lee Sedol. Like chess, many Go games follow familiar lines before the game opens up into all of its complexities. Go, an ancient Chinese game played with black and white stones on a board of nineteen by nineteen lines, is an incredibly complex game that many consider to be most effectively played through creativity and intuition. In chess, after the first two moves, there are four hundred possible next moves; in Go, after the first two moves, there are close to 130,000. The complexity of the game was not lost on the artificial-intelligence community—unlike the computer Deep Blue that beat Garry Kasparov in chess, AlphaGo displayed the power of neural networks and exposed the world to new possibilities. Although the unorthodox move thirty-seven befuddled the community in real time, it was later lauded for its creativity, innovation, and beauty, as it ultimately framed the board for AlphaGo to win game two.
Here lies the incredible potential of artificial intelligence: the power of AI is not in applications where information is quickly filtered and human decisions are confirmed, but rather in its ability to set conditions and allow the program to innovate, create, and conceptualize new strategies and possibilities without our biased and experience-based projections. The importance is not in the inputs (as it is with most previous machine-learning programs, which used a Monte Carlo simulation framework) but in setting parameters that create an environment of free play, innovation, creativity, and new conceptualizations. The Army should approach AI not as a supercomputer but as a means to explore the unknown—to create something new. Even if the experiment fails to create a holistic new concept for warfare, the process may yield several innovative approaches never considered before by Army leaders. The hope is to discover “move thirty-seven.” To this end, this paper serves to explore opportunities in leveraging neural networks, frame the problem faced by Army leaders, and recommend a new analytical framework for concept development.
There are two important concepts to understand about neural networks. First, neural pathways are strengthened each time they are used, a concept fundamental to the way humans learn. Second, neural networks can be layered, with each layer possessing different problem-solving strategies. AlphaGo took a description of the Go board as an input and processed it through two neural networks: policy and value. The policy network, consisting of thirteen layers, selects the next move AlphaGo will play. It was trained through a combination of supervised learning on records of previous human expert games and self-learning that allowed the program to play itself millions of times, developing new moves from the positions it encountered. The value network predicts the winner of the game through the evaluation of board positions. Move thirty-seven was the result of these two networks, as the program calculated a single move with the goal of winning by a single stone. The most significant aspect, however, was the ability of the program to self-play and self-learn.
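The interplay of the two networks can be illustrated with a toy sketch. Everything here is invented for illustration: the “board” is a short string, the policy prior is uniform, and the value estimate is trivial, whereas AlphaGo’s actual networks were deep neural nets trained on millions of positions.

```python
def policy_network(board):
    """Toy stand-in for a policy network: assign a prior to each legal move.
    (AlphaGo's real policy network was a thirteen-layer neural net; the
    uniform prior here simply marks every empty cell as equally promising.)"""
    legal = [i for i, cell in enumerate(board) if cell == "."]
    return {move: 1.0 / len(legal) for move in legal}

def value_network(board):
    """Toy stand-in for a value network: estimate our chance of winning.
    Here it is just the fraction of cells we occupy; a real value network
    learns its estimate from millions of self-play games."""
    return board.count("X") / len(board)

def select_move(board):
    """Combine the networks: weight each candidate move's estimated value
    by the policy network's prior and pick the highest-scoring move."""
    best_move, best_score = None, -1.0
    for move, prior in policy_network(board).items():
        candidate = board[:move] + "X" + board[move + 1:]
        score = prior * value_network(candidate)
        if score > best_score:
            best_move, best_score = move, score
    return best_move
```

A board is a string such as `"X.."` (our stones are `X`, empty cells are dots); `select_move` tries each legal placement suggested by the policy prior and returns the index the value estimate rates highest.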
In a 2017 article titled “Multi-Domain Battle: AirLand Battle, Once More, With Feeling,” defense analyst Shmuel Shmuel provides a persuasive argument that if the operational environment “has substantially changed since the 80s and ‘will demand entirely new ways of warfighting,’ then trying to replay the ‘Battle of Fulda Gap’ [with new gear, acronyms, and the addition of cyberspace] might turn out to be disastrous.” Gen. David Perkins, then the leading proponent of Multi-Domain Battle (which has since evolved into Multi-Domain Operations) as the commanding general of the Army’s Training and Doctrine Command, stated that Multi-Domain Battle draws on “time-tested principles of combined arms and the conceptual foundation of AirLand Battle.” Therefore, “Multi-Domain Battle is not unprecedented; rather, it proposes to combine capabilities in more innovative ways to overcome the challenges posed by adversaries.” Here, then, lies the problem for the Army community: to develop a future Army operating concept based on an unknown future battlefield against a future adversary that we hope will fight in a way that accommodates the enduring elements of AirLand Battle and Multi-Domain Operations. Are Army leaders absolutely certain that our current operating concept is sufficient to face this problem set?
History tells us that concept development is a time- and resource-intensive endeavor that requires extraordinary organizational energy. AirLand Battle was predicated on the notion that “the common denominator for all future warfare,” even in another world war, was the “unprecedented potential for destruction and an increased tempo of events.” Army leaders and thinkers developed this concept in the post-Vietnam era, when they observed profound changes within the US Army. There are parallels to today, as our Army transitions from over a decade of counterinsurgency operations to a decisive-action environment against a near-peer competitor. Based on lessons learned in recent conflicts (e.g., Israel in Lebanon in 2006 and Gaza in 2014, and Russia’s intervention in Crimea and backing of separatists in eastern Ukraine since 2014), Army leaders have decided that the common denominator for all future warfare will be a near-peer threat (or at least that it is better to err in this direction). With new capabilities proliferating on the battlefield (e.g., drones, advanced communication systems, electronic warfare) and in new domains, how does the Army deploy these capabilities into the battlespace? Do the principles of AirLand Battle and Multi-Domain Operations hold fast?
It is difficult to create a new concept for warfare, as subjectivity and cognitive bias prevent us from seeing warfighting objectively. As domains expand and new technologies emerge, the Army needs modernization and large-scale maneuvers that test all aspects of its warfighting capabilities. This is not unprecedented—after Nazi Germany invaded Poland in 1939, the US Army held a series of major exercises now known as the Louisiana Maneuvers (with less remembered maneuvers also taking place in other states, including Arkansas and the Carolinas). Approximately four hundred thousand troops were divided into opposing armies with a total of nineteen divisions. The war games were executed over thirty-four hundred square miles. An important result of these war games was the creation of sixteen US armored divisions during World War II. However, there were also errors in the assessment of the Louisiana Maneuvers, as can be clearly seen in the creation of tank-destroyer battalions, which were disbanded immediately after the war due to their lack of use. The war games took considerable resources and organizational energy—resources the US Army would be challenged to muster today. However, the significance of their execution was vast, and they shaped the doctrine and force structure of the Army for decades to come. AI and neural networks provide an opportunity to run the “Louisiana Maneuvers” not just once but millions of times.
A New Analytical Framework for Concept Development
Army planners fixate on the inputs, particularly the nature of future adversaries and environments. In developing a concept for future warfare, the following inputs (at a minimum) are considered:
- Past and current US Army combat experiences
- Past and current global combat experiences
- Previous US Army doctrines and the doctrines of other militaries
- Integration of current and future technologies
- Current and future adversaries
- Current and future operational environments
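The input list above could be captured as a single structured object that a simulation framework sweeps over. This is a minimal sketch; the field names and defaults are hypothetical, not an official schema.

```python
from dataclasses import dataclass, field

@dataclass
class WargameInputs:
    """Hypothetical container for the input categories listed above.
    Each field corresponds to one bullet; defaults are illustrative only."""
    us_combat_experiences: list = field(default_factory=list)      # past/current US Army experiences
    global_combat_experiences: list = field(default_factory=list)  # past/current global experiences
    doctrines: list = field(default_factory=list)                  # previous and other doctrines
    technologies: list = field(default_factory=list)               # current and future technologies
    adversary: str = "near_peer"                                   # current or future adversary
    environment: str = "open_desert"                               # operational environment
```

Treating the inputs as data rather than fixed assumptions is what lets a simulation vary them freely, one `WargameInputs` instance per scenario.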
The hope is that the inputs are correct enough (or good enough) to enable us to execute the next war. Our collective thinking about future warfare is characterized, however, by conflicting aphorisms. Is it “one thing we know for certain is that the next war will be different from the last war,” or “history doesn’t repeat itself, but it rhymes”? Army leaders are confronted with a dilemma. Some call it a “wicked problem” (a term frequently misapplied in an Army context)—a problem that fundamentally changes with new input variables, thereby making the application of a solution impossible. Army planners are unsure whether to rely on or fight against biases and historical precedent. For example, if a series of wars are independent events (like coin flips), why can’t the next war be similar to the last war? What if our mindset of erring on the side of caution and preparing for the worst-case scenario (decisive action against a near-peer competitor) is wrong? What if lessons learned from recent wars (the findings of the Winograd Commission regarding the Israel Defense Forces’ experience in Lebanon are especially poignant) turn out to lead us in the wrong direction? Is it better to prepare for the worst-case scenario instead of the lowest common denominator? Is it better to prepare for high-intensity conflict because it is easier to “scope down” to low-intensity conflicts? Predicting the character of future war is difficult, and we will most likely get it wrong. However, AI allows for a paradigm shift: Army planners and concept developers no longer need to fixate on inputs but can fundamentally reorient their focus to outputs.
In shifting the focus to outputs rather than inputs, the Army can leverage the power of AI and neural networks to test multiple input scenarios. Instead of looking for commonalities in adversarial trends, future technologies, and operational environments, we look for solutions that transcend the inputs and are effective regardless of the future adversary or environment. In other words, the advantage of AI and neural networks is the time- and cost-saving ability to run war games in numerous scenarios against any adversary. The inputs can constantly be changed, and millions of simulations run, to find commonalities in outputs. Today, we can only guess whether our current doctrine and force structure apply to both megacity and open-desert decisive-action scenarios. AI war gaming can show what similarities exist in the optimal employment of capabilities in both operational environments. We focus on the outputs rather than the inputs. Imagine war games shifting in real time, with AI running millions of iterations based on new variables. You introduce a new enemy antitank capability based on Israel’s experience in the Gaza Strip—the AI runs millions of iterations to see if the new variable fundamentally shifts warfighting applications or force structure. You can run multiple scenarios and have AI, through neural networks, apply Army capabilities to find the best combinations and formations in different scenarios.
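The input-to-output shift described above can be sketched in miniature: hold the candidate outputs fixed, sweep the inputs (scenarios), run many randomized iterations, and look for an output that wins regardless of the input. All names and win probabilities below are invented for illustration; a real effort would replace the one-line engagement model with a full simulation.

```python
import random
from collections import Counter

# Hypothetical inputs and outputs; names are illustrative, not doctrinal.
SCENARIOS = ["megacity", "open_desert", "littoral"]
FORCE_MIXES = ["armor_heavy", "infantry_heavy", "combined_arms"]

def run_wargame(scenario, force_mix, rng):
    """Toy stand-in for one simulated engagement; returns True for a win.
    The win probabilities are invented purely for illustration."""
    base = {"armor_heavy": 0.45, "infantry_heavy": 0.45, "combined_arms": 0.55}[force_mix]
    if scenario == "megacity" and force_mix == "armor_heavy":
        base -= 0.15  # assumed penalty for armor in dense urban terrain
    return rng.random() < base

def best_mix_per_scenario(iterations=10_000, seed=0):
    """Sweep the inputs (scenarios), run many iterations per candidate
    output (force mix), and record the winning mix in each scenario."""
    rng = random.Random(seed)
    results = {}
    for scenario in SCENARIOS:
        wins = Counter()
        for mix in FORCE_MIXES:
            wins[mix] = sum(run_wargame(scenario, mix, rng) for _ in range(iterations))
        results[scenario] = wins.most_common(1)[0][0]
    return results
```

With these assumed probabilities, the same force mix tends to surface across every scenario, which is exactly the kind of input-transcending commonality the framework seeks; introducing a new variable (say, an enemy antitank capability) is just another adjustment to `run_wargame` followed by another sweep.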
AI can be applied to numerous parameters and architectures. From megacity scenarios to the next world war, using AI to simulate battles is faster, more cost-effective, and most importantly, objective. The AI can fight itself in millions of battles through a wide range of scenarios and develop ways to integrate current and future capabilities. Perhaps a new construct or holistic concept is not developed—however, the program could develop new applications of capabilities that significantly influence the trajectory of modernization and concept development. The beauty is in the unknown—and perhaps we will discover a move thirty-seven: an unexpected, unforeseen way to apply our current and future technologies that gives us an advantage against our future adversaries.
Maj. Michael B. Kim currently serves as the director of the Joint Pacific Multinational Readiness Capability for US Army Pacific. He will next serve as an Army interagency fellow with the Office of Management and Budget. His prior positions include service as the brigade S3 for the 196th IN BDE and squadron S3/XO for 8-1 CAV, 2-2 SBCT. He commissioned as an officer upon graduation from the United States Military Academy in 2005, and holds a Master of Military Art and Science degree from the Command and General Staff College and a master’s degree in systems engineering from Cornell University.
The views expressed are those of the author and do not reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.
Image credit: Spc. Jonathan Wallace, US Army