Author’s note: This is the fourth in a series of articles about the profession of arms. Over the series, I will chart the modern development of our profession in the nineteenth and twentieth centuries, examining that development through the lens of four themes that have driven and influenced it: events, technology, ideas, and institutions. I will then examine how change in the strategic environment will drive continued evolution in the profession of arms. Importantly, I will propose areas where we, as members of this profession, must lead change and ensure our military institutions remain effective—at every level—into the twenty-first century.
You can also read the first, second, and third articles in the series.
In the wake of World War II, the profession of arms had to respond to a new era of security challenges. This included drastic reductions in the size of military forces and new missions such as the police action in Korea. At the same time, major developments were taking place in technology. Nuclear weapons, space surveillance systems, and missile technologies leapt ahead. In the 1970s and ‘80s, precision and stealth technologies evolved. In the 1990s, computers became more widely available, more powerful, and cheaper to acquire. Finally, the internet became widely accessible to businesses and the public. These technological breakthroughs, along with the geopolitical imperatives of the Cold War, drove ongoing transformation in the profession of arms.
To understand the magnitude of change in the profession of arms after World War II, we must explore the era's new technologies. These technologies underpinned a significant evolution in how military institutions, and the profession of arms, thought about and conducted military operations.
New Technologies for a New Era
In the post–World War II era, the military played an important role in driving technology development. This was different from previous revolutions: in the First and Second Industrial Revolutions, military developments lagged behind changes in technology and society. In the post–World War II era, however, the military was the impetus for significant technological innovations. Where the military was not driving technological change, it was certainly moving in lockstep with it. The strategic competition between the Soviet Union and the United States provided important context for a technological competition. To that end, I propose that five technologies that emerged during this period were especially important for the profession of arms: nuclear weapons, missile technologies, precision munitions, computing and the internet, and stealth technologies.
Dawn of the Nuclear Era. The dawn of the nuclear age straddled the end of the first twentieth-century pulse of professionalism and the start of the second. The development of the atomic bomb was perhaps the most important change in weapons technology, with the most far-reaching consequences, of any examined in this series. In his book on the Manhattan Project, Now It Can Be Told, US Army Lieutenant General Leslie Groves describes the driving factors for this new type of bomb: “to provide our armed forces with a weapon that could end the war, and to do it before our enemies could use it against us.” After the first test on July 16, 1945, and the bombs’ use against the Japanese cities of Hiroshima and Nagasaki, the potential of this new weapon became clear. Groves later wrote that “a nuclear war could never be fought on this earth without bringing disaster to all mankind. This had been immediately evident to everyone who witnessed the Trinity test.”
In the immediate postwar era, the United States possessed a monopoly on these weapons. Even so, nuclear weapons were a double-edged sword for military planners. They were, as Tom Mahnken describes, “Janus faced, offering both opportunity and challenges.” While they offered the United States a chance to offset Soviet conventional superiority, their destructive potential posed significant political and ethical challenges. The Soviet acquisition of this technology in 1949 only complicated matters for military and political leaders.
Between the 1945 Trinity test and the Soviet development of the atomic bomb, the United States undertook a buildup of its stockpile of these weapons. A year after Trinity, the United States possessed just nine atomic weapons. Three years later, as the Soviets tested their first weapon, the United States held approximately 170. Thereafter the buildup increased pace on both sides. A decade after the Soviets’ first test, the United States held 12,298 warheads and the Soviets over one thousand. By this time, the United Kingdom had joined the nuclear club and owned just over seventy atomic weapons. The high point in US holdings came in the mid-1960s, when the country held over thirty-one thousand weapons. The Soviets went even further; in 1986 they had over forty-one thousand nuclear warheads in their inventory.
From the 1980s, there was a downward trend in US and Soviet holdings of nuclear weapons. From an apogee of over sixty-four thousand warheads in 1986, the count had dropped to twenty-three thousand by 2000 and continued falling thereafter. However, over this time, the number of countries that owned these weapons grew. By the end of the twentieth century, eight different nations possessed atomic weapons.
Thomas Schelling described the challenges of atomic weapons in his 1966 classic, Arms and Influence. He describes how there “is a difference between nuclear weapons and bayonets. It is not in the number of people they can eventually kill but in the speed with which it can be done, in the centralization of decision, in the divorce of the war from political processes, and in computerized programs that threaten to take the war out of human hands once it begins.”
It is estimated that over 125,000 nuclear warheads have been constructed since 1945. They have influenced the development of the modern profession of arms not only through their destructive potential but also through the need for new ways of thinking about war and deterring conflict. As Schelling wrote, these new weapons changed how we needed to think about war, because
nuclear weapons make it possible to do monstrous violence to the enemy without first achieving victory. . . . Instead of destroying enemy forces as a prelude to imposing one’s will on the enemy nation, one would have to destroy the nation as a means or a prelude to destroying the enemy forces. . . . What is new is plain “kill”—the idea that major war might be just a contest in the killing of countries, or not even a contest but just two parallel exercises in devastation.
This was a discontinuity in theories about war and would drive change in how militaries organized for, and conducted, strategic surveillance (including new space-based systems), how they communicated and designed continental air defense, how they raised their forces to balance conventional and nuclear forces, how they developed strategy, and how military leaders interacted with civilian leaders. British General Sir John Hackett has written that it would also change how the profession of arms developed its people. As Hackett notes,
What has the introduction of nuclear weapons done to the profession of arms? I would say its most significant effect is to emphasise the importance of bringing in the best people. The greater the danger, and the more urgently it threatens, the higher the quality of person required in the profession, and the greater the need for confidence between the soldier and society he serves.
Of all the technologies examined in this series, the atomic bomb is perhaps the most novel, is certainly the most destructive, and has had the most far-reaching consequences. As Lawrence Freedman has written in The Evolution of Nuclear Strategy, “The atom bomb was not simply ‘just another weapon.’ It was still for use in war . . . but with implications and ramifications far beyond those which had ever accompanied the introduction of a new piece of military equipment.” We will further examine some of these implications, including civil-military relations and strategy development, in the next article in this series. But before that, we must examine the other technologies that shaped the modern profession of arms in the second half of the twentieth century.
The Missile Age. At the end of World War II, the Soviet Union and the United States were the beneficiaries of German rocket technology and scientists. Large numbers of captured German V-1 and V-2 rocket systems (assembled and in pieces) were transported to both countries for testing and technological exploitation. From the 1950s, the Soviets and the Americans explored various types of rocket-launch programs. Both realized after the war that long-range rocket systems would need to be part of their offensive arsenals. The Soviets were the first to launch a long-range missile that could be described as an intercontinental ballistic missile (ICBM). With no long-range bomber force to counter American B-36, B-47, and B-52 bombers, the Soviets launched their first ICBM—given the NATO reporting name SS-6 Sapwood—in August 1957.
Throughout the 1960s and ‘70s, the Soviets and the Americans made steady progress improving the range, accuracy, launching time, warhead load, and survivability of their missile systems. During this period, both nations introduced missiles with multiple reentry vehicles. Successive generations of ICBMs employed guidance systems that exploited the developments in integrated circuits and computing systems that, as we have previously seen, continued to improve.
But land-based systems were not the only use of missile technology. The Soviet Union and the United States both commenced exploration of submarine-launched missile systems in the 1940s. The United States deployed its first submarine-launched missile—the Regulus—in 1954. The Soviets tested their first system the following year and deployed their first effective submarine-based nuclear missile (NATO reporting name SS-N-1 Scrubber) in 1959. Subsequent generations of submarine-launched ballistic missiles, or SLBMs, steadily improved in range and accuracy. The attraction of a more survivable nuclear deterrent capability, which these SLBMs provided, has underpinned investment by the US Navy and its Soviet (and now Russian) counterpart, as well as by the British, French, Indian, and Chinese navies, in the decades since.
Finally, air-launched missiles were developed for antiaircraft missions as well as for attacks on ground and naval targets. Beginning with ground-attack rockets in the later stages of World War II, air- and ground-launched missiles eventually became the predominant form of antiaircraft weapon in military inventories. Examples include the dense air defense environment American aviators experienced over North Vietnam and the integrated air defense system the Soviet Union developed during the Cold War. The emergence of such environments changed the tactics air forces used to attack land targets and to combat enemy aircraft. It also drove military institutions to rethink their approaches to a range of missions conducted by air forces.
Missile technology shaped the profession in several ways. The first, and most obvious, is that new missiles added new long-range offensive capabilities that military institutions needed to absorb into their doctrine and build new occupational specialties around. While there was a range of different conventional explosive warheads on these air-, sea-, and land-launched weapons, the combination of nuclear warheads with missiles represents the most significant manifestation of this technology. Missile technology also changed tactics and the training of military personnel. Aerial combat changed with the introduction of air-to-air missiles and changed again with ground-based air defenses employing missile technology. It had similar impacts on the ground and naval services, resulting in new military occupational specialties and new functions that required new types of training and education for military personnel.
Finally, missile technology would underpin humanity’s leap beyond earth’s atmosphere into space, and the developing space race of the 1950s, ’60s, and ’70s between the Soviets and the Americans. This was not only a race for prestige and to be first on the moon. The dawn of the space age, founded on the missiles developed in the 1940s and ’50s, also presaged space-based surveillance systems, communications, and precision navigation—and the birth of precision warfare—which would shape the profession of arms in the coming decades.
Precision Warfare. After World War II, military organizations sought to improve the accuracy of their weapon systems. The Germans, with their wartime development of early antiship missiles, had led the way. The US Air Force also learned radar-based techniques for night interdiction, which it then applied in Korea. But it was the Vietnam War that would supercharge the development of what we now recognize as precision weapons.
A significant catalyst was the effort by the US Air Force and US Navy to destroy the Thanh Hoa Bridge, an important strategic target. The first attacks on the bridge were launched in April 1965 and failed. Against the background of these failed missions, the US Air Force began experimenting with laser seekers bolted onto conventional bombs to improve their accuracy at low cost. Finally, in 1972, US Air Force F-4 Phantom aircraft attacked the bridge with laser-guided bombs. A subsequent mission by US Navy aircraft destroyed the central pier with two improved two-thousand-pound Mk5 Walleye II television-guided bombs.
Precision weapons began to be deployed more widely in the 1970s and ’80s. The combination of more accurate weapons with more integrated sensor and intelligence networks continued throughout this period. Some at this time even foresaw new precision weapons making nuclear weapons obsolescent. In 1988, a commission cochaired by Fred Iklé and Albert Wohlstetter wrote in its final report, “Extended range, accurate smart conventional weapons can make a major contribution to halting Soviet attacks anywhere on the perimeter of the USSR. . . . The precision associated with the new technologies will enable us to use conventional weapons for many of the missions assigned to nuclear weapons.”
The 1991 Gulf War showcased the effectiveness of precision weapons. During the air campaign in the lead-up to the ground war, and during the ground war itself, over seventeen thousand precision weapons were dropped. They were used to destroy radars and installations, as well as to directly attack armored vehicles protected by sand berms (a practice known as “tank plinking”). However, sand and smoke often obscured the laser guidance mechanisms; not all precision strikes were successful. And despite the large numbers employed, precision munitions still comprised only 8 percent of the air-to-ground weapons used during the war.
In the wake of the Gulf War, military institutions as well as strategic thinkers began to appreciate how precision might transform warfare. As Eliot Cohen noted in a 1996 Foreign Affairs article: “A military cliché has it that what can be seen on the modern battlefield can be hit, and what can be hit will be destroyed. . . . The introduction of long-range precision weapons, delivered by plane or missile . . . means that sophisticated armies can inflict unprecedented levels of destruction.”
Precision weapons, though their development spanned several decades, have had an increasing impact on the military profession. The precision of bombs and other munitions has reduced the number of weapons required to destroy targets. While this is a positive development for military commanders and logisticians, it has required new approaches to targeting (and to the training of targeteers), as well as better assessment of effects after a strike.
The availability of precision munitions from the 1980s onwards also drove debates about the application of precision and the balance between firepower and maneuver. As precision increased, it was expected that lethality on the battlefield would also increase. But as lethality increased, technological progress in killing was countered by intellectual responses—greater dispersion of forces, greater emphasis on camouflage, and increased emphasis on electronic warfare to degrade communications and command and control. As Trevor Dupuy explores in The Evolution of Weapons and Warfare, increases in precision and lethality drive organizations to change their methods of fighting to exploit the new weapons. These increases also motivate military organizations to develop active and passive means of reducing the impact of such weapons when used by an adversary.
The proliferation of precision weapons and their associated technologies has also reduced their unit price, making them more affordable. Precision guidance is now available in munitions such as mortar rounds, for which it would previously have been either unaffordable or technologically impossible. This has benefitted mid-size and even small military organizations. Unfortunately, it has also benefitted nonstate actors, requiring military institutions to develop tactics and techniques to protect themselves against precision weapons in a range of circumstances beyond conflict with other military organizations.
Well-known strategist and retired US Army Major General Bob Scales proposed in the 1990s that we were witnessing the birth of “precision age warfare.” Writing in the Future Warfare Anthology, he described how precision weapon systems would limit the capacity for enemy forces to mass effectively and reduce the burden on logisticians. He also noted that “inevitably a creative opponent will develop a method of war that will attempt to defeat our preoccupation with precision firepower.” This has now come to pass. Scales reexamined this recently, writing that “we are witnessing a figurative ‘return to Gettysburg.’ This fifth cycle now underway will likely make the offensive costlier and more difficult.”
Computing and the Internet. Building on the nineteenth-century work of Charles Babbage and Ada Lovelace, a series of twentieth-century inventions led to what we now understand as computers. In 1936, Alan Turing presented the idea of a universal machine (later called a Turing machine) that could compute anything that was computable. The construction of the Electronic Numerical Integrator and Computer (ENIAC) at the University of Pennsylvania, completed in 1945, is considered the birth of the digital computer.
From the late 1940s through the ’60s, further breakthroughs—the transistor in 1947, the first compilers and high-level programming languages in the 1950s (work that culminated in COBOL in 1959), and the integrated circuit in 1958—led to the development of more capable computers. In 1964, Douglas Engelbart demonstrated a prototype computer system featuring both a mouse and a graphical user interface. This marked the beginning of computers’ transition from purely specialist tools to machines accessible to the general public.
By the 1970s, research and development in many aspects of computer science had expanded significantly. In 1970, Intel released its first dynamic random-access memory chip, the Intel 1103. Installed in a Hewlett-Packard computer, this was Intel’s breakout product and established it as a leading innovator in semiconductor technology. In 1971, floppy disks were invented by a team at IBM. Between 1974 and 1977, multiple personal computers hit the consumer market, including the IBM 5100 and the Commodore PET. These established a new market in consumer goods and underpinned a massive expansion in software development by amateur computer users—including two self-described computer geeks named Paul Allen and Bill Gates, who in April 1975 formed Microsoft.
Computing power increased at a steady rate from the 1970s onward. This was driven by continuous growth in the number of transistors that could be designed to fit on a single microchip; from the 1970s, the number of transistors on integrated-circuit computer chips doubled roughly every two years. In 1972, a single microchip could hold between one thousand and five thousand transistors. By the year 2000, this had increased to almost fifty million transistors. This continuous improvement in computing power increased memory capacity in computers, improved the performance of sensors, increased computing speed, and reduced the cost of components.
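For readers who want to check that arithmetic, a minimal sketch follows—in Python, assuming an illustrative starting count of roughly three thousand transistors in 1972 and a two-year doubling period (both assumptions, chosen to match the figures cited above):

```python
# A back-of-the-envelope check of the transistor-count arithmetic above.
# The starting count and doubling period are illustrative assumptions.
start_year, end_year = 1972, 2000
start_transistors = 3_000        # midpoint of the 1,000-5,000 range cited above
doubling_period_years = 2        # doubling roughly every two years

doublings = (end_year - start_year) // doubling_period_years  # 14 doublings
projected = start_transistors * 2 ** doublings

print(f"{doublings} doublings -> ~{projected:,} transistors")  # ~49,152,000
```

Fourteen doublings over twenty-eight years yields roughly forty-nine million transistors—consistent with the “almost fifty million” figure cited above.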
Mirroring the development of computer technology from the late 1960s was the development of communications technology. This expansion in connectivity is most often associated with the birth and spread of the internet. An engineering marvel of increasing breadth and complexity, the internet is now accessed by over 4.9 billion people globally. The initial steps in building this highly reliable communications web occurred when the US Department of Defense contracted BBN Technologies to build the Interface Message Processors—the forerunners of modern routers. These would be incorporated into a new project being undertaken by the department’s Advanced Research Projects Agency to construct a reliable and distributed communications network that it was calling the ARPANET.
A key enabling innovation of the ARPANET was packet switching: breaking data into small packets, labeling each packet with its final address, allowing the packets to find their own paths to their destination according to congestion, and then reassembling all the relevant packets at their end point. Another ARPANET first was email, with the first message sent in 1972. Email spawned an entirely new way of communicating and networking. But the ARPANET would not fully realize its potential until Tim Berners-Lee, a British computer scientist at CERN, proposed combining his hypertext idea with the Transmission Control Protocol and the computer domain name system. He then built the first elements of the World Wide Web, as well as the first web browser, in 1991. This laid the groundwork for an explosion in internet use in the following decades. Within six years, over one million websites were live. By 2020, it was estimated that there were over 1.7 billion websites (and counting).
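The packet-switching idea itself can be sketched in a few lines of code. The toy example below—in Python, with invented field names, and making no pretense of resembling the actual ARPANET protocols—shows how numbered, addressed packets can arrive in any order and still be reassembled into the original message:

```python
import random

def packetize(message: str, dest: str, size: int = 8) -> list[dict]:
    """Break a message into fixed-size packets, each labeled with its
    destination address and a sequence number for reassembly."""
    return [
        {"dest": dest, "seq": i, "payload": message[i * size:(i + 1) * size]}
        for i in range((len(message) + size - 1) // size)
    ]

def reassemble(packets: list[dict]) -> str:
    """Sort packets by sequence number and rebuild the original message."""
    return "".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

# Packets may travel different routes and arrive in any order;
# shuffling simulates that. Reassembly still recovers the message.
packets = packetize("Packets can arrive in any order and still be read.", dest="UCLA")
random.shuffle(packets)
print(reassemble(packets))
```

The design choice that matters is in the packet header: because each packet carries its own destination and sequence number, no single fixed circuit is required between sender and receiver, which is what made the network resilient.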
The internet provided a variety of tools that enhanced the capacity of humans to share information, pictures, commentary, and, indeed, their very lives. These developments in computing and the internet also produced advances in military technologies, such as vastly improved battlefield connectivity and awareness; precision weapons; dense intelligence, surveillance, and reconnaissance networks; and more streamlined and connected personnel management and military logistics. They also resulted in the development of, experimentation with, and implementation of new warfighting concepts based on joint operations, knowledge warfare, and information sharing, such as AirLand Battle, shock and awe, effects-based operations, and network-centric warfare.
Stealth. The desire to minimize the detectability of military forces is an ancient one. After all, military commanders have used the night (and terrain) to hide their movements and achieve surprise since the dawn of human warfare. However, the story of stealth as a technology has more recent origins. In the 1950s, British researchers at the Royal Aircraft Establishment explored what they called “radar echoing area” in order to reduce the radar profile of planned new bomber aircraft. While the bomber project was eventually canceled, research continued to develop a stealthy nuclear warhead for new ballistic missiles that could penetrate Soviet anti–ballistic missile defenses. At some point (the exact date remains shrouded in mystery), the results were shared with US researchers. This resulted in the Mk12 conical reentry vehicle for Minuteman missiles in 1970.
The 1950s through the ’80s saw the emergence of better sensors and improved integration of radars, air control systems, interceptor aircraft, electronic warfare, and missiles in the air defense environment. The Soviet Union in particular invested in widespread air defense systems, featuring sophisticated networks that were difficult for Western aircraft to penetrate. A 1965 CIA assessment of Soviet air defenses listed five thousand radars deployed at over one thousand sites in the Soviet Union, plus three thousand interceptor aircraft. The Soviets continued their investment in this air defense regime into the 1970s, and it became increasingly clear to the United States and its allies just how difficult any penetration of Soviet airspace during a conflict would be.
The solution was a combination of advances in computing and materials technology. First, new mathematical models run on state-of-the-art computers made it possible to predict the radar signature of an aircraft design. Second, the Lockheed Corporation in the United States discovered that aircraft built with faceted surfaces could reflect radar energy away from receivers. A highly classified program was established and Lockheed was commissioned to build a prototype. The first prototype aircraft—which would lead to the F-117A stealth fighter—flew in 1977. Production of the F-117A commenced in 1981. Shortly afterwards, Northrop, another American defense contractor, was issued a contract by the US Air Force to build a stealthy, penetrating bomber that could carry more weapons than the F-117A. Given the designation B-2, the first of these new stealth bombers flew in 1989.
While stealth aircraft are not entirely invisible from all aspects and require complex maintenance and mission planning to successfully evade enemy air defenses, they have proved to be outstanding for narrow mission sets such as breaking open an enemy air defense system in the opening hours of conflict. They were used in this fashion in the 1991 Gulf War, the 1999 NATO bombing campaign in the former Yugoslavia, and again at the beginning of the 2003 invasion of Iraq and in 2011 missions over Libya.
The development of stealth technologies was not only restricted to the air battle. Over the course of the Cold War, the United States and the Soviet Union sought to perfect quiet submarines and reduce the radar signatures of their surface ships. For submarines, the arrival of nuclear propulsion reduced the time they needed to spend on the surface, making them more difficult to find. Designing “quietness” into submarines also enhanced their stealth capability, and the United States held a decided advantage in this field for several decades. As John Benedict notes, however, this advantage was reduced with the introduction of the Soviet Akula and Sierra classes, which had similar sound levels to their US equivalents.
Stealth technology in the air and under the water has changed the tactics of operating in those environments. This has impacted warfighting concepts in our profession (particularly “first day of the war” approaches) and has flowed into the training and education of the military personnel who operate these platforms—and those who must counter them.
Another impact has been to drive potential adversaries to develop asymmetric counters to stealth technologies. Stealth is an extraordinarily expensive capability. Few nations can afford to invest in stealth capabilities such as F-117A or F-22 aircraft. For state actors, different technologies such as bistatic radars, passive detection systems, sophisticated infrared detectors, as well as better camouflage and deception systems are lower-cost ways to counter stealthy systems. For nonstate actors, the concept of hiding among the people is another tactic. Each challenges our profession to produce new or different ideas to sustain the effectiveness of stealthy platforms.
Another challenge for our profession is that the employment of stealth aircraft has often been segregated from that of more conventional fighter and bomber aircraft. This has had the effect of separating doctrinal development for “black” capabilities from that for more openly acknowledged programs. Writing in 2016, Dave Deptula and Mike Benitez note that this is a challenge that must be overcome to optimize a more balanced use of stealth and nonstealth platforms. In particular, they write that “stealth, like most capabilities, is most effective when integrated with a robust, diverse, and complementing set of capabilities.”
The Impact of New Technologies on the Profession
New technologies in the second half of the twentieth century had a deep impact on the profession of arms—how it considered strategy, designed its organizations, and trained or educated its people. In combination, these technologies led to profound changes for the modern profession.
Nuclear weapons were the most significant discontinuity. In his 1948 book, Crusade in Europe, General Dwight Eisenhower wrote that “in an instant many of the old concepts of war were swept away.” And so they were. Nuclear weapons significantly increased the destructive power of military institutions. New ideas and institutions were required for the use and deterrence of these new weapons; this will be explored in the next part of this series. Another outcome of this technology was the development of nuclear power, which had both civil and military applications. Not only did nuclear reactors find their way onto ships and submarines, but they also necessitated entirely new training and education programs for those who would use and maintain them.
Missile technology, when combined with nuclear weapons, revolutionized the reach and destructive potential of military forces. As Schelling describes, “Nuclear weapons can change the speed of events, the control of events, the sequence of events, the relation of victor to vanquished, and the relation of homeland to fighting front.” But missiles had other, more conventional and relatively less destructive uses. They changed naval and aerial warfare with different ways of attacking targets—and defending them. And they placed into orbit around the earth the foundations for another significant technology—space-based precision navigation and timing.
Finally, the birth of the digital age heralded a more connected and aware profession. Computers, the internet, and the ability to network a multitude of military platforms, organizations, and headquarters reshaped military strategy, the operational art, and tactics in every domain. Indeed, this technological development resulted in what military institutions now see as a separate warfighting domain, with its attendant doctrinal, training, education, and operational opportunities and challenges for the profession of arms.
Technology has always fascinated humans. From the first stone tools that were repurposed as weapons by our distant ancestors, human beings have sought to apply existing technologies and develop new ones to provide advantage in war. And so it was with the period between 1945 and 2000. Driven in large part by the strategic competition between the United States and the Soviet Union, new and exotic technologies came into existence that changed war and strategy. But this impact was compounded by other factors. Andrew Marshall has written that “the most important competition is not the technological competition. . . . The most important goal is to be . . . the best in the intellectual task of finding the most appropriate innovations in concepts of operation and making organizational changes to fully exploit the technologies.” New technologies had to be accompanied by new ideas and new institutions to maximize their impact and to maximize the advantage they would generate for a military organization. It is this notion of new ideas and new institutions that we will explore in the next part of this series.
Maj. Gen. Mick Ryan is an Australian Army officer. A graduate of Johns Hopkins University School of Advanced International Studies and the USMC Command and Staff College and School of Advanced Warfare, he is a passionate advocate of professional education and lifelong learning. He has commanded at platoon, squadron, regiment, task force, and brigade level, and is a science fiction fan, a cricket tragic, a terrible gardener, and an aspiring writer. In January 2018, he assumed command of the Australian Defence College in Canberra, Australia. He is an adjunct scholar at the Modern War Institute, and tweets under the handle @WarInTheFuture.
The views expressed are those of the author and do not reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.
Image credit: Don DeBold (adapted by MWI)