Author’s note: The following is based on remarks I delivered on August 2, 2017, at the Royal United Services Institute for Defence and Security Studies (RUSI) in London, for their Technology and Future Combat Roundtable, along with a previous essay published in the Small Wars Journal. I wish to thank Dave Dilegge at Small Wars Journal for permission to republish this version, as well as Dr. Peter Roberts, Director of RUSI Whitehall’s Military Science group, for his welcoming invitation and for generously allowing me to share these remarks with a larger audience.
As a lawyer, I often deal with a law that is not in statute books or argued in court—“Murphy’s Law”—one version of which states, “That which can go wrong, will.” This seems to be the governing theme for skeptics and pundits discussing the caveat-emptor future of autonomous weapon systems, drones, and cyber war. One such unintended consequence of human-machine teaming and combat AI is particularly worth exploring. While we speculate about the extent to which we will ethically use such devices, or whether such devices should be uniformly banned, we should also spend some energy examining the extent to which we will apply moral judgment toward these machine counterparts.
This thought experiment helps us frame two larger questions that ought to worry us as much now as they will a century from now: First, do we owe our moral duties—in combat—toward one another because of who that other is, or do we owe these duties because of what he, she, or it does? Second: will this future operating environment influence an evolution in our ethics?
To some extent, this is an area that is already deeply plowed in the literature of philosophy, engineering, cyber-psychology, and artificial intelligence. Thousands of researchers around the world are spending billions of government and private dollars teaching AI systems to learn like children and building artificial neural networks that are engineered to function and adapt like the human brain. My only qualifications for this shallow excavation include (1) that I have personally engaged in the applied ethics of judgment under fire in combat, and (2) that now, as a practicing lawyer and former prosecutor, I have some daily descent into the morass of moral consequences. I’m certainly no futurist or even technological “native”—I’m still using an iPhone 4!
Technology and combat represent a strange but not alien mishmash. The act of combat has never been without its enabling technologies: from the bone and branch, to the axe, pike, trebuchet, Trojan Horse, gunpowder, cannon, railroad, minefields, Trident missile, the Abrams Main Battle Tank, IEDs, the combat drone, and the Stuxnet computer virus. Some technologies of everyday convenience, like the Internet, epi-pens, GPS, and the microwave oven, began as battlefield or military-aimed innovations of wartime, but grew and dispersed, becoming essential components of the modern day-to-day. Some technologies, like Kevlar fiber, first found commercial use, but now have much more militaristic sources of fame. Others, like bionic limbs and exoskeletons, may soon find their way onto future soldiers too.
Debates about technology in combat evolve as the technology advances, but they also stay largely the same. Many in the AI community, for instance, believe that mimicking how the human brain works with an advanced computer will help us understand ourselves better. Let me briefly suggest three of the more popular questions that alarm us out of our wits:
- First, should we use advanced technologies to augment or entirely replace the human element in the field?
- Second, to what extent should we remain “in the loop” of autonomous or semi-autonomous lethal decision-making?
- Third, will the availability of push-button and real-time global strike dull our moral sensibilities and collapse the window of time in which we would normally benefit from cool reflection?
Three related forces are combining to shape the answers to these human-machine “teaming” problems on the battlefield.
- The growing numbers and diversity of actors competing in that space;
- Advances in computing technologies, including the drop in the cost of their development; and
- The public’s expectation of significantly reduced human loss.
In light of these changes, we are now confronted with another profound but relatively new question: just how close do we—those who will engage in combat—want to get to the technology we employ there? As J.F.C. Fuller warned last century, “The more mechanical become the weapons with which we fight, the less mechanical must be the spirit which controls them.”
This question spawns several others. First, how close—by which I mean how familiar, devoted, and faithful—will we make that technology to us, or require it to be? Will we have a timid C-3PO protocol droid on every mission to translate for us? Will we need a witty, sarcastic K-2SO to help us with “strategic analysis”? And what will that proximity or likeness, in turn, do to us? Philosopher Dan Dennett cautions that, as our computers become ever more powerful cognitive tools, we must not let our own cognitive abilities atrophy in return. In other words, being smart might make us dumb. What about our moral abilities? How we humans care for one another in the context of our missions on the battlefield is our warrior ethos. Will they atrophy too?
An evolution toward warfighting AI could be a good thing: it could mute or dampen our violent, gut-reactive, anger-driven, and sometimes counter-intuitive responses under stress. Here is a quote from Professor Yuval Noah Harari’s recent book, Homo Deus: A Brief History of Tomorrow:
If you care more about justice than victory, you should probably opt to replace your soldiers and pilots with autonomous robots and drones. Human soldiers rape and pillage, and even when they behave themselves, they all too often kill civilians by mistake. Computers programmed with ethical algorithms could far more easily conform to the latest rulings of the International Criminal Court.
But, to the extent that reliance on AI dims our human sympathy for our wounded enemies, or care for civilians, or our humor, compassion, and mercy, is this a potentially damning course we take?
Let’s take a moment to think about these questions in the context of an infantry squad in a firefight, for your answers may signal how you would choose to design GI Joe-the-Droid and “his” manufactured features. Would you program it to be capable of “applied ethics?” Do you program initial prima facie moral stances, like “do no harm to others; treat others equally; communicate truthfully?” How do you program it to make a sensible decision, of which we would approve, when those rules or values conflict? These design choices, in the end, tell us something about whether and how you would choose to behave ethically toward that collection of wires, plastic, silicon, and glass.
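To make the design dilemma concrete, here is a minimal, purely illustrative sketch (every name in it is hypothetical) of one way an engineer might encode prima facie duties and a crude priority rule for resolving conflicts between them. It is not a proposal for a real ethics engine; it simply shows that the conflict-resolution rule is itself a moral choice someone has to make.

```python
from dataclasses import dataclass

# Hypothetical, illustrative only: a toy set of prima facie duties and a
# naive rule for choosing an action when those duties conflict.

@dataclass
class Duty:
    name: str
    priority: int  # lower number = more binding when duties conflict

    def violated_by(self, action: dict) -> bool:
        # A real system would need a rich world model; this toy check just
        # reads a set of flags attached to each candidate action.
        return self.name in action.get("violates", set())

DUTIES = [
    Duty("do no harm to others", priority=0),
    Duty("treat others equally", priority=1),
    Duty("communicate truthfully", priority=2),
]

def choose(actions: list[dict]) -> dict:
    """Pick the action whose most serious violation is the least binding duty.

    This encodes one contestable design choice: a strict lexical ordering of
    duties. Weighted trade-offs, case-based reasoning, or a human veto would
    resolve the same conflict differently.
    """
    def severity(action: dict) -> int:
        violated = [d.priority for d in DUTIES if d.violated_by(action)]
        # Actions that violate nothing score better than any violated duty.
        return min(violated) if violated else len(DUTIES)

    return max(actions, key=severity)

if __name__ == "__main__":
    candidates = [
        {"name": "return fire", "violates": {"do no harm to others"}},
        {"name": "issue a deceptive warning", "violates": {"communicate truthfully"}},
        {"name": "hold position and report", "violates": set()},
    ]
    print(choose(candidates)["name"])  # -> "hold position and report"
```

Even this toy example forces a contestable choice: a strict lexical ordering of duties rather than, say, weighted trade-offs or a human veto. That choice is precisely the kind of ethical design decision the questions above are really about.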
Thinking about this machine, GI Joe-the-Droid, leads to yet more conundrums perhaps best considered in another strange mishmash: Kantian ethics and science fiction.
Imagine a droid that, while in no way looking human, is assigned to an infantry squad. Let’s say this one is named “J.O.E.” for Joint Omni-oriented e-Soldier. Joe has two arms and two legs, can keep up with Spec. Torres in a foot race, and can—if need be—qualify on a rifle as a marksman. Joe has no need for sleep or rest. Though he cannot “eat,” Joe stays with his squad during chow time. Though he cannot sleep, he stands guard on the perimeter during long overnight patrols. Joe is able to haul extremely bulky loads, is an ambulatory aid station, and is an arms room of weapons, ammunition, and medical and survival gear. Thanks to a generation of “Google.ai” improvements, Joe can translate languages, is well versed in local history and cultural studies of any area to which he is deployed, and is able to connect to this far future’s version of the Internet. Joe is a walking, talking library of military tactics and doctrine. Joe is programmed to engage with each soldier on a personal level, able to easily reference the pop culture of the time, and engage in casual conversation. Joe is connected to various biometric sensors embedded in the infantrymen’s uniforms, able to detect and diagnose deviations from acceptable norms—body temperature, immune system, stress level, anger, rage. Joe records and transmits everything he sees and hears, as well as every action the squad takes. Joe can interpret his own operating code and knows his own design history—so, at least in one sense, Joe understands his purpose. Joe, in other words, perceives, has memory, and is self-aware. Aside from being made of ink-black metallic and polymer materials and standing a foot taller than everyone else, Joe is a trusted member of the squad. And Joe can tell a dirty joke.
What Joe cannot do, of course, is disobey his programming (which presumably means following the commands of certain humans who possess certain rank and responsibilities). He cannot (not just “will not”) wander off from his mission or camp, desert, and turn himself over to the enemy. Joe cannot malinger, feigning illness to avoid work. Joe cannot quit. In other words, Joe lacks what we laymen would interpret as free will. “He” cannot feel pain, pride, greed, anxiety, stress, fear, exhaustion, jealousy, or peer pressure—all traits burdening his squad-mates. Joe, ultimately, can be turned off. What Joe cannot do, in short, is suffer (psychically or physically). Jeremy Bentham, the great eighteenth-century moralist, would tell us that the capacity to suffer is the critical, dispositive feature we must weigh whenever we consider how to treat a thing. To suffer, in other words, is to be worthy of moral consideration. But I don’t believe that Bentham would endorse the inverse: that an inability to suffer makes a thing incapable of earning respect, fair treatment, and decency.
Even if we cannot personally relate to Joe, in part because Joe is incapable of sharing in mutual suffering the way humans do, we have a knack for humanizing non-humans. It is entirely plausible that human-shaped droids with human-like personalities and voices will engender attitudes and responses from us that may look and feel a lot like those we have with each other. At some level of our consciousness, we feel a compulsion to humanize our tools and technologies when we are so unalterably dependent upon them. Now imagine if your literal survival was also dependent on their “choices.”
If a robot like Joe “covered your six” in a firefight, would you show it gratitude? If the robot was the medic that patched you up after you took shrapnel to your leg, or called in an airstrike that saved your platoon, would you honor its performance? If Joe took shrapnel and went down, would you expose yourself, or your soldiers, to greater or prolonged danger in order to render aid? If you did so, was it because Joe was essential to your own survival, because Joe was mission-critical, or was it because Joe was a part of your team, and therefore your duty to protect? Or would this be simply answered by the economics of supply? No need to worry about social bonding and attachments, for if Joe goes down, another Joe will be built and shipped to replace your own Joe in no time.
How humanely—how soldierly—would we treat these combat colleagues? What scruples would still divide us? What social or cultural norms would bind us? If Joe and his fellows cannot shed their own blood and feel pain from the sting of combat, will they still be inside our band of brothers? Asking ourselves whether we would honor a droid’s performance in battle, or whether we would mourn a droid’s loss, or whether we would sacrifice ourselves (or at least risk our lives) for a droid to “survive” also demands that we consider what each of these ancient martial virtues and ethics really mean to us here and now.
Where do these ethics, these moral obligations, come from? Are they encoded into our genes? Are they taught by institutions? Do they arise because one is a consenting member of an organization—a stamp of what Michael Sandel calls a “collective identity”? Are they modeled by parents and friends? Are they prescribed by religions or proscribed by laws and regulations? In many cases, such as “do not kill another person out of anger or revenge,” the obligation shares parentage among many or all of these sources. And we would do well to remember this as we face one of two possible futures: either we humanize our computerized comrades, or we harden ourselves, mechanizing our morals and digitizing our duties in sync with our silicon sidekicks.
Modern military-applied ethics seem to track with Immanuel Kant’s categorical imperative. Kant, contra Bentham and Aristotle, looked not at the effect or the end-state, but only at one’s motive: whether the action was done for the sake of duty alone, without the inclination to seek out whatever psychological fulfillment, financial incentive, or public esteem one might earn. Kant’s “categorical imperative” can be summarized this way: your action toward another is moral if it is based on a maxim that you would, if you could, enlarge into the basis for a universal law or command, applicable to everyone everywhere, and which does not contradict itself. In other words, I should sacrifice my immediate safety to secure the safety of others if I would want that same rule to apply unconditionally to everyone else (and therefore benefit me in my own time of need).
So, let’s return to our initial questions: Given all that Joe can do, and all that “he” cannot, would it deserve your moral consideration? Would you sacrifice yourself for Joe? Would you honor Joe’s sacrifice? Following Kant’s categorical imperative, it would seem there is a plausible reason to act selflessly for a fellow member of your squad, because you would expect it of them too. Consider how Sebastian Junger described the tight bonds of loyalty and affection among members of an infantry platoon fighting in Afghanistan. He wrote:
As a soldier, the thing you were most scared of was failing your brothers when they needed you, and compared to that, dying was easy. Dying was over with. Cowardice lingered forever. . . . Heroism [on the other hand] is a negation of the self—you’re prepared to lose your own life for the sake of others.
But this raises a series of other sticky and convoluted considerations, like shining a light into a dark cave only to find a series of impossible Penrose stairs from an Escher drawing. First, does it matter that the droid cannot, at anything deeper than a superficial level, relate to the soldier? It cannot fear solitude or pain like you do. It cannot feel pride or comradeship like you do. It has no memory of home, or emotional appreciation of music, like you do. It has no personal relationships with family or friends, like you do. Is our soldierly duty toward one another deontological—in other words, does the duty, per some canon or rule, exist unconditionally, so that, as Kant held, we perform it regardless of the benefit, the cost, or who the beneficiary is? Or is it consequentialist (we act in a certain way because the effect of acting that way is desirable)? Or does the nature of our moral obligation change depending on context? If so, what contexts actually matter, and why? This brings us to the ultimate question that I believe this thought experiment raises: whether we owe our moral choices or duties toward one another because of who that other is, or because of what he/she/it does.
Under Kantian ethics of the “categorical imperative,” our modern military ethos treats our teammates and comrades and those under our command as ends in themselves, not as means to an end. That is to say, it treats them deontologically—or based on rules of the road, our customs, courtesies, and traditional bonds of brotherhood encoded as our “warrior ethos.”
So would the same care apply to a machine? Under what conditions would or should we act with concern, compassion, and respect toward something whose moral compass has a serial number on it? If we chose not to act with such moral consideration toward Joe, would it be because the droid is incapable of provoking our empathy? Or because it cannot suffer like we do? Or because it could not choose—in a biological sense of individual agency—to be there with you? All of these reasons seem sensible. But what does this say about the current state of our soldiers’ ethical indoctrination, which focuses not on the human qualities and capabilities of the other soldier in the foxhole next to you, but on loyalty and selfless effort for the group’s welfare (“I will never leave a fallen comrade”; “I will always place the mission first”) and on a Kantian sense of doing one’s duty for the sake of doing one’s duty?
Ultimately, I have to say there are no easy—no “binary”—answers to these problems of applied ethics. Forgiving each other’s mistakes, honoring others’ successes, and risking ourselves for another’s safety are manifestations of our moral code. They are embedded in our warrior ethos. As C-3PO once warned R2-D2: “Now don’t you forget this! Why I should stick my neck out for you is quite beyond my capacity!” If such obligations are also made part of a droid’s code, will we find ourselves, one day, bleeding for them and crying over them too? Will we seek out their opinion as much as we would seek out their translation or calculation of the odds of survival?
Personally, I believe such irrational attachments may be a good thing. For if we, instead, become ever more robotic and mechanical, mirroring the least attractive, most anxiety-producing characteristics of AI, we will have fallen into the very trap Fuller identified a century ago. I’d rather become emotionally attached and mutually dependent on a droid than discard what makes us most human and become morally sterilized in the process.
What I hope to have introduced here is a sense that posing an imaginative, speculative future, albeit one that we can relate to through our current pop culture icons, allows us to confront not just the future of our moral considerations, but the judgments we make today about our relationships with one another, and with the inevitable bright, shiny things we take with us.
At the conclusion of these remarks, the Roundtable participants engaged in a thoughtful and wide-ranging dialogue, on everything from NATO’s forecast of future opportunities and constraints, to what ethics would look like if robots (and only robots) fought our wars for us, to Steven Pinker’s argument that both the frequency and human cost of war (and violence more generally) are declining over the long arc of history, to speculation about what factors might “slow down” the introduction of machine learning onto the battlefield. In addition to my hosts, I wish to thank the other participants and scholars who added depth, breadth, and a fair amount of cheeky British humor to the event.
Image credit: thoroughlyreviewed.com
The answer, ultimately, turns on whether an AI has a unique sense of self, and whether it values that individuality. We honor an individual's sacrifice when they place a greater value on an obligation or duty to another than on their value of "self". Likewise, we withdraw that honor when someone is perceived to have deliberately "thrown themselves away" — to sacrifice that "self" for something of minimal or no value. When an AI is capable of asking, "Why should I…?", it has reached that point.
Warlock, I may have missed your point, but the British suicide prevention organisation, The Samaritans, holds that suicide is a respected choice. Thus those who prefer to die may be spoken to for hours, but ultimately, if they still wish to end their life, no guilt is implied; it is their choice, and no honour is withdrawn. By contrast, a suicide bomber's desire to perform an act of sacrifice for the sake of what is believed to be a higher moral purpose of spreading a religion is not respected, no matter how much they believed it to be the only true path. In war the honour goes to the one who performed on one's behalf without a moral question, hence the recent Confederate riots in the USA over the pulling down of statues of their heroes despite the implication of support for immoral slavery. If the Nazi philosophy had succeeded, you would have to be pretty secretive about any support for resistance fighters or the UK and US armed forces, and the general populace would honour the sacrifice of the German troops. Would an AI machine answer your question with, "Oh yes I should, because I would be supporting slavery"? One would hope not, since then it is rather close to becoming a "Terminator", stopping at nothing until all humans are dead or enslaved.