In a recurring series of sketches on Little Britain, monotone employee Carol Beer types away on her computer keyboard in response to a customer’s request before delivering her catchphrase: “Computer says no.” The joke works because we have all encountered people like Carol, whose deference to existing processes, rules, and computer programs allows them to abdicate all responsibility for customer service. It’s funny, right up until you realize that this is how the machines win.

The Singularity

In his 2005 book The Singularity Is Near, Ray Kurzweil posits a future in which we will no longer be able to distinguish between man and machine, or between reality and its virtual twin. In the years since Kurzweil touted his vision, components have continued to fall into place: rapidly improving and increasingly pervasive virtual and augmented reality, including home entertainment devices; more biomechanical implants that transform people into cyborgs; and artificial intelligence that at least supplements, if not supplants, human endeavors. For Kurzweil, the prediction is less a warning than a statement of fact: “If you wonder what will remain unequivocally human in such a world, it’s simply this quality: ours is the species that inherently seeks to extend its physical and mental reach beyond current limitations.”

While Kurzweil sees a gradual merger between man and machine, blurring the lines of what it means to be human, and the media dramatize the prospect of artificial intelligence overwhelming human capability, a more immediate danger is often ignored: that humans simply cede control to automation. This possibility has a lower threshold than a merger or a hostile takeover, and we can already see the evidence. When assistive automation or artificial intelligence is just complex enough that we give up trying to understand it, we default instead to unearned trust.

This is an issue with which we must grapple in a military context. One way of looking at the conduct of war is as an effort to shorten the time it takes to complete our own decision cycle, whether that cycle is framed as observe-orient-decide-act (known as the OODA loop) or as “the kill chain,” while disrupting or slowing the enemy’s decision cycle. The use of artificial intelligence to shorten this loop is now at the forefront of our science and technology initiatives, and some intelligent systems are already being assessed for their ability to assist soldiers in each phase of the decision cycle.
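
To make the shape of that human-machine decision cycle concrete, the sketch below models a single pass through an OODA loop in which an assistive agent offers a recommendation at each phase and a human retains the final call. It is a minimal, purely illustrative sketch: the phase names come from the OODA concept, but the agent, its canned recommendations, and the confidence threshold are invented for the example and do not describe any fielded system.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """A hypothetical assistive agent's output for one phase of the cycle."""
    phase: str          # "observe", "orient", "decide", or "act"
    summary: str        # what the agent proposes
    confidence: float   # the agent's self-reported confidence, 0.0 to 1.0

def agent_assist(phase: str) -> Recommendation:
    """Stand-in for an assistive system; a real agent would fuse sensor data,
    reports, and models rather than return canned text."""
    canned = {
        "observe": "Prioritize feeds from sectors with recent activity.",
        "orient": "Pattern resembles a delaying action, not a withdrawal.",
        "decide": "Hold the reserve until the flank report is confirmed.",
        "act": "Issue a warning order to the supporting element.",
    }
    return Recommendation(phase, canned[phase], confidence=0.72)

def human_review(rec: Recommendation) -> bool:
    """The human-in-the-loop gate: the commander, not the agent, decides.
    Here acceptance hinges on an arbitrary confidence threshold; in practice
    it rests on experience, context, and accountability."""
    return rec.confidence >= 0.8

def run_ooda_cycle() -> None:
    # Walk the four phases, letting the agent propose and the human dispose.
    for phase in ("observe", "orient", "decide", "act"):
        rec = agent_assist(phase)
        verdict = "accepted" if human_review(rec) else "overridden by commander"
        print(f"{phase:>7}: {rec.summary} [{verdict}]")

if __name__ == "__main__":
    run_ooda_cycle()
```

The detail that matters is not the threshold but where the gate sits: at every phase the agent proposes and the human disposes, which is precisely the relationship the rest of this article argues we are at risk of inverting.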

Automation bias is a double-edged sword. There are two ways to react to the confrontation between our own understanding and what we assume some expert has created: we can reject that creation and its output, or we can accept it. If we distrust the automation outright, the result is poor adoption and diffusion, and a waste of the taxpayer dollars spent to field a system that will not be used. If we trust it, we lose the ability or the inclination to second-guess a fallible system. In the spirit of adoption and human-machine cooperation, we could try to make AI understandable, but that approach has its limits. We see more and more examples of AI’s “black box,” inner workings that even an expert might have trouble following, and general users have no chance of keeping up, even if they had the time and the patience.

Systems of Authority

“Some system of authority is a requirement of all communal living,” wrote psychologist Stanley Milgram in 1963. The same imperative is reflected in the great chain of being, a hierarchy rooted in Greek philosophy that placed God at the top, followed in descending order by angels, mankind, animals, plants, and minerals. As humans, it is hard to argue with our place in that hierarchy, or at least with what our place was until now. A computer, as an inanimate object, might once have ranked among the minerals, but with increasing complexity it is now arguably closing in on humans, with projections that it will soon ascend beyond our grasp. In popular culture, in fact, artificial intelligence is often given a God-adjacent role.

The prevailing theory of organizational authority holds that it derives from five sources: (1) legitimate or positional power, (2) expert power, derived from possessing knowledge, (3) referent power, derived from interpersonal relationships, (4) coercive power, derived from the ability to punish or sanction, and (5) reward power, derived from the ability to influence the allocation of incentives. As yet, no one reports to an artificial-agent supervisor. The primary expression of authority in a human-agent relationship is expert power: the agent possesses some knowledge or ability that the human does not, and the human defers to the agent to perform that function. It is rare that a human with sufficient expertise to audit the agent’s output will be paired with that agent.

Some of the other power sources are also worth considering. For example, an artificial intelligence that can optimize against a measurable goal, such as the number of people persuaded to click on a link, could derive authority from its demonstrated ability to influence others. And referent power is already in play: people place more trust in artificial intelligence from a company whose motives they trust than in the same technology from a company they mistrust.

We Have Nothing to Do with the Robot

The Luddites, a nineteenth-century movement of English textile workers known for sabotaging factory machinery, objected to the role machines were given in industry. They were not anti-technology; some of them had even invented machines to improve their craft. Their objection was that, for the sake of increased production, factory owners wanted workers to aid the machines rather than the machines to aid the workers. A similar situation exists today, where creative work feeds algorithms that produce on-demand content. As writers and visual artists push back on their work being used to train AI without permission, companies have started hiring writers to produce new text to replace the purloined content, as well as to soften the large language models’ rigidly formal style.

We do not yet know how to relate to assistive automation and artificial intelligence. Take, for example, a news story about an incident with a chess-playing robot during a tournament: “A robot broke a child’s finger—this is, of course, bad,” a tournament organizer said. “The robot was rented by us, it has been exhibited in many places by specialists for a long time. Apparently, the operators overlooked some flaws. The child made a move, and after that it is necessary to give time for the robot to respond, but the boy hurried, the robot grabbed him. We have nothing to do with the robot.” Running through the statement is an evident desire to avoid taking any responsibility for what occurred. When the video was posted on Telegram, the channel described it differently: “The robot did not like such a rush—he grabbed the boy’s index finger and squeezed it hard.” That description gives the robot agency and attributes the incident to a behavioral cause. The message: responsibility, and thus authority, was ceded to the robot.

Reality Check

One noted roadblock to the acceptance of technology in the tactical setting is the knowledge of what is at stake. The Army maintains a human in the loop whenever autonomous systems are in use, which means the commander’s decision is his or her own. In an ideal situation, a commander will both understand and accept the output of an automated analysis. However, the commander could reject the output and find that his or her own judgment results in unexpected casualties, or accept the output and find the computer’s estimate errant. As the OODA loop shortens, our commanders have less time to grasp the logic of an agent and feel pressured to make decisions based on an incomplete, inaccurate, or misinformed understanding. For a human-agent team, trust may build over time and through repeated use, but other factors also affect it, such as the length and breadth of the commander’s own experience. An inexperienced leader may defer more to the agent than an old hand who damn well knows better.

My mother once took a book to the register, where the cashier informed her that the paperback she wanted to buy cost over a thousand dollars. When my mother pointed out the absurdity of that price, this real-life Carol Beer shrugged and gestured at the monitor and the figure produced by the book’s scan. A cashier’s employment is based on the ability to scan a product and report the price, not to question the computer’s output. If we accept that computers are now smarter than us, we lose our ability to differentiate logic from nonsense. As Kurzweil notes, “There will be no distinction, post-singularity, between human and machine or between physical and virtual reality.” This puts us in the same position as the protagonist of Franz Kafka’s The Trial, the story of a man at the mercy of an invisible and capricious authority. “You don’t need to accept everything as true,” the man is advised, “you only have to accept it as necessary.” If the machine is accepted as the authority and we no longer critically engage with how it functions, then when it makes an error, that error becomes our reality.

With life and limb on the line, however, the military must find other options. We must find a way to put these computers in their place, just as the Luddites sought to do, and ensure that they help us, not the other way around.

Thom Hawkins is a project officer for artificial intelligence and data strategy with US Army Project Manager Mission Command. Mr. Hawkins specializes in AI-enabling infrastructure and adoption of AI-driven decision aids.

The views expressed are those of the author and do not reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.

Image credit: Daniel Lafontaine, US Army Combat Capabilities Development Command C5ISR Center