On a battlefield increasingly shaped by technology, service members and machines are learning to fight side by side. Semiautonomous robots like “Spot,” a quadruped currently integrated into military training at the United States Military Academy, are poised to dramatically influence the changing character of warfare through the ability to prowl about rugged terrain, autonomously avoid obstacles, and conduct crucial reconnaissance tasks in combat training scenarios. Yet despite Spot’s technological prowess, one key question emerges: How do service members learn to trust and use a robot intended to be a teammate?
The rotors overhead roared louder as the helicopter touched down at the landing zone. US Military Academy cadets piled out, ready to perform exercises in the field as part of their summer Cadet Leadership Development Training. Organized in their tactical formation, the cadets gazed up one by one and locked eyes with their new teammate. “All of the sudden it’s like, hey, we have Spot,” one of them noted. “I don’t know what to do with this asset.”
Exercises like these are an integral part of cadet military training and necessitate substantial interpersonal trust and strong cohesion among the team as a whole to be successful. Overall, the 357 participating cadets (97 percent between nineteen and twenty-one years old and 73 percent male) reported extremely high levels of trust in their fellow cadet teammates and high team cohesion.
But what about trust in a robot? Cadets brought a wide range of backgrounds, interests, and opinions. Most were majors in engineering or the social sciences, which included geospatial information science and engineering psychology. Yet roughly three in four cadets had never used a robot in their military training or exercises before, with fewer than 1 percent holding what they described as “a lot” of prior experience with exercises involving robotics. When asked if they thought they could work with a robot to complete a task, most agreed. Even though there was a broad distribution in the cadets’ initial trust of robots (and technology more broadly), they tended to view robots more as a tool and less as a teammate.
The platoon huddled together at the landing zone, the cadets mentally rehearsing plans for their mission. In just a few minutes, they would begin their approach to an objective, several buildings and wooden structures, and execute an offensive strike. Meanwhile, the bright yellow robot dog stared back, awaiting its commands.
Spot was greeted with dozens of eye rolls. “What can it do?” the platoon leader questioned sarcastically, skeptical of Spot’s utility. The cadets laughed among themselves, similarly pessimistic about the robot’s abilities. To many, it was unimaginable that this “bright yellow thing,” as a cadet described it, would ever be fit for combat.
A multidisciplinary team of human factors and industrial organizational psychology researchers, composed of three senior professors with twenty to thirty-five years of experience in the field alongside doctoral candidates and undergraduate researchers, introduced the robot into the training exercise to systematically examine human-robot teaming during large-scale military training. The researchers had only a couple minutes to explain the robot’s features, doing so without giving too much direction so as not to influence the cadets’ decision on how (if at all) to utilize Spot during the exercise.
Although autonomy exists along a spectrum of capability, ranging from simple navigation routines to more complex independent decision-making, the focus of this study was on how cadets interacted with a robot perceived as an autonomous teammate. The robotic platform used in this research effort is capable of waypoint-based navigation; however, for experimental control and reliability in a graded training environment, autonomy was simulated using the Wizard of Oz approach. Spot would be remotely controlled by a researcher during the mission, and cadets could issue tactical directives to Spot directly with ordinary speech (e.g., crouch behind this tree, show us what’s on the other side of that Humvee). The researcher would then steer Spot as if it were responding to the cadets autonomously. This allowed us to examine human responses to autonomous behavior without confounding variability in user experience or system performance.
Ready for action, cadets trudged through the woodland with Spot by their side. On this lane, cadets were tasked with conducting a raid. Would they set aside their doubts and use the natural language interface to control Spot? Some platoons abandoned Spot entirely, concluding they would surely be better off without the robot. Others used Spot as a mere backup plan, hauling it along in case of a dire situation. But many platoons in the raid lane decided to give Spot a chance. Spot was ready near the objective when the cadets arrived. It was there that the cadets began to direct Spot as if it were, as they noted, “almost like another person . . . filling a spot in formation.”
“Let’s let Spot sneak up,” one cadet suggested, “get eyes on a little bit and help us out.” The cadets watched eagerly from the safety of the objective rally point as Spot trailblazed ahead through the clearing, fearless, facing the enemy head on. Everyone agreed: Spot seemed to be made for reconnaissance. Equipped with a 360-degree horizontal-view camera, the robot was an expert at scouting objectives and pinpointing potential threats. As time passed, these cadets shifted from initial skepticism to reliance, ultimately trusting Spot to complete even some of the most crucial tasks. What’s more, the transformation from tool to teammate was slow yet clear. The robot was earning its spot on the team.
Beyond reconnaissance, Spot assisted by guarding at the front lines as the cadets drew closer. Spot led the way, and the cadets followed. Its ability to seamlessly ascend stairs on the objective made Spot well suited to clear rooms, especially dangerous upper-level areas, of weapons, combatants, and other hazards. Time after time, Spot detected and diverted imminent dangers. Indeed, one cadet recounted that “Spot actually passed through an area that had a trip mine set, which averted what would have otherwise been a casualty.”
After completing the raid lane, the cadets collapsed on the ground exhausted and scarfed down their MREs. Despite their fatigue, they excitedly exchanged their visions between bites for how Spot could be deployed in real warfare: networking and mapping in underground warfare, close-quarters combat, the list went on. The cadets also discussed issues with Spot’s design, as basic as the way it looks and sounds. A noisy, bright yellow robodog isn’t exactly camouflaged on the battlefield. “How much can it carry?” was the most-asked question at the landing zone, as mounting a weapon to Spot’s back was the first thought on cadets’ minds. Boston Dynamics, Spot’s manufacturer, has a strict policy against weaponization of the company’s products, however, so this was not permitted during the study. Cadets also thought infrared capabilities would be invaluable; although infrared sensors have been paired with Spot, they were not on the quadruped used in this study. As one cadet explained, “We move at night. We fight at night. And having heat signatures at night, it makes the battlefield completely different. It gives you a complete edge over the enemy.”
Ultimately, Spot’s value extended well beyond merely providing situational awareness. During the training, Spot saved cadets on the mission from simulated physical and psychological harm, providing unprecedented protection and ultimately showing promise and capability to save lives in real-world scenarios. Cadets particularly appreciated Spot’s ability to “get visual confirmation to send up to our higher, knowing that we accomplished our mission.” Out of that shared peace of mind and sense of fulfillment, the platoon experienced firsthand how autonomous technologies can extend the scope and safety of missions like never before. One cadet’s comment reflected a broadly shared sentiment: “Thank goodness for robodog.”
That’s one narrative, one exercise, one use case. But a different story, with a very different outcome, was also unfolding within the dense woods of West Point. The raid lane described above involved a quick offensive strike on an objective. Other cadets, however, were assigned an alternative mission: an attack. Unlike the rapid raid, the attack required a wearisome hike, bushwhacking through the thicket as the cadets gradually advanced toward an enemy vehicle.
Faced with the daunting trek ahead, some cadets in the attack lane saw Spot as a potential asset, perhaps even a way to make their mission more exciting. Captivated by the robot’s sleek design and cutting-edge technology, one cadet was particularly optimistic: “He’s like the Terminator.”
But as soon as the exercise began, that enthusiasm started to wane.
Spot was too loud: “Now the bad guys know that we’re coming, so that kind of negated the usefulness of Spot.”
Spot could not navigate the terrain well enough: “It’s almost like a skittish dog. Wish it were more like a Rottweiler that would just run through and jump over stuff.”
Spot was too slow: “I was expecting we could follow the robot, but in fact, it was slower than our pace.”
In the end, Spot simply didn’t meet their needs. The robot was certainly not functioning as a teammate; it was hardly even a tool. Excitement turned to frustration, trust in the robot faded, and the cadets ultimately abandoned it altogether.
So what can we learn from the cadets’ experience with Spot on these two training lanes?
As robots continue to grow in sophistication and autonomy, they will transform the battlefield. This progress signifies more than just an increase in technology on the front lines; it fundamentally alters what it means to be part of a military team. Autonomous robots might soon prove essential to military team operations. As service members gradually modernize their perceptions of robots from being mechanical tools to acting as autonomous agents, military leaders need to better understand the role of human trust in human-machine teams. Likewise, researchers and designers need to investigate how autonomous systems can be designed to create trust affordances—perceivable features and behaviors that signal reliability and transparency to users—as well as how to implement effective trust repair strategies when trust is broken between humans and robots.
In the raid lane, building trust was a challenge. Most cadets did not trust Spot enough to use it until near the end of the exercise. These cadets had to overcome the initial barrier of getting to know and rely on an unfamiliar teammate for the first time. Others never gave Spot a single order, missing the chance to develop trust at all. For many cadets, getting to know Spot was just information overload, insurmountable under the stress of the exercise. These first impressions with the robot matter, but so does the way in which the robot is introduced and integrated into training. In this study, cadets were given free choice to utilize Spot or ignore it completely. Other training may purposefully prepare and prime service members to use the robot, optimizing the conditions for them to then develop trust.
In the attack lane, the challenge wasn’t building trust, but maintaining it. Just a few negative experiences with Spot quickly diminished its perceived capability and eroded trust overall. The designers of robots must listen to the needs of service members and the uses they envision for robots, and this must be an ongoing, iterative process of checking in with new features and evolving end user needs. One of the tenets of human-centered design is to know the user and know the task. In this field study, we’ve begun to understand the users (service members) and their tasks, which are deeply shaped by the environments in which they operate. This was only a start, and with only a small subset of potential use cases for robots with dismounted service members. Each imagined use case must be investigated with the same rigor, to make certain that the robots developed for military use can become the trusted tools and teammates needed in the field.
So, how do service members learn to trust and use a robot teammate? The question was not a theoretical exercise in conceptualizing far-off, futuristic technology; the robots are here. The answer we discovered went far beyond the experience of these cadets with this single robot. Just possessing the necessary capabilities for service members’ tasks is not enough. Robots need user-centric, receptive interfaces that meet service members where they are, doing their part to bolster the trust that is essential for effective use. This means everyone, from designers to engineers to service members, must work together to create operative robots that are worthy of trust and use by the military. Ultimately, a robot’s trustworthiness reflects the cumulative effects of design choices, performance reliability, and the deliberate integration of the system into realistic training and operational environments.
Heidi Segars received her BS from the University of North Carolina at Chapel Hill, where she double majored in neuroscience and psychology as an Honors Carolina Scholar. She currently works as a consultant with SLKone, LLC.
Ericka Rovira holds a PhD in applied experimental psychology and is a professor of engineering psychology in the Department of Behavioral Sciences and Leadership at the United States Military Academy at West Point. Her research investigates human-autonomy teaming in high-risk, complex environments, with a focus on improving trust, reliance, and team cohesion in human-robot teams, as well as understanding the role of individual differences in cognition.
The views expressed are those of the authors and do not reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.
Image credit: Erika Norton, West Point Association of Graduates