As the Pentagon pushes for more AI-powered tools, a lot of ink is being spilled on whether such tools can be trusted. Too often missing from the conversation, however, is something we already know how to do: build organizational trust. With a little reimagining, that knowledge can show us how to build trust in human-machine teams. Practitioners and academics have written at length about how critical trust is to organizations, pulling examples from the military, sports, and business to unlock the secrets of high-performing teams. One popular framework coming out of this research describes the essential ingredients humans require to build trust as competence, character, and caring.

But what happens when part of that team isn’t human? This framework seems an odd way to describe a relational exchange between a human and a technology. As we sit here typing away on a computer thousands of times more powerful than the one used to put a man on the moon, the question of whether to trust this laptop never enters our minds. The framework’s utility becomes clearer with the addition of one more appropriately alliterative attribute: communication. By framing each of these four attributes as a question, we can get to the essence of what humans require to trust a technology. Can you do what you say (competence)? Do you do what you say (character)? What are your priorities (caring)? And how do you share understanding (communication)?

Before we explore each of the four questions, however, it’s important to think about who is doing the trusting. Despite the ease with which humans anthropomorphize artificial intelligence, machines accept input and direction without the slightest hint of incredulity, aside from access controls and data validation. Given this implicit trust from our machine counterparts, the burden of building trust lies with the humans. And not just those working directly with the technology: the human-machine team must also be trusted by its leadership, by adjacent units, and by those responsible for integrating it into larger organizations. Whether the technology is a new search algorithm or an autonomous drone, it is this group, extending far beyond the equipment operator, that must gain and maintain trust in the team for it to succeed.

Competence: Can You Do What You Say?

The great thing about well-designed teams is that they are purpose-built to amplify the strengths and mitigate the weaknesses of each individual. Team members are assigned the tasks that best fit their capabilities so that the team is collectively strong everywhere. Human-machine teams are no different. Machines excel at computation, executing repetitive rules, and analyzing large datasets for useful correlations. Data analytics systems have no problem calculating a linear regression on millions of points or reducing thousands of Excel entries to a statistical failure rate for a piece of equipment. They are also exceedingly effective at maintaining focus on a particular target and meticulously recording everything they see. Unmanned reconnaissance systems can loiter over a target area for days, capturing, recording, and storing everything they sense. Conversely, humans are notoriously bad at repetitive computation and at sifting through large amounts of data, and they are equally hard-pressed to stare at a target without losing focus or becoming exhausted.
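To make this division of labor concrete, here is a minimal sketch, in Python, of the kind of rote computation a machine teammate handles without complaint: reducing a stack of maintenance records to a fleet failure rate and a simple trend line. The records and figures are hypothetical, included purely for illustration.

```python
# A minimal, hypothetical illustration of rote machine work: reducing
# maintenance records to a failure rate and a least-squares trend line.

records = [
    {"hours": 120, "failures": 1},
    {"hours": 310, "failures": 2},
    {"hours": 450, "failures": 2},
    {"hours": 700, "failures": 4},
    {"hours": 980, "failures": 5},
]

total_hours = sum(r["hours"] for r in records)
total_failures = sum(r["failures"] for r in records)
failure_rate = total_failures / total_hours  # failures per operating hour
print(f"Fleet failure rate: {failure_rate:.4f} failures per hour")

# Ordinary least-squares fit of failures against operating hours.
n = len(records)
mean_x = total_hours / n
mean_y = total_failures / n
sxx = sum((r["hours"] - mean_x) ** 2 for r in records)
sxy = sum((r["hours"] - mean_x) * (r["failures"] - mean_y) for r in records)
slope = sxy / sxx
intercept = mean_y - slope * mean_x
print(f"Trend: failures = {slope:.4f} * hours + {intercept:.2f}")
```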

While humans struggle in these areas, we continually outperform machines at thinking creatively, especially in the realms of causal reasoning and counterfactual thinking. We have a natural gift for asking questions, especially as a means of hypothesizing about what we see in our environment. Humans also have a natural ability to identify exceptional information—the thing that is different or out of place in the environment. To watch these innate abilities in action, take a four-year-old to the zoo. These skills will be on full display as the child notices and questions everything, working overtime to make sense of all of the novel observations. Put together, these skills make humans well suited to exercising judgment in volatile or uncertain situations, where they have to come up with new plans on limited data—the opposite of our mechanical counterparts.

Generating trust on the team through competence is about improving each team member’s strengths rather than shoring up their weaknesses, trusting that a teammate will cover the difference. Technological development should remain focused on the machine’s strengths: faster, more effective computing algorithms, longer operating times, and greater efficiency in data processing, transport, and storage. Humans can then concentrate on improving their creativity, their ability to identify exceptional information, and their ethical decision-making. When all team members show up prepared to play the part that best suits their capabilities, demonstrating competence becomes a simple matter. Conversely, when they are asked to perform outside their positions, the likelihood of failure increases significantly.

Character: Do You Do What You Say?

Once human-machine teams are task-organized around what they do well, the next question is whether they actually execute their assigned roles and responsibilities. For machines, this question centers on availability and reliability. Combat is an unforgiving place filled with Clausewitzian friction and adversaries actively working against you. As the information environment becomes more congested and contested, building resiliency and availability into our advanced technology becomes more critical. There is nothing worse than turning to a teammate that isn’t there.

Only marginally better than complete absence is a teammate that is unreliable. Artificial intelligence and other advanced technologies are incredibly complicated, and their learning models end up only as good as the training data available to them. While humans can make educated guesses about the differences between operating in woodland and desert environments, recognizing that they may not have all of the data available, AI systems cannot. An AI will respond based on the information it has, without any indication that it has been asked to operate outside of its limits. This makes it susceptible to errors accidentally introduced by the bias inherent in nondiverse training data. Couple this with the “black box” nature of these systems—meaning that they cannot explain how they arrive at their conclusions—and it becomes hard for their human counterparts to gauge whether a machine is operating on the right information. Finding ways to improve the transparency of these processes, or at least to alert an operator when a result may be unreliable, would make it easier for people to trust that the machine will either produce a usable result or flag one that should be discarded.
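As a rough illustration of that last point, the sketch below shows one simple way a system could flag low-confidence outputs for human review instead of passing them along silently. The threshold and the triage_prediction routine are hypothetical, not drawn from any fielded system; a real design would pair this with more rigorous out-of-distribution checks.

```python
# A hypothetical sketch of "signal when unsure": accept a model output only
# when its confidence clears a bar, otherwise route it to a human operator.

CONFIDENCE_THRESHOLD = 0.85  # illustrative value, not from any fielded system

def triage_prediction(label: str, confidence: float) -> str:
    """Accept a prediction above the threshold; flag anything else for review."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"ACCEPT: {label} (confidence {confidence:.2f})"
    return f"REVIEW: {label} flagged for the operator (confidence {confidence:.2f})"

print(triage_prediction("vehicle", 0.97))  # accepted and passed along
print(triage_prediction("vehicle", 0.54))  # flagged for a human teammate
```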

On the other side of the screen, humans become good teammates by being digitally disciplined. Like any other form of discipline, this means controlling habits and behaviors while at work. Digital discipline takes the form of understanding the systems in use and taking appropriate action based on what they show. This means both doing things right—applying technology in a way that gets its best performance—and doing the right things—applying technology in a way that accomplishes the mission.

Failures in discipline take many forms, but two of the most common are complacency and task creep. We’ve already noted that repetitive, mundane, and long-duration tasks are better suited to machines, and complacency is one of the biggest reasons why. Over time, humans naturally lose focus on these tasks, resulting in more errors and unnecessary risk. Preventing complacency from affecting the mission falls both on the individual performing the tasks, who must remain engaged, and on those designing and leading the team. The aviation industry has strict guidelines around how long pilots are allowed to work and what they are allowed to do during critical phases of flight to combat fatigue-induced errors. The industry has also designed avionics to draw attention to specific information when necessary. As we look to enhance the way we design human-machine teams, we need to incorporate the lessons learned from attention-heavy industries to ensure we help people stave off inattention and complacency.

Like complacency, task creep signals a failure in discipline: tasks that should be aligned to a specific human or machine get shifted onto a different member of the team. This shift either overloads certain parts of the team or hands them tasks for which they are ill-suited. Both scenarios lead to a decrease in performance and the introduction of unnecessary risk into the team’s operations. AI prediction engines are a common example of this phenomenon. The insights AI produces come from analyzing data from the past. While they might help identify trends, they are not reliable for answering “what if” questions or predicting future outcomes. Despite this known weakness, there is no shortage of people trying to hand this task to the latest algorithm in the name of data-driven decision-making. Using data to make decisions is great. Having that same tool make the decisions for you is not. While reassigning tasks naturally occurs when teams reorganize or the situation evolves, it should be a deliberate process, based on continual evaluation, to ensure the team is still doing things right.
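One lightweight discipline against this kind of task creep is a guardrail that stops a model from quietly answering questions it was never trained to answer. The sketch below is a hypothetical example: a trend-based estimate is returned only for inputs inside the historical range the model has seen, and anything beyond that range is pushed back to a human for judgment. The range, slope, and intercept values are invented for illustration.

```python
# A hypothetical guardrail against task creep: interpolate within the data
# the model was trained on, but refuse to extrapolate beyond it.

TRAIN_MIN, TRAIN_MAX = 100, 1000  # operating hours covered by historical data

def predict_failures(hours: float, slope: float = 0.005, intercept: float = 0.2) -> float:
    """Return a trend-based estimate only inside the historical range."""
    if not (TRAIN_MIN <= hours <= TRAIN_MAX):
        raise ValueError(
            f"{hours} hours is outside the historical range "
            f"[{TRAIN_MIN}, {TRAIN_MAX}]; extrapolating here is a human judgment call."
        )
    return slope * hours + intercept

print(predict_failures(500))   # within the data: a reasonable use of the tool
# predict_failures(5000)       # beyond the data: raises, handing the question back to a person
```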

Caring: What Are Your Priorities?

Until they gain sentience, machines don’t actually care about anything, but they are programmed with some manner of prioritization. For example, the tit-for-tat program, written to play the iterated prisoner’s dilemma in Robert Axelrod’s famous 1980 computer tournament, prioritized cooperation in its winning strategy. Similarly, Isaac Asimov’s famous three laws of robotics outline a fictional starting point for a moral code for artificial intelligence. As humans continue to wrestle with the place of machines and artificial intelligence in warfare and in society generally, these sorts of coded priorities will remain key to the level of trust we place in autonomous systems. Part of this trust will come down to the amount of transparency built into the system, which means developers must balance achieving desired outcomes with being able to explain how the algorithm arrived at its conclusions.
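Tit-for-tat itself is a useful reminder of how simple a coded priority can be. The sketch below captures the strategy’s logic, not Axelrod’s original tournament code: cooperate on the first move, then mirror whatever the other player did last.

```python
# A minimal sketch of tit-for-tat's coded priority: open with cooperation,
# then simply repeat the opponent's previous move.

def tit_for_tat(opponent_history: list) -> str:
    """Return 'C' (cooperate) or 'D' (defect) given the opponent's past moves."""
    if not opponent_history:      # first round: lead with cooperation
        return "C"
    return opponent_history[-1]   # afterwards: echo the opponent's last move

# A short exchange against a player who defects once.
opponent_moves = ["C", "C", "D", "C"]
our_moves = [tit_for_tat(opponent_moves[:i]) for i in range(len(opponent_moves) + 1)]
print(our_moves)  # ['C', 'C', 'C', 'D', 'C']: retaliates once, then forgives
```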

This need for transparency comes down to the fact that it is still the humans who retain moral and ethical responsibility for what the team does. Somewhere in that balance lies the level of meaningful human control people are willing to accept for the missions they give their teams. To bolster the effectiveness of that control, virtue-based ethical decision-making should remain central to the training and education we provide our human workforce and leaders. Armed with a strong moral compass and given clear guidance and intent for the mission, leaders can be far more confident not only that the mission will be carried out, but that it will be done in a way that supports the bigger picture.

Communication: How Do You Share Understanding?

The final piece of the puzzle of building trust within human-machine teams is the way the team shares understanding. The fundamental differences between humans and machines mean that there is a large gap in native understanding between the two, rooted not only in the lack of a common language but also in the different ways each processes information. Data and information lie on a spectrum, with human-readable at one end and machine-readable at the other. At the dawn of the computing age, humans had to cross the entire divide, translating every input into 1s and 0s and then translating the results back once the machine had completed its task. The creation of coding languages partially bridged this gap, extending access to more and more people. Over the past few years, machines have begun to meet humans partway through large language models that can accept and return recognizable human text.

Closing the gap with their artificial counterparts requires people to become digitally literate—educating themselves on how to provide inputs and interpret outputs as they work with these systems to solve problems and accomplish the mission. Knowing how the technology functions and how to employ it to the greatest effect does not require a data science degree, any more than a driver needs to be a mechanical engineer to take a road trip. Instead, the driver needs to know how to drive the car and navigate the road system in concert with other motorists to get to the intended destination. Similarly, users of these advanced systems—and those charged with making responsible decisions about their use—should have a basic understanding of the key concepts and how they function. Courses such as West Point’s Digital Literacy 101 are a fantastic starting point to help leaders learn how to be intelligent consumers of data.

Every day, more advanced technology makes its way into the hands of users around the world, and they are working with these systems to improve the quantity and quality of what they can accomplish. To fully unlock the potential of human-machine teams, humans and machines have to learn how to work together effectively. Using this framework—the 4 Cs—as a baseline for developing this relationship will help people continue to find the best ways to build on the strengths of their teams as we continue to explore what is possible.

Lieutenant Colonel Tom Gaines is currently assigned as the ACoS G6 for 1st Special Forces Command (Airborne). His writing on human creativity, decision-making, and technology can be found in Harvard Business Review and at West Point’s Modern War Institute.

Amanda Mercier is the chief technology officer for 1st Special Forces Command (Airborne). She is an expert in data analytics and machine learning.

The views expressed are those of the authors and do not reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.

Image credit: Technical Sgt. Luke R Sturm, US Air National Guard