If you could invite anyone to a dinner party, alive or dead, who would it be? Gandhi? Albert Einstein? Dorothy Parker? Just imagine the conversation! Walter Savage Landor did, in the 1820s, in a series of published “Imaginary Conversations.” In these dialogues, Socrates talks with Cicero, Shakespeare with Ben Jonson, and Michelangelo with Raphael. Landor (like others who worked in this form) put in the effort to represent each figure in keeping with his or her personality and philosophical stance, like a fantasy league for intellectuals.
Today, we can prompt a large language model (LLM) and, assuming that the corpus it was trained on includes work by and about the selected individuals, it can create a passable dialogue in seconds. (Thanks anyway for your labors, Walter—you could have saved yourself the effort if you had simply waited two hundred years.) The result won’t necessarily generate new insights—yet—but the models are improving daily.
It’s also possible to create a specific AI agent, based on an LLM foundation, fine-tuned with an additional corpus to react and respond to queries as a particular person (e.g., a philosopher) might. Separating the dialoguing philosophers into individual agents makes the resulting discussion more authentic because the agents are independent and reacting genuinely—that is, the dialogue is not constructed from a single source, but each response from an agent is a reaction to the other agent’s input. The difference is like the one between scripted acting and improv. The actors in a play are saying the right thing because someone else has set them up for those lines. In improv, actors are responding in the moment—the lines are not planned in advance by anyone.
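In code, that improv dynamic amounts to little more than two independent agents trading turns. The Python sketch below is illustrative only: the query_llm helper is a hypothetical stand-in for whichever persona-tuned model backs each agent, and here it returns a canned reply so the example runs end to end. The point is the shape of the exchange, in which each agent keeps its own memory and reacts only to what the other just said.

```python
from dataclasses import dataclass, field

def query_llm(system_prompt: str, history: list[str]) -> str:
    """Hypothetical stand-in for a persona-tuned model; returns a canned reply
    so the sketch runs. Replace with a call to your model of choice."""
    return f"[{system_prompt.split('.')[0]} responds to: {history[-1]!r}]"

@dataclass
class PersonaAgent:
    name: str
    persona: str                        # e.g., a philosopher's style and stances
    memory: list[str] = field(default_factory=list)

    def respond(self, incoming: str) -> str:
        # The agent sees only its own memory plus the other agent's latest line.
        self.memory.append(incoming)
        reply = query_llm(f"You are {self.name}. {self.persona}", self.memory)
        self.memory.append(reply)
        return reply

socrates = PersonaAgent("Socrates", "Question everything through dialectic.")
cicero = PersonaAgent("Cicero", "Argue with rhetorical polish and Roman pragmatism.")

line = "What is the good life?"
for _ in range(3):                      # alternate turns; no line is scripted in advance
    line = socrates.respond(line)
    print(line)
    line = cicero.respond(line)
    print(line)
```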
The appeal of inviting great minds, whether living or dead, over to dinner is not limited to the passive observation of their conversations, but extends to the prospect of being an active interlocutor with the world’s shiniest people. AI agents can provide that stimulation—you can ask Socrates how he feels about Keeping Up with the Kardashians, if you so desire. If you’re not just in it for the likes and LOLs, however, AI agents have some serious applications.
In a military context, for example, you might have an AI agent that identifies a target through drone footage, another built to track the target as it moves, yet another to predict where the target will be at a given time, and so on. These may sound like individual pieces of software, but the agents can also communicate for seamless operations, directed by a common goal, bolstered by their relative strengths while mitigating their collective weaknesses.
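Conceptually, that kind of teaming is a pipeline of specialists passing their outputs to one another. The Python sketch below is a toy, not a fielded architecture: the detection and prediction logic is stubbed out, and the agent names are assumptions made for illustration. It shows only how a detector, a tracker, and a predictor can each do one job while operating toward a common goal.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Track:
    target_id: str
    position: tuple[float, float]
    velocity: tuple[float, float]

class DetectorAgent:
    def detect(self, frame) -> Optional[tuple[float, float]]:
        # Stub: a real agent would run an object-detection model on drone footage.
        return (10.0, 20.0)

class TrackerAgent:
    def __init__(self):
        self.last: Optional[Track] = None

    def update(self, detection: tuple[float, float]) -> Track:
        # Estimate velocity from the change in position between detections.
        if self.last is None:
            track = Track("tgt-1", detection, (0.0, 0.0))
        else:
            vx = detection[0] - self.last.position[0]
            vy = detection[1] - self.last.position[1]
            track = Track(self.last.target_id, detection, (vx, vy))
        self.last = track
        return track

class PredictorAgent:
    def predict(self, track: Track, dt: float) -> tuple[float, float]:
        # Stub: constant-velocity extrapolation stands in for a learned model.
        return (track.position[0] + track.velocity[0] * dt,
                track.position[1] + track.velocity[1] * dt)

# Each agent consumes the previous agent's output; the same handoff could run
# over a message bus rather than direct calls.
detector, tracker, predictor = DetectorAgent(), TrackerAgent(), PredictorAgent()
for frame in range(3):                  # placeholder for a stream of video frames
    detection = detector.detect(frame)
    if detection is not None:
        track = tracker.update(detection)
        print("expected position in 5s:", predictor.predict(track, dt=5.0))
```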
In his 1949 book, Human Behavior and the Principle of Least Effort, George Kingsley Zipf introduces the notions of a “speaker’s economy” and a “listener’s economy.” Speakers want to preserve their resources—words—by communicating their ideas in as few words as possible. (No doubt you know people for whom you suspect this is not the case.) By this measure, speakers are more likely to use broad or vague terms to maintain mass appeal. Part of the reason politicians are prone to use words like “freedom,” according to George Orwell in his essay “Politics and the English Language,” is to weaponize the ambiguity of the term—everyone supports “freedom,” even if each person who hears the word fits it into the context of his or her own situation. In contrast, the listener, who has the job of understanding the message, wants more words—not more words spoken at them, but more words available in the language, with the aim of reducing ambiguity. It is, according to Zipf, the constant battle between these two economies that determines the size of a language.
In 2014, Ian Goodfellow devised a technique known as the generative adversarial network, or “GAN,” in response to the problem of how to create realistic synthetic data. The idea was to pit two neural networks against each other: a “generator” that would attempt to create the data—an image of a cat, for example—and a “discriminator” that would distinguish between the generated image and a photo of an actual cat. The generator network is penalized each time the discriminator correctly identifies the made-up picture as synthetic. This continues through hundreds or thousands or more cycles until the discriminator can no longer tell the difference. The process, while generally much quicker than the dynamics Zipf identified in books like Human Behavior and the Principle of Least Effort and The Psycho-Biology of Language, works along similar principles—an evolution toward an optimization point.
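The adversarial loop itself is compact. Below is a minimal PyTorch sketch using tiny networks and random stand-in data rather than cat photos (the dimensions and hyperparameters are assumed for illustration). It shows the two penalties at work: the discriminator is trained to label real and synthetic samples correctly, and the generator is trained to make it fail.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # assumed toy sizes; real GANs use images and conv nets

generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real_data = torch.randn(512, data_dim)  # placeholder for an actual dataset

for step in range(1000):  # "hundreds or thousands or more cycles"
    real = real_data[torch.randint(0, 512, (32,))]
    fake = generator(torch.randn(32, latent_dim))

    # Discriminator: rewarded for labeling real samples 1 and synthetic samples 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: penalized whenever the discriminator spots its fakes, so it is
    # trained to push the discriminator toward answering "real."
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```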
If we pit two or more goal-directed AI agents against one another over any task—for example, battle planning—the result would also trend toward a game-theoretic optimization (or at least toward a recognition of the futility of a no-win scenario). Battles would accelerate to machine speed, stand-off distance would necessarily increase as a trade-off with time, and the next thing you know we’re all paperclips.
Humans—irrational and irascible—are, as this article goes to print, still involved in warfare. We are agents, too. After all, the neural networks that are the foundation of modern AI are (at least conceptually) based on the human brain. It’s very likely that we’ll see not just homogeneous AI teams, but also heterogeneous teams of humans and AI. Computers have always played a supportive role in human endeavors. We make them do our homework. Now, though, we’re contending with an intelligence that, in many situations, surpasses our own. A survey of AI experts shows a faster path to high-level machine intelligence than to full automation of labor, implying that we’ll be working alongside AI well before it becomes our overlord. We’re in an age of near-peer AI. That’s a different relationship with AI than we have experienced previously, a new power dynamic where we might not always be the ones giving orders.
Just as humans are not great at self-assessment, we’re also not great at accepting criticism, even when it’s necessary. Criticism is hard to ignore when it comes from your boss, but if it comes from a computer, well, we can always unplug a computer. Managing the relationship between an evaluator and the person being evaluated is the subject of many a human resources training module—but there is not yet such a manual for situations where the computer is the one doing the evaluating.
Military AI agents have the potential to be useful in a number of ways, including identifying potential errors in human judgment. Humans, for example, are not great at evaluating their own plans, or even plans made by other humans. We lack objectivity. It’s not that a computer is necessarily better, but it doesn’t bring the same context to the problem as a human does. It’s not a perspective, because that would be anthropomorphizing the computer, but it does have a stance based on its programming and training. In a team dynamic, each member brings his or her own area of expertise, and the interaction between diverse team members can produce a stronger outcome; the use of AI agents, or of a team of AI agents, has already demonstrated similar benefits.
The development of AI agent teams represents a significant leap forward in artificial intelligence, mirroring the power of human collaboration while leveraging the unique strengths of AI systems. By bringing together specialized AI agents to work in concert, we are opening up new possibilities for solving complex problems, driving innovation, and enhancing decision-making across countless fields.
As this technology continues to evolve, it will be crucial to address the challenges of interoperability, both among AI agents and between AI and human agents. The future of AI lies not just in the development of more powerful individual systems, but in the creation of collaborative AI ecosystems that can tackle the most pressing challenges of military operations—even if we don’t ask Socrates to weigh in on the plan.
Thom Hawkins is a project officer for artificial intelligence and data strategy with US Army Project Manager Mission Command. He specializes in AI-enabling infrastructure and adoption of AI-driven decision aids.
The views expressed are those of the author and do not reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.
Image credit: Senior Airman Kadielle Shaw, US Air Force