As we rounded the corner into the new year, our media feeds were once again bombarded by that annual tradition: retrospectives on the past year, its lessons learned, and predictions of what’s to come in the year that follows. The latter category is sure to come with cautionary caveats—namely, that “prediction is difficult, especially about the future.” It’s an old saw—variously attributed to Niels Bohr, Yogi Berra, Mark Twain, and others—often trotted out by analysts when asked to opine on what the future holds. And as the dramatic events in Ukraine have illustrated over the past two years, it’s especially applicable to forecasts of war.
The future is fundamentally unknowable, so why bother making forecasts about war or any other phenomenon?
Forecasting, which is closely related to but distinct from prediction, allows us to unpack our assumptions of how the world works, consider and contrast outcomes driven by alternative assumptions, and reduce or at least highlight uncertainties. Despite the particularly unknowable nature of the international system and the yet-to-be-made decisions of its leading figures, there are some things we can know. Even better, we know why we know them.
To paraphrase another quote that has been misattributed to Mark Twain, it isn’t what we know that creates trouble, but what we know for sure that just ain’t so. In other words, it is important to distinguish between what we know and what is conventionally believed to be true but isn’t. We can use forecasting to do so. While forecasts fall far short of predicting the future to a tee, they can assist with long-term planning and help us better prepare for whatever futures may come.
Forecasting vs. Point Prediction
At the risk of waxing pedantic, prediction and forecasting are different, even if they are often treated as synonyms. Prediction is about making a specific statement about the future—an assertion that the United States will fight a war with China in 2025, for instance. A prediction’s emphasis is typically on the outcome in question. Forecasting is about stating a specific set of assumptions, which then lead to an outcome. Its emphasis is on understanding processes that lead to outcomes. It is a narrow but critical distinction.
When we make an accurate prediction, it may be because we have some deep understanding of how the world works and can therefore anticipate the future. It could also be that we were just lucky. Or it may be that we have a framework that works now—but, aside from Newtonian physics and similar clock-like phenomena, there is no guarantee that our framework will continue to perform well in the future.
When we make an accurate forecast, we know why it was accurate: we can trace its assumptions through a set of causal relationships to an outcome. And if our forecasts start to lose accuracy, we can interrogate our assumptions and adjust them accordingly. But more importantly, accuracy is a far less relevant indicator of a forecast’s quality than it is of a prediction’s. Much simply cannot be known about the future. In these cases, rather than seeking predictive accuracy, we forecast across a range of plausible scenarios and draw lessons from the similarities and differences in their outcomes.
Knowability
As one of us describes in a chapter in a forthcoming volume on the future of war, edited by Jeffrey Michaels and Tim Sweijs, and as we both have learned from an ongoing forecast validation exercise conducted with our colleagues, future trends can be characterized by what we and our colleague Jonathan Moyer refer to as patterns of (un)knowability. In other words, there are characteristics that can make a trend particularly difficult to forecast.
Conceptual and computational complexity, respectively, describe how complicated it is to trace or mathematically describe something. Measurability describes how easily we can quantify something. Equilibration describes whether a trend possesses an internal balancing mechanism or at least exhibits a well-understood oscillation. Events that can be predicted accurately in aggregate but not individually—say, that 50 percent of all marriages in the United States this year will eventually end in divorce, but we don’t know which marriages—are described by stochasticity. Finally, how easily a trend or event can be affected by an individual or a small group’s actions describes its tractability.
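To make stochasticity concrete, consider the minimal Python sketch below. It is purely illustrative—the case count and the 50 percent rate (borrowed from the divorce example above) are assumed parameters, not output from any formal model—but it shows why an aggregate rate can be forecast with confidence even when no individual case can be predicted.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

N = 100_000           # number of individual cases (e.g., hypothetical marriages)
AGGREGATE_RATE = 0.5  # assumed long-run rate (e.g., share eventually ending in divorce)

# Each case resolves independently at the aggregate rate.
outcomes = [random.random() < AGGREGATE_RATE for _ in range(N)]

# The aggregate is knowable: the observed share converges on the assumed rate.
print(f"Observed aggregate rate: {sum(outcomes) / N:.3f}")  # close to 0.500

# The individual case is not: knowing the rate tells us nothing about case #7.
print(f"Case 7 outcome: {outcomes[7]}")  # True or False; unpredictable in advance
```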
War is particularly complex, with its myriad actors and actions, each actor working to thwart or support others amid imperfect information. We can arbitrarily choose to classify wars as conflicts with more than one thousand battle deaths. But any arbitrary cutoff point is likely to affect our analyses, and it is not clear that an indicator like battle deaths or territory exchanged is even the appropriate measure. There is no clear supply or demand for war, or any other mechanism short of complete annihilation, that would lead toward stable equilibria. (Short-term stalemates, such as the present stalemate in Ukraine, may occur, but beyond the exhaustion of forces and political will, such as during the Korean War, they are fleeting.) The best efforts to statistically model war onset have found it to be stochastic. And the concentration of military resources into the hands of one or a few national leaders in most countries makes starting a war eminently tractable relative to, say, reversing a demographic decline.
But while war itself may be particularly unknowable, many of the structural pressures that make war more or less likely are knowable. These include relative balances of power, global economic growth, and growing populations faced with increased resource scarcity, among other “lateral pressures”—or forces pressuring a society toward war. In other words, we do not know that war will happen (a prediction), but we do understand what will make it more or less likely (a forecast).
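To put that distinction in concrete terms, the toy Python sketch below maps assumed structural pressures to a conditional likelihood rather than a yes-or-no call. Every pressure name, weight, and offset here is invented for illustration; this is not a real conflict model, only the shape of one.

```python
import math

# Hypothetical structural ("lateral") pressures, scored on an arbitrary 0-to-1 scale.
# Names and values are illustrative only.
pressures = {
    "power_balance_shift": 0.6,
    "economic_contraction": 0.3,
    "resource_scarcity": 0.4,
}
# Invented weights standing in for how strongly each pressure bears on conflict risk.
weights = {
    "power_balance_shift": 1.5,
    "economic_contraction": 0.8,
    "resource_scarcity": 1.0,
}

# A logistic link turns the weighted pressures into a likelihood between 0 and 1.
score = sum(weights[k] * v for k, v in pressures.items())
likelihood = 1 / (1 + math.exp(-(score - 2.0)))  # -2.0 is an assumed baseline offset

# The output is a forecast (a likelihood conditional on stated assumptions),
# not a prediction (a claim that war will or will not happen).
print(f"Assumed conditional likelihood of conflict: {likelihood:.2f}")
```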
Forecasting in Action
Our team has used forecasts to better understand the implications of the rise and eventual decline of China, developments intertwined with our forecasted relative US decline. We have used them to better understand postconflict recovery pathways in Yemen and Ukraine. And, among many other topics, we have used them to anticipate geopolitical shifts in the wake of Russia’s full-scale invasion of Ukraine.
While these forecasts can give us an idea of what to expect for the future, we can also act on our forecasts to make sure they don’t come true. This is referred to as a self-denying forecast or, more colloquially, a self-defeating prophecy. An example might be climate change, assuming world leaders join together and collectively make drastic reductions to carbon emissions. Another might be avoiding a US-China war: if all signs lead to war, leaders may actively seek to ease tensions, taking advantage of the tractability of whatever processes or trends they can influence to counteract the pressures leading to war. If war does not happen, predictions of war will have failed, but forecasts of war will arguably have succeeded in that they helped us understand what was leading to war and how to stop it.
Forecasting’s value, then, is in the ways it is distinguished from predictions. A prognostication that focuses primarily on a specific outcome is most likely a prediction. If it involves a phrase like “my gut tells me,” that is not indicative of the systematic analysis and clear fleshing out of assumptions that is central to forecasting. It may be an educated guess, but it’s a guess.
There are fundamental questions that offer an excellent starting point from which to build a forecasting model. What are the factors you have a good handle on? What are the factors you don’t have a good handle on? How might the former combine with the latter under alternative assumptions? What outcomes would that lead to? And what would be similar and different across those outcomes? Addressing these questions will not make prediction about the future any less difficult, but it will make forecasting less so.
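As a toy instance of that workflow (a sketch under invented numbers and scenario labels, not the International Futures model or any formal tool), the snippet below combines a factor we have a good handle on—a known starting value—with one we do not—an uncertain growth rate—runs alternative scenarios, and compares what is similar and different across their outcomes.

```python
# A minimal scenario-comparison sketch. All values are hypothetical:
# the point is the structure (known factor + alternative assumptions
# -> a range of outcomes), not the numbers themselves.

BASELINE = 100.0  # factor we have a good handle on (e.g., a current index value)
YEARS = 10

# Factor we don't have a good handle on: alternative annual growth assumptions.
scenarios = {
    "pessimistic": -0.01,
    "baseline": 0.02,
    "optimistic": 0.04,
}

def project(start: float, rate: float, years: int) -> float:
    """Compound a starting value at an assumed annual rate."""
    return start * (1 + rate) ** years

outcomes = {name: project(BASELINE, rate, YEARS) for name, rate in scenarios.items()}

for name, value in outcomes.items():
    print(f"{name:>11}: {value:6.1f}")

# The spread across scenarios brackets our uncertainty, even though
# no single number here is a prediction.
spread = max(outcomes.values()) - min(outcomes.values())
print(f"Scenario spread after {YEARS} years: {spread:.1f}")
```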
Collin Meisel is the associate director of geopolitical analysis at the University of Denver’s Frederick S. Pardee Center for International Futures. He is also a geopolitics and modeling expert at The Hague Centre for Strategic Studies and a nonresident fellow with the Henry L. Stimson Center’s Strategic Foresight Hub.
Caleb Petry is a model development research associate at the University of Denver’s Frederick S. Pardee Center for International Futures.
The views expressed are those of the authors and do not reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.
Image credit: Cpl. Patrick Crosley, US Marine Corps