A few weeks ago in my corporate finance class at West Point, I hosted a guest speaker from a Cambodian private equity firm. The lesson focused on valuations and risk. Riskiness in finance relates to the uncertainty of a prediction, and it’s measured with a discount rate: the higher the rate, the riskier the estimate and the less a predicted cash flow is worth today relative to one carrying less risk. We analyzed an opportunity to invest in a Cambodian mango plantation, and we were shocked when the guest shared that the standard discount rate used for these types of “frontier” investments in Cambodia was 35 percent. To put this into context, analysts estimate that Tesla’s discount rate should be about 12 percent. The guest’s comments got us thinking. What are the implications when we have really high discount rates?
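To make the arithmetic concrete, here is a minimal sketch in Python (my own illustration with invented figures, not the class’s model) of how a discount rate turns a predicted future cash flow into present value:

```python
# A back-of-the-envelope sketch of the mechanics: PV = CF / (1 + r)**t
def present_value(cash_flow: float, rate: float, years: int) -> float:
    """Discount a cash flow expected `years` from now back to today's dollars."""
    return cash_flow / (1 + rate) ** years

# $100 expected five years from now, under the two rates above:
print(present_value(100, 0.12, 5))  # ~56.74 at a Tesla-like 12 percent
print(present_value(100, 0.35, 5))  # ~22.30 at the 35 percent frontier rate
```

The same $100 prediction loses more than half its value the moment the higher rate is assumed.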
When you assume a high level of risk for a project, early cash flows are especially important. You’ll lean towards projects that generate cash quickly. The short term really matters.
Consequently, if you overstate risk, you’ll overstate the importance of short-term profits. Yet many companies experience short-term losses before realizing long-term gains. Amazon’s recent streak of profitable quarters followed years of losses and slim margins; only three years ago, Amazon CEO Jeff Bezos was dubbed the “prophet of no profit.” If you overstate risk when valuing companies that take a little longer to reach profitability, you understate their potential.
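A hedged comparison makes the distortion visible. Suppose (my invented numbers, not drawn from any real valuation) two projects return the same total cash, but one earns early and one earns late, like the Amazon trajectory above:

```python
def npv(cash_flows, rate):
    """Net present value: each year's cash flow discounted back to today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

quick_cash = [60, 30, 10, 0, 0]   # profits arrive early
slow_build = [0, 0, 10, 30, 60]   # same total cash, arriving late

for rate in (0.12, 0.35):
    print(f"{rate:.0%}: quick={npv(quick_cash, rate):.1f}, slow={npv(slow_build, rate):.1f}")
# 12%: quick=84.6, slow=60.2  -> the slow builder keeps ~71% of the quick project's value
# 35%: quick=65.0, slow=26.5  -> the slow builder keeps only ~41%
```

Inflating the rate from 12 to 35 percent cuts the slow builder’s relative value nearly in half, which is exactly how an overstated discount rate quietly buries long-gestation bets.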
This is a simple but powerful idea that extends far beyond the realm of finance. What happens when our discount rates are too high—when we overstate risk—as military leaders? During my first deployment to Afghanistan in 2009, then-commander of ISAF Gen. Stanley McChrystal issued a Tactical Directive. In it, he cautioned troops to “avoid the trap of winning tactical victories—but suffering strategic defeats.” In other words, take a longer view, because if we worry too much about immediate battlefield victories, we might actually move ourselves away from our strategic goals. If an insurgent hid in a building and fired a weapon at a coalition helicopter, the pilot could respond in several ways. The pilot might shoot back and destroy the building and the insurgent—and anybody else inside it. But if the pilot could safely evade and develop the situation further, was it really worth shooting back at the insurgent in the building? Might it be better to wait and confirm civilians were out of harm’s way, particularly in a counterinsurgency that necessitates popular support for coalition success? This was the essence of McChrystal’s directive, pushing us towards waiting and confirming. It was a call for recalibrating the way we thought about risk so we could take a longer view.
What happens to an organization’s culture when things are really risky? In the military, higher risk requires more scrutiny and control. As a 28-year-old company commander, I blessed off on missions that were lower risk, but higher-risk missions were approved by higher-ranking officers. When the risk is appropriately gauged, this makes sense: the vetting process mitigates risk through resource allocation and organizational awareness. But what happens in an organization that overstates risk? The organization begins to overemphasize the near term, and management gravitates towards those near-term decisions. This means more processes, unnecessary scrutiny, and slower reaction times. Micromanagement and stove-piping are directly linked to inflated discount rates.
These cultural consequences from overstating risk directly conflict with the philosophy of mission command, the principle that enables “disciplined initiative within the commander’s intent to empower agile and adaptive leaders,” officially enshrined in Army doctrine since 2012. In ambiguous, dynamic operating environments, mission command flattens organizational structures so teams can respond with nuance and speed. Mission command is about fostering a culture of trust and empowerment so subordinate leaders can exercise initiative and creativity to meet their commander’s intent. But there are pitfalls. In a recent opinion piece, for instance, Thomas Ricks highlighted a 1992 Military Review article by Lt. Col. James Dubik, in which Dubik cautioned against letting a commander’s intent turn into a detailed concept of operations; intent should leave space for subordinate action. Ricks described the commander’s intent as a “mental guardrail” that can help steer a subordinate commander’s initiative. But what happens to this guardrail if an organization overstates risk? It narrows. We begin prescribing action instead of guidance, and we lose organizational responsiveness and precision. Overstating risk makes us clunky and less agile. It breaks down mission command.
Understating risk can be costly, dangerous, and even deadly. But we also need to consider the consequences of overstating risk. Just this week, Army Chief of Staff Gen. Mark Milley alluded to these consequences, pushing back against the “overly centralized, overly bureaucratic and overly risk averse” mindset that’s spreading throughout Army leadership. If our discount rates are too high, we’ll chase short-term rewards, micromanage personnel and processes, and move away from a philosophy of mission command.
With these consequences in mind, how can we sharpen our discount rates so they more accurately reflect reality? How can we lower the likelihood of overstating risk? In my finance class, we get closer to reality by scrutinizing the assumptions and information that we use to calculate the discount rate. The same scrutiny is required to gauge risk accurately in the military. As leaders, we must think hard about the assumptions and information that factor into our daily risk assessments—our guesses about what somebody else is thinking, how others might react to our decisions, or how things have played out in the past. If we’re willing to gauge risk with nuance and care, we can avoid micromanagement and excessive fixation on the near term. If not, we will fail to operate with the flexibility required by the complexity of the modern battlefield.
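One way to put that scrutiny into practice, sketched here with the same invented cash flows as above, is a simple sensitivity sweep: reprice the identical forecast across a band of plausible discount rates and watch how much the valuation swings.

```python
cash_flows = [0, 0, 10, 30, 60]  # the hypothetical slow-build project again

def npv(cash_flows, rate):
    """Net present value of a stream of annual cash flows."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

for rate in (0.10, 0.15, 0.20, 0.25, 0.30, 0.35):
    print(f"{rate:.0%}: {npv(cash_flows, rate):.1f}")
# 10%: 65.3 ... 35%: 26.5 -- a roughly 2.5x swing in value from the rate assumption alone
```

If a single assumption can move the answer by a factor of about 2.5, that assumption deserves at least as much interrogation as the forecast itself.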
Image credit: Maj. Amabilia Payen, US Army
If you zoom out on the argument, one could argue that the risks of Afghanistan and Iraq (in 2003) were never worth the reward. In order to disrupt the Taliban and Al Qaeda in Afghanistan, the US didn’t have to reestablish an effective government or wage a proxy counterinsurgency for an Afghan government that was, perhaps, never really a legitimate nation-state. The US return in both locations hasn’t been, nor is it ever likely to be, worth the investment.
Excellent article! You practice what you preach, and our company was significantly more operationally effective deployed because of it. I hope more leaders in our military hear this message. As a well-equipped, well-trained soldier with a true understanding of my commander’s intent, and knowing they believe in my abilities, I would move heaven and earth to make the mission happen.
Fear is not real. Danger is real. It is very important to distinguish between the two.
Historically, the tilt toward overstating risk comes as a reaction to situations where things went bad and a pursuit of how to prevent them in the future. The question itself is biased and leads to where we are.
The true question should be: why did situation X go bad?
When one breaks the situation down into its details, it becomes possible to understand all the factors that contributed to the failure.
Then it is possible to see which factors/variables were not part of the initial risk assessment but should be in the future (making the Orient step of OODA better for all leaders).
Decisions should be analyzed with and without the newfound factors/variables to see if they would have been sufficient to change the outcome. If not, then it is possible that some change in tactics needs to be made for future reference.
Finally, there are temporal factors that need to be called out. Bad things happen, and there is nothing you can do about it.
These lessons are far more important in the long run and would prevent an entire class of bad outcomes. That makes them far more effective than preventive analysis and the usual change of process that comes with it.