Just how reliable are our business case risk assessments?
Las Vegas bets the house on knowing the odds. The probability of rolling a seven at the craps table is 1 in 6; the probability of being dealt a royal flush is 1 in 649,740. In our jobs we deal with uncertainty and probabilities every day, which must make us experts at estimating probabilities, right?
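Those casino odds aren't estimates; they fall straight out of counting equally likely outcomes. For the curious, here's a minimal sanity check in Python of the two figures above:

```python
from math import comb

# Craps: 6 of the 36 equally likely two-dice outcomes sum to seven.
p_seven = 6 / 36

# Poker: 4 royal flushes among the comb(52, 5) = 2,598,960 possible
# five-card hands.
p_royal_flush = 4 / comb(52, 5)

print(f"Rolling a seven: 1 in {1 / p_seven:.0f}")          # 1 in 6
print(f"Royal flush:     1 in {1 / p_royal_flush:,.0f}")   # 1 in 649,740
```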
GUESS AGAIN
With objective data we can calculate probabilities with great accuracy. But when we rely on our own experience, all the science says that we are bad (actually very bad) at estimating probabilities.
There’s a simple quiz demonstrating this. It’s ten questions such as “How old was Martin Luther King Jr. when he died?” Rather than guessing the exact answer, participants give a 90% confidence interval: a range within which they are 90% certain the correct answer falls (leaving only a 10% chance the correct answer lies outside the range). So, for Martin Luther King Jr.’s age at death one might give a range of 45 to 55. If he was 49 when he died, the answer counts as correct; if he was 56, it does not.
Interestingly, this is not a test of knowledge. It is actually a test of overconfidence… and the results are jaw-dropping. If we were good at understanding the limits of our knowledge and translating those limits into probabilities, we would give ourselves ranges wide enough that each one captured the true answer 90% of the time, and we would score an average of 9 out of 10.
How do people usually do? Let’s look at managers, people who regularly make decisions under uncertainty. Fewer than 2% of managers get 9 or 10 answers correct, a horrible track record. Why? Because we are overconfident: we think we have a better handle on the topic than we actually do. On top of that, our brains are terrible at grasping what being 90% sure truly means. As a result we make our ranges far too narrow, which drives our error rate up considerably.
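To see how damning that sub-2% figure is, here’s a back-of-the-envelope check (a sketch assuming the ten questions are independent): a genuinely calibrated test-taker, whose 90% ranges capture the answer 90% of the time, would score 9 or 10 roughly three-quarters of the time. For fewer than 2% of managers to score that well, their actual hit rate must be closer to 55% than the 90% they claim.

```python
from math import comb

# Model the quiz as 10 independent questions, where each "hit" (the
# true answer lands inside the stated range) occurs with probability p.
def p_score_at_least_9(p: float) -> float:
    """P(9 or 10 hits out of 10) under a Binomial(10, p) model."""
    return sum(comb(10, k) * p**k * (1 - p)**(10 - k) for k in (9, 10))

print(f"Calibrated    (90% hit rate): {p_score_at_least_9(0.90):.1%}")  # ~73.6%
print(f"Overconfident (55% hit rate): {p_score_at_least_9(0.55):.1%}")  # ~2.3%
```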
An experienced Project Manager heard me talking about all this and exclaimed, “Oh! Do me!” At the end of the questions he admitted that, aware of my warnings, he had used much wider ranges than he otherwise would have. We then scored his answers and he still had only 6 correct!
One might think our scores would improve if we moved from general trivia to our own area of expertise (IT, our industry, medicine, etc.). Wrong again. That same overconfidence simply makes us narrow our ranges even further (because now we’re into the stuff we “know”), and we end up scoring about the same.
So, what are the implications for decision-makers? Be very skeptical of subjective probability estimates, even from experts. This is particularly true for projects highly vulnerable to uncertain outcomes (e.g. cost overruns, operating underperformance, and other negative events).
The Good News
While our natural ability to estimate probabilities is woefully inadequate, it is a skill we can develop with training, practice, and rigorous feedback. Unfortunately, few organizations invest the time and money to train their people. After all, they’re already the experts… Right?
P.S. If you’re curious about that test (even after all these warnings), you can take it here.