Monte Carlo Simulation: Elegant, powerful, and usually dangerously misleading.
It’s comical how drug commercials plow through lists of side effects. Pity, then, that software firms peddling Monte Carlo Simulation don’t provide equivalent warnings.
Monte Carlo Simulation (MCS) is an elegant and powerful risk analysis tool for capital projects. If, for example, we are considering new sales productivity software, we face several uncertainties such as the project’s duration and implementation costs, the actual increase in sales and margins from using the tool, and the software’s useful life before we replace it with something bigger and better.
To use MCS, we plug it into our Excel spreadsheet models and tell it which variables are uncertain, along with their range of possible outcomes (formally, their 'probability distributions'). MCS then runs our Excel model hundreds or thousands of times, altering the inputs for those variables on each run and tracking every result. In the end, the simulator generates very impressive, scientific-looking graphs and charts that provide a detailed risk profile of the project.
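To make the mechanics concrete, here is a minimal sketch of what a simulator does under the hood, written in Python rather than Excel and using the sales-software example above. Every number and distribution in it is an invented illustration, not data from any real project:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000  # number of simulation runs

# Uncertain inputs, each drawn from an assumed (purely illustrative) distribution
impl_cost   = rng.triangular(80_000, 100_000, 150_000, N)  # implementation cost ($)
margin_lift = rng.normal(0.05, 0.02, N)                    # fractional lift in annual margin
useful_life = rng.integers(3, 8, N)                        # years before replacement

base_margin = 1_000_000  # annual margin affected by the tool ($)
discount = 0.10          # discount rate

# Run the "spreadsheet model" once per draw and track every result
npv = np.array([
    -cost + sum(base_margin * lift / (1 + discount) ** t
                for t in range(1, int(years) + 1))
    for cost, lift, years in zip(impl_cost, margin_lift, useful_life)
])

# The simulator's "risk profile" is just the distribution of these results
print(f"Mean NPV:   ${npv.mean():,.0f}")
print(f"P(NPV < 0): {(npv < 0).mean():.1%}")
```

Notice that the quality of everything downstream rests entirely on those three distribution choices at the top.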
Too bad those results are likely very, very misleading.
Garbage In – Garbage Out
Whether you enter quality inputs or complete garbage, the software will still produce those impressive graphs, and very few people are properly equipped to supply the quality inputs Monte Carlo tools require.
There’s a long list of abuses and, to make matters worse, most of them cause the analysis to under-report the actual risks of the decision. Here are just a few of the most common missteps:
- Probability Estimates: MCS is highly dependent on high quality estimates of probability distributions. That’s okay when we can input hard historical data. But most of the time we are asking our experts for educated guesses of probabilities and research amply demonstrates we are abysmal with these educated guesses (See “What Happens In Vegas…”).
- Interactions: I usually see analysts list their variable inputs, then run them all through the analysis. But running too many variables, or treating related variables as independent, lets the final reports cancel out risks that actually exist. Consider a project’s schedule and its budget: if we go over schedule, we are almost certainly going over budget (see the sketch after this list).
- Playing with Results: Almost without fail, when the analysis reports higher risks than we expected, project teams assume their inputs were “too conservative” and they “fine-tune” their inputs (until the results match their gut). However, when the analysis reports low risk, most teams gratefully accept those results and move on. Once again, we bias our work to under-represent the risks.
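To see how ignoring interactions hides risk, here is a small sketch, again in Python, comparing a simulation that correlates schedule and budget overruns against a naive one that draws them independently. The correlation of 0.8 and all of the overrun numbers are assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
rho = 0.8  # assumed schedule/budget correlation (illustrative)

# Correlated standard-normal draws via Cholesky factorization
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.standard_normal((N, 2)) @ np.linalg.cholesky(cov).T
schedule_corr = 0.10 + 0.15 * z[:, 0]  # schedule overrun: mean 10%, sd 15%
budget_corr   = 0.10 + 0.20 * z[:, 1]  # budget overrun:   mean 10%, sd 20%

# Independent draws of the same marginals, as a naive analysis would use
schedule_indep = rng.normal(0.10, 0.15, N)
budget_indep   = rng.normal(0.10, 0.20, N)

total_corr  = schedule_corr + budget_corr
total_indep = schedule_indep + budget_indep

# Same means, but the correlated case has a much fatter bad tail
print(f"P(total overrun > 50%), correlated:  {(total_corr > 0.5).mean():.1%}")
print(f"P(total overrun > 50%), independent: {(total_indep > 0.5).mean():.1%}")
```

Both versions report the same average overrun, but the independent version makes a disastrous combined overrun look meaningfully less likely than it really is, which is exactly the under-reporting described above.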
Go Big Or Go Home
I have seen Monte Carlo used masterfully at companies that recognized that shelling out a couple hundred bucks for the software package was only the tip of the iceberg. They invested a great deal more in training their experts to estimate probabilities and their analysts to operate the tool skillfully.
Unfortunately, the firms selling these software packages rarely warn clients of the full extent of training required to generate reliable reports from the tool. As a result, those marvelous charts and graphs regularly mislead leaders with false assurances that the risks are well understood and the probabilities of success are high.
I agree with the premise of your argument, but these issues are not exclusive to Monte Carlo simulations.
– Any quantitative analysis of a capital project will require certain inputs and assumptions, likely provided by the subject matter expert.
– Variables need to be tied together appropriately in any analysis. You’re correct that in most cases variables are interrelated.
– All people have a natural tendency (or bias) to look for results that confirm their hypothesis. The key here is to have employees who are open to conflicting results and who treat contrarian results as an invitation to dig deeper; building this culture may be easier said than done.
For these reasons it is always important to first validate all assumptions before moving on to the output of any model. Perhaps, however, it is easier to gloss over bad inputs because of the highly analytical, polished outputs that Monte Carlo simulations provide…user beware.