Have you tried a Monte Carlo Simulation?
Most project managers estimate schedules the same way: pick a number, add a buffer, and hope for the best.
The problem isn’t laziness. It’s that a single number can’t capture uncertainty. Your development phase might take 15 days if everything clicks, or 40 days if your lead developer gets pulled and scope creeps. One estimate hides that entire range.
Monte Carlo simulation solves this. Instead of asking “when will we finish?”, it asks “what are the chances we finish by a given date?”… and it answers that question by running your project thousands of times and showing you the full distribution of outcomes.
In this guide I’ll walk you through exactly how it works, what the outputs mean, and how to use it in real project conversations. There’s also a free simulator at the bottom so you can try it with your own numbers before you close this tab.
What Monte Carlo Simulation Actually Is
Monte Carlo simulation is a quantitative risk analysis technique. That’s the PMI definition — and it’s accurate but not very helpful without context.
Here’s the plain-English version.
You give the tool a range of possible durations for each phase of your project — not one number, but three: optimistic, most likely, and pessimistic. The simulator then runs your project thousands of times, each time randomly sampling a duration from within those ranges. It records the result of every run and builds a picture of all the possible outcomes.
The result isn’t one finish date. It’s a probability distribution of finish dates. You can see not just when you’ll likely finish, but how confident you can be in that date.
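Under the hood, the mechanism is only a few lines. Here's a minimal Python sketch of the idea (the two estimate triples are illustrative, not from any real project); note that Python's built-in `random.triangular` takes its arguments as `(low, high, mode)`:

```python
import random

# Illustrative three-point estimates in days: (optimistic, most likely, pessimistic)
phases = [
    (5, 8, 14),
    (12, 18, 30),
]

def simulate_once(phases):
    """One 'run' of the project: sample each phase from its triangular range."""
    # random.triangular(low, high, mode) returns a float in [low, high]
    return sum(random.triangular(lo, hi, ml) for lo, ml, hi in phases)

# Run the project thousands of times and sort the finish durations
runs = sorted(simulate_once(phases) for _ in range(10_000))

p50 = runs[len(runs) // 2]        # half the runs finished by this duration
p80 = runs[int(len(runs) * 0.8)]  # 80% of runs finished by this duration
```

Sorting the run results is all it takes to read off any confidence level: the value 80% of the way through the sorted list is your P80.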
That’s a fundamentally different, and more honest, way to communicate schedule.
Why One Number Is Always Wrong
Here’s a thought experiment.
You have four project phases. Each one has a most likely duration of 10 days. You add them up: 40 days total. Simple.
But for your project to finish in exactly 40 days, every single phase has to finish in exactly 10 days. No delays. No surprises. Everything is perfect across every phase simultaneously.
That almost never happens.
The probability mathematics work against you in two ways. First, duration estimates are right-skewed: the floor is rigid (a phase can only finish so fast) but the ceiling isn't, so bad days compound in ways good days can't offset. Second, when parallel paths converge on a milestone, the latest path wins every time, an effect schedulers call merge bias. Chain enough uncertain work together and both effects stack up.
The result is that your most likely total is almost always optimistic. Not because you’re bad at estimating, but because probability is working against you.
Monte Carlo shows you this. The histogram it produces is almost always right-skewed… a hard left edge where the optimistic scenarios cluster, and a long right tail where the bad scenarios drag the distribution out. That shape is not a bug. It’s the truth about your project.
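You can watch the skew emerge with a few lines of Python. The four identical phases below are hypothetical (best case 8, most likely 10, worst case 16); summing them pushes both the mean and the median of the total past the 40-day "sum of most likely" plan:

```python
import random
import statistics

# Four identical right-skewed phases: optimistic 8, most likely 10, pessimistic 16
totals = [
    sum(random.triangular(8, 16, 10) for _ in range(4))  # args: (low, high, mode)
    for _ in range(20_000)
]

# The naive plan says 4 x 10 = 40 days; the simulated center sits well above it
mean_total = statistics.mean(totals)
median_total = statistics.median(totals)
```

Each phase averages (8 + 10 + 16) / 3 ≈ 11.3 days, not 10, which is why the total drifts past 40 even before any phase "goes wrong."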
How to Enter Your Estimates (The Right Way)
The most common mistake first-time users make is listing every task in their project. Resist that instinct.
For Monte Carlo to be useful, especially for learning, you want three to five phases. Think of how your project naturally breaks into chunks with different risk profiles:
- A relatively predictable planning phase
- An execution phase with high uncertainty
- A testing or review phase that depends on what execution produced
- A launch or handoff phase that's usually short but can slip
For each phase, you enter three numbers:
Optimistic — Best case. Everything goes right. No surprises, full team availability, clear requirements, no rework. Be honest here. This is the floor, not the goal.
Most Likely — Realistic middle ground. Normal day, normal hiccups. The outcome you’d bet on if you had to pick one number.
Pessimistic — Bad but not catastrophic. Key resource gets pulled. A vendor delivers late. Requirements shift mid-phase. This is the scenario that happens maybe once every ten projects in that phase: painful, not fatal. If your pessimistic is three times your optimistic, you're probably calibrated correctly for a high-uncertainty phase.
Here’s what a realistic input looks like for a software rollout:
| Phase | Optimistic | Most Likely | Pessimistic |
|---|---|---|---|
| Requirements & Planning | 5 days | 8 days | 14 days |
| Development | 12 days | 18 days | 30 days |
| Testing & UAT | 4 days | 7 days | 12 days |
| Go-Live | 2 days | 3 days | 5 days |
| Risk Buffer | — | 7 days (20%) | — |
Notice the Development phase has the widest range… that’s intentional. Development is where uncertainty lives. The simulation will reflect that, and your results will show you exactly how much that phase is driving your overall schedule risk.
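If you want to sanity-check the table before opening the simulator, a rough Python version looks like this (same numbers as above; the fixed 7-day buffer is simply added to every run):

```python
import random

# Three-point estimates from the rollout table, in days
phases = {
    "Requirements & Planning": (5, 8, 14),
    "Development":             (12, 18, 30),
    "Testing & UAT":           (4, 7, 12),
    "Go-Live":                 (2, 3, 5),
}
BUFFER = 7  # about 20% of the 36-day most-likely total

runs = sorted(
    sum(random.triangular(lo, hi, ml) for lo, ml, hi in phases.values()) + BUFFER
    for _ in range(10_000)
)

p50 = runs[5_000]  # median finish, buffer included
p80 = runs[8_000]  # the date you'd commit to
```

Because Development's range is so wide, it contributes most of the spread between P50 and P80.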
What the Risk Buffer Is (And Why It's Not Just Padding)
PMI calls it contingency reserve. Most PMs call it a buffer. Either way, it’s extra time you add to absorb uncertainty that isn’t captured in any specific phase estimate.
Think of it like leaving 15 minutes early for an important meeting. It’s not padding, it’s insurance against the things you can’t predict.
A reasonable starting point:
- Low complexity project: 10% of total most likely duration
- Medium complexity: 15-20%
- High complexity or lots of unknowns: 25-30%
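As a quick worked example using the rollout numbers from earlier, the buffer is just a percentage of the most-likely total:

```python
# Most-likely durations from the rollout example: 8 + 18 + 7 + 3 = 36 days
most_likely_total = 8 + 18 + 7 + 3

# Rule-of-thumb contingency by complexity (starting points, not doctrine)
buffers = {pct: most_likely_total * pct for pct in (0.10, 0.20, 0.30)}
# roughly 3.6, 7.2 (the table's ~7 days), and 10.8 days respectively
```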
The key distinction PMI makes, and that the PMI-RMP® exam tests, is that contingency reserve is for known risks: things you've identified but aren't sure will happen. It's a calculated, defensible number, not a gut-feel add-on at the end.
Reading Your Results: P50, P80, and P90
After the Monte Carlo simulation runs, you get four outputs. Here’s what each one means in plain English and when to use it.
P50 — Your median outcome
Half of your simulations finished by this date. It's the most unbiased single estimate of your project duration. Note that it's almost always later than the sum of your most likely estimates. That's the compounding skew from earlier, and now you've seen it in your own data.
P70 — Low-stakes internal projects
70% of simulations finished by this date. Reasonable for internal projects where a late delivery is inconvenient but not costly.
P80 — Your commitment date
This is the number that matters most for most projects. 80% of simulations finished by this date. When you tell a stakeholder or a client when the project will be done… use P80. You have an 80% chance of hitting it. You’ve been honest about the remaining 20%. That’s defensible. That’s professional risk communication.
PMI-RMP candidates: P80 is the most commonly cited confidence threshold in schedule risk management. Know it cold.
P90 — High-stakes deadlines
90% confidence. Use this when being late has serious consequences — regulatory deadlines, public launches, contract milestones. You’re buying extra confidence at the cost of a later committed date.
The gap between P50 and P90 is one of the most useful numbers on the page. A narrow gap means your project is relatively predictable. A wide gap means high variance, and that you should either reduce uncertainty upfront or plan very conservatively.
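The P50–P90 gap is easy to compute from sorted run results. The sketch below contrasts two hypothetical projects with the same 30-day most-likely total, one with tight ranges and one with wide ones:

```python
import random

def percentile(sorted_runs, p):
    """Duration by which a fraction p of simulated runs had finished."""
    return sorted_runs[int(len(sorted_runs) * p)]

N = 10_000
# Same most-likely total (3 x 10 = 30 days), very different uncertainty
tight = sorted(sum(random.triangular(9, 11, 10) for _ in range(3)) for _ in range(N))
wide  = sorted(sum(random.triangular(5, 25, 10) for _ in range(3)) for _ in range(N))

gap_tight = percentile(tight, 0.90) - percentile(tight, 0.50)
gap_wide  = percentile(wide, 0.90) - percentile(wide, 0.50)
# gap_wide dwarfs gap_tight: same plan on paper, very different risk
```

Two projects can share an identical most-likely schedule and still demand completely different commitments.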
Three Experiments That Teach More Than Any Textbook
The best way to learn from this simulator is to break it deliberately.
Experiment 1 — The flat estimate test
Enter the same number for optimistic, most likely, and pessimistic on every phase. Run the simulation. The histogram will be very narrow. Now spread the ranges out and re-run. Watch the distribution widen and the P80 date push further right. That movement is uncertainty made visible.
Experiment 2 — One phase vs. five phases
Enter one phase with an optimistic of 25, most likely of 40, pessimistic of 75. Run it. Then split that same project into five chained phases of 5 / 8 / 15 each, so the totals match. Run it again. The five-phase version will be narrower, because the phases vary independently and their errors partially cancel, yet both medians still land well past the 40-day most-likely sum. That's skewed estimates compounding in action.
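A quick sketch of this experiment in Python (estimate values from the text). The detail worth seeing with your own eyes: splitting one uncertain block into five independently varying phases shrinks the spread, but neither version's median comes close to the naive 40-day plan:

```python
import random
import statistics

N = 20_000
# One monolithic phase: optimistic 25, most likely 40, pessimistic 75
one_phase = [random.triangular(25, 75, 40) for _ in range(N)]
# The same totals split into five chained phases of 5 / 8 / 15 each
five_phase = [sum(random.triangular(5, 15, 8) for _ in range(5)) for _ in range(N)]

spread_one = statistics.stdev(one_phase)    # wide: one big uncertain block
spread_five = statistics.stdev(five_phase)  # narrower: independent errors cancel

# Both medians still land well past the 40-day "most likely" plan
```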
Experiment 3 — The buffer experiment
Run your project with 0% buffer. Then 10%. Then 20%. Watch how P80 moves. Notice that a 10% buffer doesn't move P80 by exactly 10%: the buffer is a fixed number of days, while P80 already sits out in the tail, so its relative effect shrinks as uncertainty grows. That mismatch is why blanket contingency rules of thumb are less accurate than a properly calibrated simulation.
Each of these takes two minutes and teaches more than most textbook chapters.
What the PMI-RMP® Exam Expects You to Know
If you’re using this tool to study for the PMI-RMP®, here are the concepts it demonstrates:
Triangular distribution — The distribution used in this simulator. Defined by three points: minimum (optimistic), most likely (mode), and maximum (pessimistic). Used when you have expert estimates but no historical data.
Monte Carlo simulation — A quantitative risk analysis technique. Know when to use it (large, complex projects with many uncertainties), its outputs (probability distributions, S-curves, confidence intervals), and its limitations (garbage in, garbage out — quality of the simulation depends entirely on quality of estimates).
S-curve / cumulative probability — The exam may show you a CDF rather than a histogram. Read it by finding the confidence level on the vertical axis and tracing to the date.
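Reading an S-curve is just querying the sorted run results in either direction. A small sketch (the estimate triples are the illustrative rollout numbers from earlier, without the buffer):

```python
import random
from bisect import bisect_left

ESTIMATES = [(5, 8, 14), (12, 18, 30), (4, 7, 12), (2, 3, 5)]  # (opt, ml, pess)
runs = sorted(
    sum(random.triangular(lo, hi, ml) for lo, ml, hi in ESTIMATES)
    for _ in range(10_000)
)

def confidence_by_day(day):
    """Vertical-axis reading: what fraction of runs finished by this day?"""
    return bisect_left(runs, day) / len(runs)

def day_at_confidence(p):
    """Horizontal-axis reading: by which day does a fraction p of runs finish?"""
    return runs[int(len(runs) * p)]
```

`confidence_by_day` answers "what are our odds of hitting day X?", and `day_at_confidence` answers "what date do we commit to for 80% confidence?", the two directions an exam question can come from.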
Merge bias — Not always named explicitly, but tested implicitly. When multiple uncertain paths converge at a milestone, the expected date at the merge point is later than any single path's expected date, because finishing on time requires every converging path to finish on time. This is why schedule compression is risky.
Contingency reserve — The calculated buffer for known risks. Distinguished from management reserve (unknown risks). The PMI-RMP exam tests this distinction directly.
The One Thing to Take Away
Monte Carlo doesn’t tell you when your project will finish. It tells you the shape of your uncertainty, and that’s far more useful.
A project manager who says “we’ll be done in 40 days” is making a promise they can’t keep. A project manager who says “we have 80% confidence of finishing by day 48” is communicating something honest, defensible, and useful for stakeholder decision-making.
That’s the shift this tool is designed to create. Not from optimism to pessimism… from single-point thinking to probabilistic thinking.
About 44Risk PM, LLC
This analysis was prepared by 44Risk PM LLC, specializing in PMI-RMP® and PMP® certification training with a focus on practical, real-world risk management.
Contact:
Russ Parker
PMP®, PMI-RMP®, PMI-ACP®
PMI-ATP Instructor – PMP® & PMI-RMP®
Owner, Forty-Four Risk PM, LLC
An Approved PMI-Authorized Training Partner
Connect with me on LinkedIn
Subscribe to my YouTube
Find me on Substack
“Stay Proactive Over Reactive”
“The PMI-Authorized Training Partner seal, PMP®, PMI-RMP®, and PMI-ACP® are registered marks of the Project Management Institute, Inc.”
Ready to Prepare for the RMP®?
Use the button below to see my upcoming RMP® Exam Prep Course Schedule.