Simulation: The Missing Layer Between Models and Decisions

A practical essay on dynamics, feedback, and why “good models” still produce disappointing outcomes.

Most applied ML systems do not fail dramatically.

They fail quietly.

At first, everything looks fine. Forecasts are stable. KPIs move in the right direction. Dashboards reassure everyone that the system is “working.”

Then, slowly, side effects appear.

Service quality erodes in places no one was watching. Costs creep up in ways that don’t trace cleanly back to a single decision. Teams start adding manual overrides, buffers, exceptions — not because the model is wrong, but because reality keeps pushing back.

When this happens, the issue is rarely predictive accuracy. It’s that the system is reacting to itself over time.

This is where simulation becomes indispensable — not as an academic exercise, but as a way to reason about consequences before they compound.

Decisions are embedded in systems

A forecast is a number. A decision is an intervention.

Between the two sits a system with memory, constraints, and feedback.

If you have worked on any operational problem, you have seen this firsthand: today's decision changes tomorrow's state, constraints bind in ways a forecast doesn't see, effects arrive with a delay, and people adapt to whatever policy you deploy.

None of this fits neatly into a single predictive model.

Even probabilistic models struggle once you introduce batching, delayed effects, human responses, and policies that interact over time.

Yet this is precisely the environment in which real decisions are made.

Simulation exists to operate in that gap.

What simulation actually does

Simulation is often misunderstood as “just Monte Carlo” or “a fancy visualization.” It is neither.

At its core, simulation is structured counterfactual reasoning:

If we follow this policy, in a world with these uncertainties and constraints, what tends to happen over time?
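
As a skeleton, that question is just a loop over sampled worlds. A minimal sketch, where every name (what_tends_to_happen, sample_world, simulate) is a hypothetical placeholder rather than code from this essay:

```python
def what_tends_to_happen(policy, sample_world, simulate, n_worlds=1000):
    """Structured counterfactual reasoning as a loop:
    fix a policy, vary the world, collect the outcomes."""
    outcomes = []
    for seed in range(n_worlds):
        world = sample_world(seed)                 # one draw of the uncertainties
        outcomes.append(simulate(policy, world))   # consequences over time
    return sorted(outcomes)                        # a distribution, not an answer
```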

It does not ask “What is the correct prediction?” It asks how a policy behaves once uncertainty, constraints, and time are allowed to interact.

Simulation does not give you the answer. It shows you the shape of the consequences.

Why simulation succeeds where better models don’t

A common instinct in applied ML is to respond to disappointing outcomes by improving the model: more features, more data, better architecture, tighter loss functions.

Sometimes this helps.

Often, it doesn’t — because the failure mode is structural, not statistical.

The decision problem is rarely “minimize error.” It is “minimize regret”: the cost of realizing, weeks later, that a different policy would have avoided churn, shortfall, and an expensive recovery.
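
One way to pin that down, in standard decision-theoretic notation rather than anything from this essay: for a policy $\pi$, a cost $C$, and an uncertain world $\omega$,

$$\text{regret}(\pi) \;=\; \mathbb{E}_{\omega}\!\left[\, C(\pi, \omega) \;-\; \min_{\pi'} C(\pi', \omega) \,\right]$$

The benchmark is the best decision in hindsight, not the best prediction, which is why a model can score well on error and still leave regret on the table.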

Simulation helps because it can hold several realities at once: uncertainty, constraints, policy interactions, and tail events.

And it does this without requiring us to pretend that the system can be fully captured in one elegant equation.

A useful way to think about it:

Simulation lets you be wrong in many small ways, instead of wrong in one big structural way.

That is not a concession. It is rigor of a different kind.

A minimal simulation you can inspect

In the companion notebook, the setup is deliberately boring: a noisy demand process, a staffing level that adjusts with a delay, and a burnout loop that erodes capacity when the team runs hot.

The only difference between the two policies is the utilization target: how hard each one is willing to run the team before adding capacity.

Both policies see the same demand process. Neither has a “better forecast.” Yet the distributions of outcomes can diverge sharply once feedback and delays are allowed to do their thing.

A single trajectory is illustrative, but the point shows up most clearly when you run many simulations: the median can look acceptable while the tails become painful.
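
Here is a sketch of that shape, not the notebook itself; every constant, name, and dynamic below is illustrative:

```python
import random

def run(target_utilization, rng, weeks=104, lag=6):
    """One trajectory of a toy staffing system with feedback and delay.
    All constants and dynamics are illustrative, not the notebook's."""
    staff, burnout, backlog, cost = 10.0, 0.0, 0.0, 0.0
    pipeline = [0.0] * lag                          # hires in flight: the delay
    for _ in range(weeks):
        staff += pipeline.pop(0)                    # hires from `lag` weeks ago land
        demand = max(0.0, rng.gauss(100.0, 15.0))
        capacity = staff * 10.0 * (1.0 - burnout)   # burnout erodes output
        served = min(demand + backlog, capacity)
        backlog = demand + backlog - served
        utilization = served / capacity
        # Feedback: running hot raises burnout; slack lets it recover.
        burnout = min(0.9, max(0.0, burnout + 0.08 * (utilization - 0.85)))
        # Policy: hire toward the utilization target, counting in-flight hires.
        desired = demand / (10.0 * target_utilization)
        pipeline.append(max(0.0, desired - staff - sum(pipeline)))
        cost += staff * 1.0 + backlog * 0.5         # wages + service penalty
    return cost

for target in (0.80, 0.95):                         # the only knob that differs
    # Same seeds for both targets, so both policies see the same demand draws.
    costs = sorted(run(target, random.Random(seed)) for seed in range(500))
    print(f"target={target}: median={costs[250]:,.0f}  p95={costs[475]:,.0f}")
```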

Notebook: staffing + burnout feedback simulation (Jupyter notebook). It is intentionally simple and focuses on system dynamics (feedback, delays, tails), not on finding an “optimal” policy.

Simulation as the connective tissue

Seen this way, simulation naturally complements other methods: predictive models supply the uncertainty, optimization proposes candidate policies, and simulation evaluates those policies in an environment with feedback and delay.

This is where uncertainty becomes operational.

You can feed simulation with point forecasts, quantiles, ensembles, or posterior samples. What matters is not mathematical purity, but whether decisions are evaluated in an environment that resembles reality.
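
A minimal sketch of that interface, assuming nothing beyond the standard library (all names and numbers here are illustrative): the simulator takes a sampler, and whether that sampler wraps a point forecast, quantiles, or posterior draws is its own business.

```python
import random

def from_point_forecast(mean):
    return lambda rng: mean                    # degenerate: pretends certainty

def from_quantiles(quantiles):
    return lambda rng: rng.choice(quantiles)   # crude: quantiles as equiprobable draws

def from_posterior(samples):
    return lambda rng: rng.choice(samples)     # resample posterior draws

def simulate(capacity, demand_sampler, rng, horizon=52):
    # The simulator only sees a sampler; the source of uncertainty is a detail.
    return sum(abs(demand_sampler(rng) - capacity) for _ in range(horizon))

rng = random.Random(1)
for sampler in (from_point_forecast(100),
                from_quantiles([85, 100, 120]),
                from_posterior([92, 97, 101, 104, 118])):
    print(round(simulate(100, sampler, rng)))
```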

Why simulation remains underused

If simulation is so powerful, why isn’t it everywhere?

Because it is inconvenient.

It forces objectives, constraints, and assumptions into the open, where they can be disputed.

Simulation is not just a technical tool. It is an organizational mirror.

Closing thoughts

Models help us understand. Optimization helps us choose. Simulation helps us live with the consequences.

If your system looks good on a dashboard but surprises you in production, the problem is rarely that your model was insufficiently sophisticated.

More often, it’s that you never gave your decisions a chance to fail safely — on your terms — before the real world did it for you.