
From Models to Decisions: A Practical Mental Model for Applied ML

A synthesis essay on prediction, uncertainty, dynamics, and choice in real-world ML systems.

Applied machine learning has matured enough that most teams no longer fail at the basics.

Data pipelines exist. Models train reliably. Validation is understood. Metrics are tracked.

And yet, a strange pattern persists: systems that are technically correct still produce outcomes that feel misaligned with business reality.

Not catastrophically wrong. Just… persistently disappointing.

This is not a tooling problem. It is a framing problem.

The missing distinction

In many ML projects, three very different questions get quietly blended into one:

  1. What can we predict?
  2. How confident are we?
  3. What should we do about it?

These questions sound adjacent, but they live at different conceptual levels. Treating them as interchangeable is the root cause of much confusion in applied ML.

The failure is not mathematical. It is architectural — in how we think.

A four-layer mental model

A useful way to regain clarity is to separate ML-powered systems into four conceptual layers.

Not software layers. Thinking layers.

[ Decisions ]
      ↑
[ Dynamics / Simulation ]
      ↑
[ Uncertainty ]
      ↑
[ Prediction ]

Each layer answers a fundamentally different question.

Prediction: what patterns exist?

This is the domain of machine learning.

Forecasts, classifications, scores, rankings. We ask: given historical data, what tends to happen?

This layer is descriptive. Powerful. Necessary.

But by itself, it is silent about consequences.

Uncertainty: how wrong could we be?

Here we admit epistemic limits.

Distributions instead of points. Intervals instead of single numbers. Ensembles, Bayesian posteriors, empirical quantiles.

This layer does not make decisions safer. It makes ignorance visible.

Which is uncomfortable — but essential.
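
For a concrete flavor, here is a minimal Python sketch of this layer: instead of reporting a single number, it reports an empirical interval computed from an ensemble of predictions. The bootstrap-style forecasts and the numbers in it are invented for illustration, not taken from any particular system.

import numpy as np

def predictive_interval(pred_matrix, lower=0.05, upper=0.95):
    # pred_matrix: (n_members, n_points) array, one row per ensemble member.
    # Returns a point estimate plus an empirical 90% interval per point.
    point = pred_matrix.mean(axis=0)
    lo = np.quantile(pred_matrix, lower, axis=0)
    hi = np.quantile(pred_matrix, upper, axis=0)
    return point, lo, hi

# Toy usage: 200 bootstrap-style forecasts for 3 future periods (synthetic numbers).
rng = np.random.default_rng(0)
preds = rng.normal(loc=[100.0, 110.0, 120.0], scale=[5.0, 8.0, 12.0], size=(200, 3))
point, lo, hi = predictive_interval(preds)
print(np.round(point, 1), np.round(lo, 1), np.round(hi, 1))

Nothing here tells you what to do. It only tells you how wide the range of plausible outcomes is, which is exactly the job of this layer.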

Dynamics: what happens when we act?

Now time enters the picture.

Actions change the system. Systems react to themselves. Delays, accumulation, thresholds, feedback loops appear.

This is the layer where simulation belongs.

Not to predict the future precisely, but to explore plausible trajectories once decisions interact with uncertainty over time.

This layer is where many “good models” quietly fail.
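
As an illustration only, here is a small Monte Carlo sketch under invented assumptions: a weekly order-up-to inventory policy facing Poisson demand, with a one-week replenishment delay. The point is not the numbers but the structure: the decision rule and the uncertainty are rolled forward together, week by week.

import numpy as np

rng = np.random.default_rng(0)

def stockout_rate(order_up_to, horizon=52, n_paths=2000):
    # Fraction of weeks with unmet demand under an order-up-to policy
    # with a one-week replenishment delay (a deliberately simple, invented system).
    stock = np.full(n_paths, float(order_up_to))
    arriving = np.zeros(n_paths)                        # order placed last week, arrives now
    stockout_weeks = np.zeros(n_paths)
    for _ in range(horizon):
        stock += arriving                               # delayed replenishment lands
        demand = rng.poisson(lam=20.0, size=n_paths)    # uncertain weekly demand
        stockout_weeks += demand > stock                # a week with unmet demand
        stock = np.maximum(stock - demand, 0.0)
        arriving = np.maximum(order_up_to - stock, 0.0) # reorder up to the target
    return stockout_weeks.mean() / horizon

# Explore how candidate policies behave across many plausible trajectories,
# rather than trying to predict one future precisely.
for target in (20, 25, 30):
    print(target, round(float(stockout_rate(target)), 3))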

Decisions: what trade-offs are we willing to make?

Only at this layer do questions become genuinely business-relevant.

Costs, constraints, incentives, service levels, risk tolerance.

Not what is accurate, but what is acceptable.

Decisions are policies, not predictions. They embed values — explicitly or not.
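
To make "policies, not predictions" concrete, here is a hedged sketch: given sampled outcomes from the uncertainty layer, choose the action that minimizes expected cost under asymmetric holding and shortage costs. The cost figures and the demand samples are illustrative assumptions.

import numpy as np

def best_order_quantity(demand_samples, holding_cost=1.0, shortage_cost=9.0):
    # Evaluate every candidate quantity against sampled demand and pick the
    # one with the lowest expected cost, not the most likely demand value.
    candidates = np.arange(demand_samples.min(), demand_samples.max() + 1)
    expected_costs = []
    for q in candidates:
        overage = np.maximum(q - demand_samples, 0)     # units left over
        underage = np.maximum(demand_samples - q, 0)    # units short
        expected_costs.append((holding_cost * overage + shortage_cost * underage).mean())
    return int(candidates[int(np.argmin(expected_costs))])

# Samples standing in for the uncertainty layer's output (synthetic).
rng = np.random.default_rng(1)
demand_samples = rng.poisson(lam=20.0, size=5000)
print(best_order_quantity(demand_samples))

With shortage costed at nine times holding, the chosen quantity lands noticeably above the most likely demand. Change the cost ratio and the answer moves, even though the forecast does not.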

Why this separation matters

Most applied ML systems collapse these layers.

Prediction stands in for decision. Accuracy stands in for value. Automation stands in for judgment.

This works until it doesn’t.

When outcomes drift, teams often react by improving the prediction layer — adding features, retraining models, tightening metrics.

Sometimes that helps. Often it doesn’t, because the mismatch lives higher up.

A different way to measure progress

Once you adopt this layered view, progress looks different.

You stop asking: “Is the model good enough?”

And start asking:

  1. Are we predicting the right thing?
  2. Do we know how wrong we could be?
  3. Do we understand what happens once we act on it?
  4. Are the trade-offs behind each decision explicit?

This is a shift from model-centric thinking to system-centric thinking.

A reframing for experienced practitioners

For ML practitioners, this can be a subtle but profound shift.

Your value is not proportional to how complex your model is. It is proportional to how much avoidable regret you remove from the system.

Regret doesn’t come from being a few percent off on a metric. It comes from discovering — too late — that a different decision policy would have behaved better under stress.

That is not a modeling problem. It is a design problem.
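
One way to see this, sketched below with invented numbers: evaluate candidate decision policies on the same stress scenarios and measure how much each loses relative to the best choice in hindsight, rather than how accurate the underlying forecast was.

import numpy as np

def cost(order_qty, demand, holding=1.0, shortage=9.0):
    # Same illustrative asymmetric cost structure as in the earlier sketch.
    return holding * np.maximum(order_qty - demand, 0) + shortage * np.maximum(demand - order_qty, 0)

# A stress scenario: demand runs well above its usual level (synthetic numbers).
rng = np.random.default_rng(2)
stress_demand = rng.poisson(lam=35.0, size=1000)

policies = {"point_forecast": 20, "buffered": 40}       # two hypothetical fixed policies
policy_costs = {name: cost(q, stress_demand) for name, q in policies.items()}
best_in_hindsight = np.minimum.reduce(list(policy_costs.values()))

for name, c in policy_costs.items():
    print(name, "average regret:", round(float((c - best_in_hindsight).mean()), 2))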

How the pieces complement each other

Seen through this lens, familiar methods fall into place:

  - Forecasting, classification, scoring, and ranking models live at the prediction layer.
  - Ensembles, Bayesian posteriors, and empirical quantiles live at the uncertainty layer.
  - Simulation lives at the dynamics layer, exploring trajectories once decisions and uncertainty interact over time.
  - Explicit decision policies, built around costs, constraints, and risk tolerance, live at the decision layer.

None is sufficient alone. Each becomes dangerous when mistaken for the whole.

Closing thoughts

This is not a call to abandon ML rigor.

It is a call to place it inside a broader decision framework.

Strong models matter. But models do not make decisions — systems do.

If applied ML is to mature beyond impressive demos and fragile automations, it needs clearer boundaries between prediction, uncertainty, dynamics, and choice.

That separation is not academic. It is the difference between systems that merely work — and systems that endure.