> Outputs are probabilistic but treated as deterministic.
> [Systems that] replace explicit mechanisms with probabilistic ones.
In other words, many things should be deterministic, not probabilistic. That's why the notion of probabilistic programming never really took off for most application domains.
Probabilistic systems do make sense at the edges — perception, ranking, recommendation, search, fuzzy matching. The problem starts when we let probabilistic outputs cross into domains that used to have hard contracts: policy enforcement, state transitions, or irreversible actions.
What feels new isn’t probabilistic programming itself, but treating probabilistic inference as if it were a deterministic control layer. Once probability collapses into authority, you lose debuggability and guarantees.
So the failure mode isn’t “probabilistic vs deterministic” per se, but where the probabilistic boundary is drawn — and whether it’s explicit.
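One way to make that boundary explicit (a minimal sketch; every name here is hypothetical, not taken from any real system): the model's probabilistic score stays advisory, while a small deterministic policy layer owns the hard contract and the irreversible action.

```python
from dataclasses import dataclass

@dataclass
class RefundPolicy:
    # Hard contract: these limits are explicit and auditable, not learned.
    auto_approve_threshold: float = 0.95
    max_auto_amount: float = 100.0

def score_refund_request(text: str) -> float:
    """Stand-in for a probabilistic model; returns P(legitimate)."""
    return 0.97  # placeholder value for illustration

def decide(text: str, amount: float, policy: RefundPolicy) -> str:
    score = score_refund_request(text)  # probabilistic, at the edge
    # Deterministic boundary: the score can never bypass the amount cap,
    # so the irreversible path is gated by an explicit rule.
    if amount > policy.max_auto_amount:
        return "escalate"
    if score >= policy.auto_approve_threshold:
        return "approve"
    return "escalate"
```

The point isn't the thresholds themselves but where authority lives: the model can only influence the decision inside a space the deterministic layer has already bounded, so the probabilistic/deterministic boundary is visible in the code rather than implicit.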
Sure, and that's why I used the wording most application domains.
> So the failure mode isn’t "probabilistic vs deterministic" per se, but where the probabilistic boundary is drawn -- and whether it’s explicit.
If you take a look at
https://arxiv.org/pdf/2512.22418
it is difficult to draw any boundaries because a die is rolled at so many stages. Unpredictability is acknowledged, yet verification is itself probabilistic and unreliable; requirements and specifications are prompted on the fly in response to unpredictable outputs; debugging becomes psychologically stochastic too, with deterministic tools used to try to control stochastic outputs; and so forth and so on.