66 points by KraftyOne 8 hours ago | 7 comments
  • 12_throw_away 7 hours ago
    No, deterministic scheduling is not a property of async Python.

    Yes, the stdlib asyncio event loop does have deterministic scheduling, but that's an implementation detail and I would not rely on it for anything critical. Other event loops - for instance trio [1] - explicitly randomize startup order so that you won't accidentally write code that relies on it.

    [1] https://github.com/python-trio/trio/issues/32

    • StableAlkyne 7 hours ago
      > but that's an implementation detail

      That sounds familiar...

      https://stackoverflow.com/questions/39980323/are-dictionarie...
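The linked question is about exactly this: CPython 3.6 happened to preserve dict insertion order as a side effect of its new compact-dict layout, and Python 3.7 then promoted it to a documented language guarantee. A minimal illustration:

```python
# In CPython 3.6, dicts preserving insertion order was an implementation
# detail; since Python 3.7 it is a documented language guarantee.
d = {}
d["banana"] = 1
d["apple"] = 2
d["cherry"] = 3
print(list(d))  # ['banana', 'apple', 'cherry'] -- insertion order, not sorted
```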

    • KraftyOne 7 hours ago
      It's been a stable (and documented) behavior of the Python standard library for almost a decade now. It's possible it may change--nothing is ever set in stone--but that would be a large change in Python that would come with plenty of warning and time for adjustment.
      • 9dev 6 hours ago
        And then one day, Astral creates a new Python implementation in Rust or something that is way faster and all the rage, but does this particular thing differently from CPython. Whoops, you can’t use that runtime, because you now have cursed parts in your codebase that produce nondeterministic behaviour you can’t really find a reason for.
        • wavemode 7 minutes ago
          If I know anything about the Python community - that new runtime would simply never gain significant traction, due to the incompatibility.
        • stuartjohnson12 5 hours ago
          And then all the serverless platforms will start using Astral's new Rust-based runtime to reduce cold starts, and in theory it's identical, except half of the packages now don't work, and it's very hard to anticipate which ones will and which won't, and behold! You have achieved Deno.
        • ubercore 6 hours ago
          That's a bit like what it felt like when I was learning Rust async.

          I get it, but "ecosystems" of async runtimes have a pretty big cost.

        • LtWorf 2 hours ago
          If the python core team cared about not breaking things I wouldn't need to run my tests on all versions of python.
      • farsa 6 hours ago
        Well, in my early days programming Python I wrote a lot(!!) of code assuming non-concurrent execution, but some of that code will break in the future with GIL removal. Hopefully the Python devs keep these important changes as opt-ins.
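The kind of assumption being described might look like the sketch below: code that leans on the GIL to make a bare counter increment effectively safe. An explicit lock keeps it correct on free-threaded builds too (names here are illustrative):

```python
import threading

# Under the GIL, a bare `counter += 1` from multiple threads usually
# survives because only one thread runs Python bytecode at a time.
# Free-threaded builds remove that accidental protection, so an
# explicit lock is the safe form either way.
counter = 0
lock = threading.Lock()

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:  # explicit synchronization, GIL or not
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000
```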
    • game_the0ry 22 minutes ago
      I just realized how little I know about how async event loops work.
    • mort96 4 hours ago
      How do you differentiate between something that "happens to work due to an implementation detail" and a "proper feature that's specified to work" in a language without a specification?
      • nhumrich 3 hours ago
        In a language without a spec? You don't. But Python has a very strong spec.
      • BrenBarn 4 hours ago
        There's still documentation.
  • whinvik 7 hours ago
    Is this guaranteed by the async specification? Or is this just current behavior that could change in a future update? It feels like a brittle dependency if it's not part of the spec.
    • KraftyOne 7 hours ago
      It's documented behavior for the low-level API (e.g. asyncio.call_soon https://docs.python.org/3/library/asyncio-eventloop.html#asy...). More broadly, this has been a stable behavior of the Python standard library for almost a decade now. If it does change, that would be a huge behavioral change that would come with plenty of warning and time for adjustment.
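The documented guarantee in question: callbacks scheduled with `loop.call_soon` are invoked in the order they were registered (FIFO). A small demonstration against the stdlib loop:

```python
import asyncio

# Callbacks registered with call_soon run in registration order (FIFO),
# as documented for the low-level asyncio event loop API.
def demo():
    seen = []
    loop = asyncio.new_event_loop()
    for i in range(4):
        loop.call_soon(seen.append, i)
    loop.call_soon(loop.stop)  # stop after the queued callbacks run
    loop.run_forever()
    loop.close()
    return seen

print(demo())  # [0, 1, 2, 3]
```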
    • btilly 6 hours ago
      In my experience, developers who rely on precise and relatively obscure corner cases tend to assume that they are more stable than they later prove to be. I've been that developer, and I've been burned because of it.

      Even more painfully, I've been the maintenance programmer who was burned because some OTHER programmer trusted such a feature. And then it was my job to figure out the hidden assumption after it broke, long after the original programmer was gone. You know the old saying that you have to be twice as clever to debug code as to write it? Debugging another person's clever and poorly commented tricks is no fun!

        I'd therefore trust this feature a lot less than you appear to. I'd be tempted to instead wrap the existing loop with a new loop to which I can add instrumentation etc. It's more work. But if it breaks, it will be clear why it broke.
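One way to sketch the wrapping idea: subclass the default loop and instrument `call_soon`, so the scheduling order being relied on is recorded and any future change becomes visible. A rough sketch, not a production pattern; `LoggingEventLoop` is a made-up name:

```python
import asyncio

class LoggingEventLoop(asyncio.SelectorEventLoop):
    """Event loop that records every callback scheduled via call_soon."""

    def __init__(self):
        super().__init__()
        self.schedule_log = []

    def call_soon(self, callback, *args, context=None):
        # Record the callback's name before delegating to the real scheduler.
        self.schedule_log.append(getattr(callback, "__name__", repr(callback)))
        return super().call_soon(callback, *args, context=context)

loop = LoggingEventLoop()
loop.call_soon(print, "hello")
loop.call_soon(loop.stop)
loop.run_forever()
loop.close()
print(loop.schedule_log)  # names of scheduled callbacks, in order
```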

  • jpollock 7 hours ago
    That's deterministic dispatch; as soon as it forks or communicates, it's non-deterministic again?

    Don't you need something like a network clock to get deterministic replay?

    It can't use immediate return on replay, or else the order will change.

    This makes me twitchy. The dependencies should be better modelled, and idempotency used instead of logging and caching.

  • annexrichmond 3 hours ago
    This reminds me of this great talk from Temporal about how they built their Python SDK by creating a distributed deterministic event loop on top of asyncio[1]

    [1] https://www.youtube.com/watch?v=wEbUzMYlAAI

  • arn3n 8 hours ago
    While it’s not production-ready, I’ve been pleasantly surprised by this functionality when building with it. I love my interpreters to be deterministic, or, when random, to be explicitly seeded. It makes debugging much easier when I can rerun the same program multiple times and expect identical results.
    • frizlab 8 hours ago
      Interestingly, I think things that should not be deterministic should actually be forced not to be.

      Swift, for instance, explicitly makes iterating over a dictionary non-deterministic (by randomizing the iteration order), in order to catch weird bugs early if a client relies (knowingly or not) on the specific order of the dictionary's elements.

      • lilyball 7 hours ago
        This claim sounds vaguely familiar to me (though the documentation on Dictionary does not state any reason why the iteration order is unpredictable). The more common reason for languages to have unstable hash table iteration orders, though, is as a consequence of protection against hash flooding: malicious input causing all keys to hash to the same bucket (iteration order is dependent on bucket order).
        • frizlab 3 hours ago
          Oh yeah you’re right, apparently the main reason was to avoid hash-flooding attacks[1].

          I do seem to remember there was a claim regarding the fact that it also prevented a certain class of errors (that I mentioned earlier), but I cannot find the source again, so it might just be my memory playing tricks on me.

          [1] https://forums.swift.org/t/psa-the-stdlib-now-uses-randomly-...

      • saidinesh5 7 hours ago
        One more reason for randomizing hash table iteration was to prevent denial-of-service attacks:

        https://lukasmartinelli.ch/web/2014/11/17/php-dos-attack-rev...
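Python applies the same defense: string hashes are salted per process (randomized by default since 3.3, using SipHash since 3.4 per PEP 456), so flooding payloads can't be precomputed. `PYTHONHASHSEED` pins the salt, which makes the salting visible:

```python
import os
import subprocess
import sys

def salted_hash(seed):
    """Hash the same string in a fresh interpreter with a fixed hash seed."""
    env = {**os.environ, "PYTHONHASHSEED": str(seed)}
    out = subprocess.run(
        [sys.executable, "-c", "print(hash('collision'))"],
        env=env, capture_output=True, text=True, check=True,
    )
    return int(out.stdout)

# Different seeds salt the hash differently, so an attacker who cannot
# guess the per-process seed cannot construct colliding keys in advance.
print(salted_hash(1) != salted_hash(2))  # True
```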

  • lexicality 8 hours ago
    > This makes it possible to write simple code that’s both concurrent and safe.

    Yeah, great, my hello world program is deterministic.

    What happens when you introduce I/O? Is every network call deterministic? Can you depend on reading a file taking the same amount of time and being woken up by the scheduler in the same order every time?

    • PufPufPuf 7 hours ago
      This is about durable execution -- being able to resume execution "from the middle", which is often done by executing from the beginning but skipping external calls. The second time around, the I/O is replayed exactly from stored values, and the "deterministic" part only refers to the async scheduler, which behaves the same as long as the results are the same.

      Coincidentally, I experimented with something very similar in JavaScript a while back, and the scheduler there has the same property.
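A toy sketch of that replay scheme (all names here are illustrative, not any real durable-execution API): record each external result in a journal on the first run, then answer the same awaits from the journal on replay instead of re-doing the I/O:

```python
import asyncio

class Journal:
    """Stores external-call results; replays them on subsequent runs."""

    def __init__(self, entries=None):
        self.entries = list(entries or [])
        self.cursor = 0

    async def step(self, coro):
        if self.cursor < len(self.entries):  # replaying: skip the I/O
            coro.close()                     # never actually run the call
            result = self.entries[self.cursor]
        else:                                # first run: do it, record it
            result = await coro
            self.entries.append(result)
        self.cursor += 1
        return result

async def fetch(x):          # stands in for a real network call
    await asyncio.sleep(0)
    return x * 2

async def workflow(journal):
    a = await journal.step(fetch(10))
    b = await journal.step(fetch(a + 1))
    return a + b

first = Journal()
print(asyncio.run(workflow(first)))           # 62
replay = Journal(first.entries)
print(asyncio.run(workflow(replay)))          # 62 again, without re-running fetch
```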

    • TeMPOraL 6 hours ago
      No, but determinism reduces the number of stones you need to turn over when debugging hairy problems such as your program occasionally returning different results for the same inputs. You may not have control over the timing of I/O operations or the order of external events (including the OS scheduler), but at least you know that your side of the interaction is, in isolation, behaving predictably.
    • KraftyOne 8 hours ago
      That's the cool thing about this behavior--it doesn't matter how complex your program is, your async functions start in the same order they're called (though after that, they may interleave and finish in any order).
      • lexicality 7 hours ago
        Only for tasks that are created in synchronous code. If you start two tasks that each make a web request and then start a new task with the result of that request you will immediately lose ordering.
        • KraftyOne 7 hours ago
          Yes, this only applies for tasks created from the same (sync or async) function. If tasks are creating other tasks, anything is possible.
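Concretely, for tasks created back to back from one function: with the stdlib asyncio loop they start in creation order, while completion order depends on what each one awaits. A small sketch:

```python
import asyncio

# Tasks created from one function start in creation order, but may
# finish in a different order once they await (here, different sleeps).
async def main():
    started, finished = [], []

    async def worker(i, delay):
        started.append(i)
        await asyncio.sleep(delay)
        finished.append(i)

    tasks = [asyncio.create_task(worker(i, d))
             for i, d in [(0, 0.02), (1, 0.01), (2, 0.0)]]
    await asyncio.gather(*tasks)
    return started, finished

started, finished = asyncio.run(main())
print(started)   # [0, 1, 2] -- creation order
print(finished)  # [2, 1, 0] -- completion order depends on the awaits
```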