I think this is a subtler point than it seems on first read, and it's muddled by the poorly chosen examples.
Here's a better illustration:
import asyncio

async def child():
    print("child start")
    await asyncio.sleep(0)
    print("child end")

async def parent():
    print("parent before")
    await child()  # <-- awaiting a coroutine (not a task)
    print("parent after")

async def other():
    for _ in range(5):
        print("other")
        await asyncio.sleep(0)

async def main():
    other_task = asyncio.create_task(other())
    parent_task = asyncio.create_task(parent())
    await asyncio.gather(other_task, parent_task)

asyncio.run(main())
It prints:

other
parent before
child start
other
child end
parent after
other
other
other
So the author's point is that "other" can never appear in between "parent before" and "child start". (Edit: clarification.)
But isn't this true for JavaScript too? So I don't really get the author's point... am I missing something, or did the author('s LLM?) force a moot comparison to JavaScript?
Edit: after reading the examples twice I am 99.9% sure it's slop and flagged it.
Edit2: another article from the same author: https://mergify.com/blog/why-warning-has-no-place-in-modern-...
> This isn’t just text — it’s structured, filterable, and actionable.
My conclusion is that I should ask LLM to write a browser userscript to automatically flag and hide links from this domain for me.
You're right, the equivalent JS script produces the same sequence of outputs.
It turns out there is a way to emulate Python's asyncio.create_task().
Python:
await asyncio.create_task(child())
JavaScript:

const childTask = new Promise((resolve) => {
    setTimeout(() => child().then(resolve), 0)
})
await childTask

I don't think so. It's been a while since I've bled on tricky async problems in either language, but I'm pretty sure in JS it would be
[...]
parent_before
parent_after
child_before
[...]
In JS, there are microtasks and macrotasks. setTimeout creates macrotasks. `.then` (and therefore `await`) creates microtasks. Microtasks get executed BEFORE macrotasks, but they still get executed AFTER the current call stack is completed.
From the OP (and better illustrated by GP's example), Python's surprise is that it just puts the awaited coroutine onto the current call stack. So `await` doesn't guarantee anything is going into a task queue (micro or macro) in Python.
That doesn't make sense. That would mean the awaiting function doesn't have access to the result of the Promise (since it can proceed before the Promise is fulfilled), which would break the entire point of promises.
Correct.
> they still get executed AFTER the current call stack is completed.
Correct.
> I'm pretty sure in JS it would be [...]
Your understanding of the JS event loop is correct, but you reached the wrong conclusion.
Half the article is paragraph headings and the other half is bullet points or numbered lists. If there was anything interesting in the prompt, it's been erased by an LLM that turned it into an infodump with no perspective and nothing to convey, and I have no way to tell what, if anything, might have been important to the author (besides blog clicks and maybe the title).
I really wish we could start recognizing these sooner. I think too many people skim and then go to the comments section, but I don't think we really want HN to be a place filled with low-value articles just because they're good jumping-off points for comments.
I've been flagging them here and then heading over to kagi and marking as slop there. Makes me wish we had something similar here rather than just "flag".
And I know we aren't supposed to comment when we flag, but this feels different to me, like we've got to collectively learn to notice this better or we need better tools.
Here’s a horror story for you.
A few years ago I worked for this startup as a principal engineer. The engineering manager kept touting how he was from XYZ, how he ran Pythonista meetups, and how vast his knowledge of Python was. We were building a security product and needed to scan hundreds of thousands of documents quickly, so we built a fan-out with coroutines. I came on board well into this effort to assist in adding another adapter to another platform that worked similarly. When I looked at all the coroutines being pickled and stored in S3 so that nodes could "resume" if they crashed, not a single create_task was present. All of this awaiting, pickling, attempting to resume, checking stuff, reporting, and pickling again happened synchronously.
When I tried to point out the issue with the architecture and got into a shouting match with Mr. Ego, I was let go.
I won that particular battle, but it was uphill all the way; at least he had the grace to admit afterwards that he'd been an idiot.
Ego is the worst thing an engineer can possess.
I find ChatGPT’s style and tone condescending and bland to the point of obfuscating whatever was unique, thoughtful and insightful in the original prompt.
Trying to reverse-engineer the “Not this: That!” phrasing, artificial narrative drama & bizarre use of emphasis to recapture that insight and thought is not something I’m at all enthusiastic to do.
Perhaps a middle ground: HN could support a “prompt” link to the actual creative seed?
In contrast, ChatGPT repeatedly speaks in an authoritative tone which exceeds its own competence by orders of magnitude.
tiring.
maybe someone will make an "article-to-prompt" sort of reverse ChatGPT?
But of course someone already did that, and of course it's inside ChatGPT, what was I thinking? Though if I do try it, the prompt I get is not especially pleasant to read: https://chatgpt.com/share/6926f33c-8f98-8011-984e-54e49fdbb0...
From the words of ChatGPT itself:
> The post follows a “classic” structure: introduce a common misconception → explain what’s wrong → show concrete examples → give a clear takeaway / conclusion. The paragraphs are well balanced, each chunk delivering a precise logical step. While that’s good for clarity, it also can feel like the “standard template” many AI-based or marketing blog posts tend to follow.
Now, it could just be a very well written blog post too. But I feel that AI just naturally converges on the same basic structure, while a human-written post will generally miss one or two of those "good practices" for prose, which actually ends up making the blog post more interesting (because we don't always need all these structures to make something enjoyable to read).
dead giveaway
Similarly, its coding is optimized for tutorial-sized code: ignoring exceptions, leaving "IN PRODUCTION DO XYZ" comments, etc.
I suppose the author meant to say that if you first called your async function and only later did `await`, you would get different behavior.
The example from this StackOverflow question might be a better demonstration: https://stackoverflow.com/q/63455683
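For what it's worth, here's a minimal sketch of the call-now-await-later pattern (my own example, not taken from that question). In Python the body still doesn't start until the await, which is the opposite of a JS async function, whose body starts running as soon as you call it:

import asyncio

async def work():
    print("work started")
    await asyncio.sleep(0)
    print("work finished")

async def main():
    coro = work()                # nothing runs yet: this is just a coroutine object
    print("doing other things")  # prints first
    await coro                   # only now does "work started" appear

asyncio.run(main())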
> My mutation block contained no awaits. The only awaits happened before acquiring the lock. Therefore:
> * The critical section was atomic relative to the event loop.
> * No other task could interleave inside the mutation.
> * More locks would not increase safety.
That is exactly the same as with e.g. JS. I'm sure there are a lot of subtle differences in how Python does async vs. others, but the article fails to illuminate any of them; neither the framing story nor the examples really clarify anything.
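To make the quoted claim concrete, here's a minimal sketch (my own names, not the article's code) of why the block needs no lock: a stretch of code with no awaits in it can't be interleaved by other tasks, because the event loop only switches at await points that actually suspend. The same reasoning applies in JS.

import asyncio

balance = {"value": 0}

async def transfer(amount):
    await asyncio.sleep(0)  # the awaits happen *before* the mutation
    # Critical section: no awaits below, so no other task can run in here.
    current = balance["value"]
    balance["value"] = current + amount

async def main():
    await asyncio.gather(*(transfer(1) for _ in range(1000)))
    print(balance["value"])  # always 1000, with no lock at all

asyncio.run(main())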
[1] https://babeljs.io/docs/babel-plugin-transform-async-to-gene...
So in your example the behavior is much more obvious if you sort of desugar it as
async function parent() {
    print("parent before");
    const p = child();
    await p;
    print("parent after");
}

async def parent():
    print("parent before")
    task = asyncio.create_task(child())  # <-- spawn a task
    await task
    print("parent after")

In Python there's asyncio vs threading, and I feel there's just too much to navigate to quickly get up and running. Do people just somehow learn this when they need it? Is anyone having fun here?
If you do the following it works as expected
async def child():
    print("child start")
    await asyncio.sleep(0)
    print("child end")

async def parent():
    print("parent before")
    task = child()
    print("parent after")
    await task
The real difference is that the coroutine is not going to do _anything_ until it is awaited, but I don't think the asyncio task is really different in a meaningful way. It's just a wrapper with an actual task manager so you can run things "concurrently".

Python does have two different kinds of coroutines, but they're generators and async functions. You can go from one to the other.
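A minimal sketch of the generator-to-native direction (my own example): types.coroutine marks a generator-based coroutine as awaitable, and its bare yield is, at least in CPython, essentially the intrinsic that asyncio.sleep(0) uses internally.

import asyncio
import types

@types.coroutine
def bare_yield():
    yield  # generator-style suspension point; yields straight to whatever drives us

async def native():
    await bare_yield()  # a native coroutine awaiting a generator-based one
    return "done"

print(asyncio.run(native()))  # prints "done"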
Write a simple single-threaded http server that takes strings and hashes them with something slow like bcrypt with a high cost value (or, just sleep before returning).
Write some integration tests (doesn’t have to be fancy) that hammer the server with 10, 100, 1000 requests.
Time how well (or how poorly) the bank of requests performs.
Now try to write a threaded server where every request spins up a new thread.
What’s the performance like? (Hint: learn about the global interpreter lock (GIL))
Hmm, maybe you’re creating too many threads? Learn about thread pools and why they’re better for constraining resources.
Is performance better? Try a multiprocessing.Pool instead to defeat the GIL.
Want to try async? Do the same thing! But because the whole point of async is to do as much work on one thread with no idle time, and something like bcrypt is designed to hog the CPU, you’ll want to replace bcrypt with an await asyncio.sleep() to simulate something like a slow network request. If you wanted to use bcrypt in an async function, you’ll definitely want to delegate that work to a multiprocessing.Pool. Try that next.
Learning can be that simple. Read the docs for Thread, multiprocessing, and asyncio. Python docs are usually not very long winded and most importantly they’re more correct than some random person vibe blogging.
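If it helps anyone get started, here's a minimal sketch of the first step (my own code: stdlib only, with time.sleep standing in for an expensive hash like bcrypt, and port 8000 picked arbitrarily):

import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class SlowHashHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        time.sleep(0.2)  # stand-in for bcrypt with a high cost factor
        digest = format(hash(body), "x").encode()  # placeholder "hash" of the input
        self.send_response(200)
        self.end_headers()
        self.wfile.write(digest)

if __name__ == "__main__":
    # Single-threaded: requests are handled strictly one at a time, so 100
    # concurrent clients take roughly 100 * 0.2s in total.
    HTTPServer(("127.0.0.1", 8000), SlowHashHandler).serve_forever()

Swapping HTTPServer for http.server.ThreadingHTTPServer gives you the thread-per-request variant from the next step, and the thread pool / multiprocessing.Pool / asyncio comparisons follow from there.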
async def somethingLongRunning():
    ...

x = somethingLongRunning()
... other work that will take a lot of time ...
await x  # with the expectation that this will be instant if the other work was long enough
That's counterintuitive coming from other languages and seems to defeat one of the key benefits of async/await (easy writing of async operations)?

I've seen so many scripts where tasks that should be concurrent weren't, simply because the author couldn't be arsed to deal with all the boilerplate needed for async. JavaScript-style async/await solves this.
https://www.reddit.com/r/rust/comments/8aaywk/async_await_in...
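To make that complaint concrete, here's a minimal sketch (my own example) of why so many scripts end up sequential: the natural-looking version awaits one call after another, and making it actually concurrent requires the extra create_task (or gather) step.

import asyncio

async def fetch(name):
    await asyncio.sleep(1)  # stand-in for a slow network call
    return name

async def sequential():
    # Reads naturally, but the two awaits run one after the other: ~2s total.
    return await fetch("a"), await fetch("b")

async def concurrent():
    # The extra step people skip: wrap the coroutines in tasks so the waits overlap: ~1s total.
    ta = asyncio.create_task(fetch("a"))
    tb = asyncio.create_task(fetch("b"))
    return await ta, await tb

print(asyncio.run(sequential()))
print(asyncio.run(concurrent()))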
- What happens when MyCoroutine() is invoked (as an ordinary function invocation - no await etc.): does execution of the body start right then, or do you just get some sort of awaitable object that can be used to start it later?
- What happens when the result of said invocation is awaited (using your language's await operator plus any extra function calls needed to begin execution): is there a forced yield to the scheduler at any point here, or do you directly start executing the body?
The article seems to be treating the two as an indivisible pair, and thus discussing the behavior merely before and after the pair as a whole, but your link seems to be discussing the first?
Again, I'm quite unsure, so curious if anyone has thoughts.
async function sleep(t) {
    return new Promise((resolve) => {
        setTimeout(() => resolve(), t)
    })
}

async function child() {
    console.log("child start")
    await sleep(500)
    console.log("child end")
}

async function parent() {
    console.log("parent start")
    await child()
    console.log("parent end")
}

parent()
It will print:

parent start
child start
child end
parent end
Just like the Python version. This article is confused by the fact that JS has a lot more code that returns a promise than Python does, and thinks that means the behavior is different. It isn't.

You can roll your own event loop without asyncio by accumulating coroutines in Python and awaiting them in whatever order you want. There is no built-in event loop, however. You can do the same in JavaScript, but there you do have a fairly complex event loop (see microtasks), in the sense that there is no running environment without it, and if you want a secondary one you have to roll it yourself.
create_task() simply registers a coroutine with the event loop and returns a “future” which basically says “once the main event loop is done awaiting your coroutine this is the ticket to get the result/exception”. That’s the whole magic of an event loop. It is the difference between dropping off a shirt at a dry cleaner and waiting there for them to be done with it (no you aren’t doing the work but you are also not doing anything else), and dropping it off then leaving to get lunch then coming back to wait until the cleaner is done with your pickup ticket in hand (concurrency).
But fundamentally awaiting an async function that doesn’t actually do anything async won’t give you parallelism in a single-threaded environment. More at 11.
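To make the roll-your-own point concrete, here's a minimal sketch (names are mine) of driving a coroutine to completion by hand, with no asyncio loop running at all:

async def greet(name):
    return f"hello {name}"

coro = greet("world")
try:
    coro.send(None)    # start the coroutine; it runs until it suspends or returns
except StopIteration as stop:
    print(stop.value)  # "hello world" -- the return value rides on the StopIteration

A real event loop is roughly this in a loop over many coroutines, plus bookkeeping for the futures they suspend on; create_task() is how a coroutine gets registered into that bookkeeping.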
It's saying that the action of calling an async function (e.g. one you've written) isn't itself a yield point. The only yield points are places where the call would block for external events like IO or time - `await asyncio.sleep(100)` would be one of those.
This is true, but surely fairly irrelevant? Any async function call has somewhere in its possible call tree one of those yield points. If it didn't then it wouldn't need to be marked async.
And like ... I take no pleasure in calling that out, because I have been exactly where the author is when they wrote it: dealing with reams of async code that doesn't actually make anything concurrent, droves of engineers convinced that "if my code says async/await then it's automagically performant a la Golang", and complex and buggy async control flows which all wrap synchronous, blocking operations in a threadpool at the bottom anyway.
But it's still wrong and incomplete in several ways.
First, it conflates task creation with deferred task start. Those two behaviors are unrelated. Calling "await asyncfunc()" spins the generator in asyncfunc(); calling "await create_task(asyncfunc())" does, too. Calling "create_task(asyncfunc())" without "await" enqueues asyncfunc() on the task list so that the event loop spins its generator next time control is returned to the loop.
Second, as other commenters have pointed out, it mischaracterizes competing concurrency systems (Loom/C#/JS).
Third, its catchphrase of "you must call create_task() to be concurrent" is incomplete--some very common parts of the stdlib call create_task() for you, e.g. asyncio.gather() and others. Search for "automatically scheduled as a Task" in https://docs.python.org/3/library/asyncio-task.html
Fourth--and this seems like a nitpicky edge case but I've seen a surprising amount of code that ends up depending on it without knowing that it is--"await on coroutine doesn't suspend to the event loop" is only usually true. There are a few special non-Task awaitables that do yield back to the loop (the equivalent of process.nextTick from JavaScript).
To illustrate this, consider the following code:
import asyncio

async def sleep_loop():
    while True:
        await asyncio.sleep(1)
        print("Sleep loop")

async def noop():
    return None

async def main():
    asyncio.create_task(sleep_loop())
    while True:
        await noop()

asyncio.run(main())
As written, this supports the article's first section: the code will busy-wait forever in while-True-await-noop() and never print "Sleep loop".

Related to my first point above, if "await noop()" is replaced with "await create_task(noop())" the code will still busy loop, but will yield (nextTick-equivalent) on each iteration of the busy loop, so "Sleep loop" will be printed. Good so far.
But what if "await noop()" is replaced with "await asyncio.sleep(0)"? asyncio.sleep is special: it's a regular pure-python "async def", but it uses a pair of async intrinsic behaviors (a tasks.coroutine whose body is just "yield" for sleep-0, or a asyncio.Future for sleep-nonzero). Even if the busy-wait is awaiting sleep-0 and no futures/tasks are being touched, it still yields. This special behavior confuses several of the examples in the article's code, since "await returns-right-away" and "await asyncio.sleep(0)" are not behaviorally equivalent.
Similarly, if "await noop()" is replaced with "await asyncio.futures.Future()", the task runs. This hints at the real Python asyncio maxims (which, credit where it's due, the article gets pretty close to!):
Async operations in Python can only interleave (and thus be concurrent) if a given coroutine's stack calls "await" on:
1. A non-completed future.
2. An internal intrinsic awaitable which yields to the loop.
3. One of a few special Python function forms which are treated equivalently to the above.
Tasks do two things:
1. Schedule a coroutine to be "await"ed by the event loop itself when it is next yielded to.
2. Provide a Future-based handle that can optionally be used to directly wait for that coroutine's completion when the loop runs it.
(As underlined in the article) everything interesting with Python's async concurrency uses Tasks.
Wrapping Tasks are often automatically/implicitly created by the stdlib or by other functions that run supplied coroutines.

If "await asyncio.sleep(0)" doesn't yield control back to an event loop, then what the heck is it actually for?