The article even mentions an arguably better approach (check on a timer), but for some reason claims it is worse.
Those integrations are not exactly good designs regardless; the solution is simply not to use std::future, and to use non-blocking async mechanisms that can cooperate on the same thread instead. Standard C++ has one, albeit somewhat overcomplicated: senders and receivers. Asio also works.
The other option is, as you mention, polling using a timer, but I don't see how that's better; I'd rather move the work off of the event loop to a thread. And then you have to do the "latency vs. CPU time" tradeoff dance, trying to judge how often to poll vs. how much latency you're willing to accept.
How do you know what timeout to use for the timer? You may end up with tons of unnecessary polling if your timeout is too short, or high latency if your timeout is too long.
>Standard C++ has one albeit somewhat overcomplicated, senders and receivers
*in C++26
Adjust based on your minimum expected latency, potentially with exponential backoff. You may also want to account for the scheduler overhead of the spurious wake-ups.
Basically, if you check somewhere between every millisecond and every 10 ms, you should be fine.
> in C++26
it's a library, language support is not necessary.
There are implementations you can use right now.
Look at what they can do when it's clearly a good idea and has the backing of the absolute apex predator experts: reflection. If reflection can fucking sail through and your thing struggles, it's not going to make it.
Andrew Kelley just proposed the first plausible IO monad for a systems language, and if I wanted to stay relevant in async C++ innovation I'd just go copy it. Maybe invert the lift/unlift direction.
The coroutines TS is heavily influenced by folly coroutines (or vice versa), a thing with which I have spent many a late night debugging segfaults. Not happening.
Besides, if threads are too slow or big now? Then everything but liburing is.
There are people for whom modules and coroutines have already happened, and there are better debugging experiences out there than gdb.
Things that affect the runtime or ecosystem of tools are obviously more complicated, especially given that those things aren't really covered by the standard.
But yes, do not use std::future except for the most simple tasks.
I have to acknowledge that none of the other ISO languages, including C, are this radical.
That is how we have been getting so many warts lately.
Unfortunately there doesn't seem to be any willingness to change this, and by the time there is, it will be too late to matter.
Whatever we end up with, std::future just wasn't a good base for a high-performance async story. Still, just adding a readiness callback to std::future would make it infinitely more useful, even if suboptimal. At least it would be usable where performance is not a concern.
Instead, the committee attempts to work towards perfect solutions that don't exist, and ends up releasing overengineered stuff that is neither the most convenient, performant, nor efficient solution. Like <random>.
The three surviving compilers are already lagging as it is; none of them is fully C++20 compliant, C++23 might only become fully supported on two of them, and let's see how C++26 compliance turns out. Meanwhile, C++17 parallel algorithms are only fully available in one of them, while the other two require TBB and libstdc++ to actually make use of them.
A random(min, max) function isn't rocket science and would already be a major improvement over the three-liner that is currently necessary. The major compiler devs wouldn't take long to implement these cases, just as it did not take them long to implement simple yet useful functionality in previous versions of the standard. And the standard library is full of these cases of missing convenience functions over deliberately over-engineered ones.
Anyone using a recent version of Office is using code that was written with C++20 modules.
It is relatively easy to see how far behind compiler developers are regarding even basic features.
Note that two of the three major surviving compilers are open source projects, and in all three major compilers, the big names have ramped down their contributions, as they rather invest into their own languages, seeing the current versions as good enough for existing codebases.
Badly designed library types that end up being effectively deprecated but you still need to deal with for decades because they end up in all kinds of interfaces are not.
https://en.cppreference.com/w/cpp/language/coroutines.html#c...
It does about 20 different steps with a ton of opportunities for overloading and type conversion. Insanely complicated!
And they kept up the pattern of throwing UB everywhere:
> Falling off the end of the coroutine is equivalent to co_return;, except that the behavior is undefined if no declarations of return_void can be found in the scope of Promise.
Why?? Clearly they have learnt nothing from decades of C++ bugs.
Hopefully Rust gets coroutines soon...
it doesn't look meaningfully more complex than C#'s spec (which has absolutely horrendous stuff like :throw-up-emoji: inheriting from some weird vendor type like "System.Runtime.CompilerServices.INotifyCompletion")?
https://learn.microsoft.com/en-us/dotnet/csharp/language-ref...
My personal use was clearly `async`/`await`, and this landed quite some time ago.
Coroutines in Python are fantastically useful and allow more reliable implementation of networking applications. There is a complexity cost to pay but it's small and resolves other complexity issues with using threads instead, so overall you end up with simpler code that is easier to debug. "Hello world" (e.g., with await sleep(1) to make it non-trivially async) is just a few lines.
But coroutines in C++ are so stupendously complicated I can't imagine using them in practice. The number of concepts you have to learn to write a "hello world" application is huge. Surely the callback-on-completion style already possible with ASIO (where the callback is usually another method in the same object as the current function) is going to lead to simpler code, even if it's a few lines longer than with coroutines?
Edit: We have a responsibility as senior devs (those of us that are) to ensure that code isn't just something we can write but that others can read, including those that don't spend their spare time reading about obscure C++ ideas. I can't imagine who in good faith thinks that C++ coroutines fall into this category.
Boost ASIO seemed to be the first serious coroutine library for C++ and that seemed complex to use (I'm saying that as a long-time user of its traditional callback API) but that's perhaps not surprising given that it had to fit with its existing API. But then there was a library (I forget which) posted to HN that was supposed to be a clean fresh coroutine library implementation and that still seems more complex than ASIO and callbacks - it seemed like you needed to know practically every underlying C++ coroutine concept. But maybe there just needed to be time for libraries to mature a bit.
> and that seemed complex to use
Actually, I found it pretty straightforward. I switched from callbacks to coroutines in my personal project and it is a massive win! Now I can write simple loops instead of nested callbacks. Also, most state can now stay in local variables.
If you avoid synchronization, like JavaScript, then you also don't get pre-emption or parallelism.
Well, some people would call this a feature rather than a problem. Many real-world programs need to access shared state or exchange data between clients. This is significantly less error-prone if everything happens on a single thread.
> If you avoid synchronization, like JavaScript, then you also don't get pre-emption or parallelism.
When we are talking about networking, most of the time is spent waiting for I/O. We need concurrency, but there's typically no need for actual CPU level parallelism.
I'm not saying that we shouldn't use threads at all (on the contrary!), but we should use them where they make sense. In some cases we can't even avoid it (e.g. audio).
A typical modern desktop application, for example, would have the UI on the main thread, all the networking on a network thread, audio on an audio thread, expensive calculations on a worker thread (pool), etc.
IMO it just doesn't make sense to complicate things by having one thread per socket when all the networking can easily be served by a single thread.
I didn’t say that. You can serve multiple sockets on a thread.
I could respond to more points, but ultimately my point is this: if, for, switch, etc. is the kind of code you can read and debug, and async/callback code is not. Async/await tries to make the code look more like regular code but doesn't succeed. I'm just advocating for actually writing normal blocking code.
A thread is exactly the right abstraction - a program flow. Synchronization is a reality of having multiple flows of execution.
I’m interested in the project mentioned in the sibling comment about virtual threads which maybe reduces the overhead (alleviating your I/O bound concern) but allows you to write this normal code.
But how would you do that with blocking I/O (which you have been suggesting)? As soon as multiple sockets are receiving data, blocking I/O requires threads.
> Async await tries to make the code look more like regular code but doesn’t succeed.
Can you be more specific? I'm personally very happy with ASIO + coroutines.
> A thread is exactly the right abstraction - a program flow.
IMO the right abstraction for concurrent program flow are suspendable and resumable functions (= coroutines) because you know exactly how the individual subprograms may interleave.
OS threads add parallelism, which means the subprograms can interleave at arbitrary points. This actually takes away control from you, which you then have to regain with critical sections, message queues, etc.
> Synchronization is a reality of having multiple flows of execution.
Depends on what kind of synchronization you're talking about. Thread synchronization is obviously only required when you have more than one thread.
When you read/write to a socket, you can configure a timeout with the kernel to wait. If no data is ready, you can try another socket. The timeout can be 0.
So you can serve N sockets in a while loop by checking one at a time which is ready.
> Can you be more specific? I'm personally very happy with ASIO + coroutines
1. You now have to color every function as async and there is an arbitrary boundary between them.
2. The debugger doesn’t work.
3. Because there is no pre-emption long tasks can starve others.
4. When designing an API you have to consider whether to use coroutines or not and either approach is incompatible with the other.
> Thread synchronization is obviously only required when you have more than one thread.
Higher-level concept: if you have two independent running computations, they must synchronize. Or they aren't really independent (which is what you're praising).
That's non-blocking I/O ;-) Except you typically use select(), poll() or epoll() to wait on multiple sockets simultaneously. The problem with that approach is obviously that you now have a state machine and need to multiplex between several sockets.
> You now have to color every function as async and there is an arbitrary boundary between them.
Not every function, only the ones you want to yield from/across. But granted, function coloring is a well-known drawback of many async/await implementations.
> 2. The debugger doesn’t work.
GDB seems to work just fine for me: I can set breakpoints, inspect local variables, etc. I googled a bit and apparently debugging coroutines used to be terrible, but has improved a lot recently.
> 3. Because there is no pre-emption long tasks can starve others.
If you have a long running task, move it to a worker thread pool, just like you would in a GUI application (so you don't block the UI thread).
Side note: Java's virtual threads are only preempted at specific points (I/O, sleep, etc.), so they can also starve each other if you do expensive work on them.
> 4. When designing an API you have to consider whether to use coroutines or not and either approach is incompatible with the other.
Same with error handling (e.g. error codes vs. exceptions). Often you can provide both styles, but it's more work for library authors. I'll give you that.
You're right, coroutines are no silver bullet and certainly have their own issues. I just found them pretty nice to work with so far.
As for how we got this far without coroutines, the options were:

1) Using a large number of processes/threads.

2) Raw callback-oriented mechanisms (with all the downsides).

3) Structured async where you pass in lambdas - the benefit is you preserve the sequential structure and can have proper error handling if you stick to the structure. The downside is you are effectively duplicating language facilities in the methods (e.g. .then(), .exception()). Stack traces are often unreadable.

4) Raw use of various callback-oriented mechanisms like epoll and such, with the cost in code readability etc., and/or coupled with custom-written strategies to ease readability (so a subset of #3 really).
With C++ coroutines, the benefit is you can write it almost like you usually do (line-by-line, sequentially) even though it works asynchronously.