Then you top it off with the `?` shortcut and the functional interface of Result, and suddenly error handling becomes fun and easy to deal with, rather than just "return false" with a "TODO: figure out error handling".
SerenityOS is the first functional OS (as in "boots on actual hardware and has a GUI") I've seen that dares to question the 1970s int main(), using modern C++ constructs instead, and the API is simply a lot better.
I can imagine someone writing a better standard library for C++ that works a whole lot like Rust's standard library does. Begone with the archaic integer types, make use of the power your language offers!
If we're comparing C++ and Rust, I think the ease of use of enum classes/structs is probably a bigger difference. You can get pretty close in C++, but Rust avoids a lot of boilerplate, which makes them quite usable, especially when combined with the match keyword.
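For the sake of illustration, a minimal Rust sketch (names are mine) of the kind of data-carrying enum plus match that takes noticeably more ceremony (std::variant, std::visit, visitor structs) to approximate in C++:

  // A data-carrying enum, matched exhaustively; the compiler rejects missing arms.
  enum Shape {
      Circle { radius: f64 },
      Rect { w: f64, h: f64 },
  }

  fn area(s: &Shape) -> f64 {
      match s {
          Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
          Shape::Rect { w, h } => w * h,
      }
  }

  fn main() {
      println!("{}", area(&Shape::Rect { w: 3.0, h: 4.0 }));
  }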
I think C++, the language, is ready for the modern world. However, C++, the community, seems to be stuck at least 20 years in the past.
A long time ago, there was talk of a similar concept for C++, based on exception objects, in a more "standard" way that could feasibly be added to the standard library: the expected<T> class. And... in C++23, std::expected does exist[1], and you don't need to use exception objects or anything awkward like that; it can work with arbitrary error types just like Result. Unfortunately, it's so horrifically late to the party that I'm not sure C++23 will reach critical adoption quickly enough for any major C++ library to actually adopt it, unless C++ has another massive resurgence like it did after C++11. That said, if you're writing C++ code and you want a "standard" mechanism like the Result type, it's probably the closest thing there will ever be.
Messing up error handling isn’t hard to do, so putting undefined behaviour here feels very dangerous to me, but it is the C++ way.
But as you learn to work with StatusOr you'll end up just using ASSIGN_OR_RETURN every time, and dereferencing remains scary. I guess the complaint is that C++ won't guarantee that execution will stop, but that's the C++ way once you drop all safety checks in `StatusOr::operator*` to gain performance.
There really is no reasonable workaround here; the language needs to be amended to make this safe and ergonomic. They tried to be cheeky with some of the other APIs, like std::variant, but really the best you can do is chuck the conditional branch into a lambda (or another function-based implementation of visitors), and the ergonomics of that are pretty unimpressive.
Edit: but maybe fortune will change in the future, for anyone who still cares:
https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2024/p26...
This is one of the major reasons I switched to Rust: just to escape spending my whole life worrying about bugs caused by UB.
I think the issue is that this just isn't particularly good either. If you do that, then you can't catch it like an exception, but you also can't statically verify that it won't happen.
C++ needs less of both undefined behavior and runtime errors. It needs more compile-time errors. It needs pattern matching.
(Going to moan for a bit, and I realise you aren’t responsible for the C++ standards mess!)
I have been hearing for about… 20 years now that UB gives compilers and tools the freedom to produce whatever error catching they like, but all it seems to have done, in the main, is give them the freedom to produce hard-to-debug crash code.
You can of course usually turn on some kind of “debug mode” in some compilers, but why not just enforce that as standard? Compilers would still be free to add a “standards non-compliant” go fast mode if they like.
I don't think people want that as standard. The whole point of using C++ tends to be that you can do whatever you need to for the sake of performance. The language is also heavily driven by firms that need extreme performance (because otherwise, why not use a higher-level language).
There are knobs like stdlib assertions and ubsan, but that’s opt-in because there’s a cost to it. Part of it is also the commitment to backwards compatibility and code that compiled before should generally compile now (though there are exceptions to that unofficial rule).
Most users will do this:
1. Check if there is a value
2. Get the value
There is nothing theoretically preventing the compiler from enforcing that step 1 happens before step 2, especially if the compiler is able to combine the control flow branch with the process of conditionally getting the value. The practical issue is that there's no way to express this in C++ at all. The best you can do is the visitor pattern, which has horrible ergonomics and you can only hope it doesn't cause worse code generation too.
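To make that concrete, here's a sketch of what fusing the branch with the access looks like in a language that has it (Rust's match, since C++ currently has no equivalent):

  // "Check" and "get" are one construct: the value is only in scope
  // in the branch where the check succeeded, so step 2 can't precede step 1.
  fn first_even(xs: &[i32]) -> i32 {
      match xs.iter().find(|x| *x % 2 == 0) {
          Some(x) => *x, // the value exists here, and only here
          None => 0,     // this branch provably has no value to misuse
      }
  }

  fn main() {
      assert_eq!(first_even(&[1, 3, 4]), 4);
      assert_eq!(first_even(&[1, 3]), 0);
  }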
Some users want to do this:
1. Grab the value without checking to see if it's valid. They are sure it will be valid and can't or don't want to eat the cost of checking.
There is nothing theoretically preventing this from existing as a separate method.
I'm not a Rust fanboy (seriously, check my GitHub @jchv and look at how much Rust I write, it's approximately zero) but Rust has this solved six ways to Sunday. It can do both of these cases just fine. The only caveat is that you have to wrap the latter case in an unsafe block, but either way, you're not eating any costs you don't want to.
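A minimal sketch of the two access modes (Option::unwrap_unchecked is real, stable Rust; the wrapper functions are mine):

  fn checked(v: Option<u32>) -> u32 {
      // the compiler forces you to handle None before touching the value
      v.unwrap_or(0)
  }

  fn trusted(v: Option<u32>) -> u32 {
      // caller has promised v is Some; no branch is emitted, UB if the promise is broken
      unsafe { v.unwrap_unchecked() }
  }

  fn main() {
      assert_eq!(checked(None), 0);
      assert_eq!(trusted(Some(7)), 7);
  }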
C++ can do this too. There is an active proposal for a feature that would fix this problem and make a much more ergonomic std::variant possible, too.
https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2024/p26...
Of course, this is one single microcosm in the storied history of C++ failing to adequately address the problem of undefined behavior proliferating the language, so I don't have high hopes.
Since CPUs handle such things differently, whatever you define to happen means that the compiler has to insert an if to check on any CPU that doesn't work how you defined it - all for something that you are probably not doing. The cost is too high in a tight loop when you know this won't ever happen (but the compiler does not).
No
I think there is a solid case for the existence of undefined behavior; even Rust has it, it's nothing absurd in concept, and you do describe some reasoning for why it should probably exist.
However, and here's the real kicker, it really does not need to exist for this case. The real reason it exists for this case is due to increasingly glaring deficiencies in the C++ language, namely, again, the lack of any form of pattern matching for control flow. Because of this, there's no way for a library author, including the STL itself, to actually handle this situation succinctly.
Undefined behavior indeed should exist, but not for common cases like "oops, I didn't check to see if there was actually a value here before accessing it." Armed with a moderately sufficient programming language, the compiler can handle that. Undefined behavior should be more like "I know you (the compiler) can't know this is safe, but I already know that this unsafe thing I'm doing is actually correct, so don't generate safeguards for me; let what happens, happen." This is what modern programming languages aim to do. C++ does that for shit like basic arithmetic, and that's why we get to have the same fucking CVEs for 20+ years, over and over in an endless loop. "Just get better at programming" is a nice platitude, but it doesn't work. Even if it was possible for me to become absolutely perfect and simply just never make any mistakes ever (lol) it doesn't matter because there's no chance in hell you'll ever manage that across a meaningful segment of the industry, including the parts of the industry you depend on (like your OS, or cryptography libraries, and so on...)
And I don't think the issue is that the STL "doesn't care" about the possibility that you might accidentally do something that makes no sense. Seriously, take a look at the design of std::variant: it is pretty obvious that they wanted to design a "safe" union. In fact, what the hell would the point of designing another unsafe union be in the first place? So they go the other route. std::variant has getters that throw exceptions on bad accesses instead of undefined behavior. This is literally the exact same type of problem that std::expected has. std::expected is essentially just a special case of a type-safe union with exactly two possible values, an expected and unexpected value (though since std::variant is tagged off of types, there is the obvious caveat that std::expected isn't quite a subset of std::variant, since std::expected could have the same type for both the expected and unexpected values.)
So, what's wrong? Here's what's wrong. C++ Modules were first proposed in 2004[1]. C++20 finally introduced a version of modules and lo and behold, they mostly suck[2] and mostly aren't used by anyone (Seriously: they're not even fully supported by CMake right now.) Andrei Alexandrescu has been talking about std::expected since at least 2018[3] and it just now finally managed to get into the standard in C++23, and god knows if anyone will ever actually use it. And finally, pattern matching was originally proposed by none other than Bjarne himself (and Gabriel Dos Reis) in 2019[4] and who knows when it will make it into the standard. (I hope soon enough so it can be adopted before the heat death of the Universe, but I think that's only if we get exceptionally lucky.)
Now I'm not saying that adding new and bold features to a language as old and complex as C++ could possibly ever be easy or quick, but the pace that C++ evolves at is sometimes so slow that it's hard to come to any conclusion other than that the C++ standard and the process behind it is simply broken. It's just that simple. I don't care what changes it would take to get things moving more efficiently: it's not my job to figure that out. It doesn't matter why, either. The point is, at the end of the day, it can't take this long for features to land just for them to wind up not even being very good, and there are plenty of other programming languages that have done better with less resources.
I think it's obvious at this point that C++ will never get a handle on all of the undefined behavior; they've just introduced far too much undefined behavior all throughout the language and standard library in ways that are going to be hard to fix, especially while maintaining backwards compatibility. It should go without saying that a meaningful "safe" subset of C++ that can guarantee safety from memory errors, concurrency errors or most types of undefined behavior is simply never going to happen. Ever. It's not that it isn't possible to do, or that it's not worth doing, it's that C++ won't. (And yes, I'm aware of the attempts at this; they didn't work.)
The uncontrolled proliferation of undefined behavior is ultimately what is killing C++, and a lot of very trivial cases could be avoided, if only the language was capable of it, but it's not.
[1]: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2004/n17...
[2]: https://vector-of-bool.github.io/2019/01/27/modules-doa.html
[3]: https://www.youtube.com/watch?v=PH4WBuE1BHI
[4]: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p13...
Divide by zero must be undefined behavior in any performant language. On x86 you either have an if before running the divide (which of course in some cases the compiler can optimize out, but only if it can determine the value is not zero), or the CPU will trap into the OS - different OSes handle this in different ways, but most not in a way that makes it possible to figure out where you were and thus do something about it. This just came up on the C++ std-proposals mailing list in the past couple of weeks.
AFAIK all common CPUs have the same behavior on integer overflow (two's complement). However, in almost all cases (again, some encryption code is an exception) that behavior is useless to real code, so if it happens your code has a bug either way. Thus we may as well let compilers optimize assuming it cannot happen, since if it does happen you have a bug no matter what we define it as. (C++ is used on CPUs that are not two's complement as well, but we could call this implementation-defined or unspecified; it doesn't change that you have a bug if you invoke it.)
For std::expected - new benchmarks are showing that in the real world, with optimized exception handlers, exceptions are faster than systems that use things like expected. Microbenchmarks that show exceptions being slower are easy to create, but real-world exceptions that unwind more than a couple of function calls show different results.
As for modules, support is finally here and early adopters are using it. The road was long, but it is finally proving it worked.
Long roads are a good thing. C++ has avoided a lot of bad designs by spending a lot of time thinking about problems. Details often matter, and move-fast languages tend to run into problems when something doesn't work as well as they want. I'm glad C++ standardization is slow - the language is already a mess without adding more half-baked features.
I mean look, I already agree that it's not necessarily unreasonable to have undefined behavior, but this statement is purely false. You absolutely can eat your cake and have it too. Here's how:
- Split the operation in two: safe, checked division, and fast, unchecked division.
- OR, Stronger typing; a "not-zero" type that represents a numeric type where you can guarantee the value isn't zero. If you can't eat the cost of runtime checks, you can unsafely cast to this.
I think the former is a good fit for C++.
C++ does not have to do what Rust does, but for sake of argument, let's talk about it. What Rust does here is simple, it just defines divide-by-zero to panic. How? Multiple ways:
- If it knows statically it will panic, that's a compilation error.
- If it knows statically it can not be zero, it generates unchecked division.
- If it does not know statically, it generates a branch. (Though it is free to implement this however it wants; could be done using CPU exceptions/traps if they wanted.)
What if you really do need "unsafe" division? Well, that is possible, with unchecked_div. Most people do not need unchecked_div. If you think you do but you haven't benchmarked yet, you do not. It doesn't get any simpler than that. This is especially the case if you're working on modern CPUs with massive pipelines and branch predictors; a lot of these checks wind up having a very close to zero cost.
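A sketch of how those flavors look in today's Rust (checked_div and the NonZeroU32 Div impl are both stable std; this mirrors the "checked division" and "not-zero type" options above):

  use std::num::NonZeroU32;

  fn main() {
      let a: u32 = 10;

      // default: panics on zero; the branch is elided when provably nonzero
      println!("{}", a / 2);

      // explicitly fallible: no panic, you handle the None
      assert_eq!(a.checked_div(0), None);

      // "can't be zero" encoded in the type: no check needed at the division site
      let nz = NonZeroU32::new(2).unwrap();
      println!("{}", a / nz);
  }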
> AFAIK all common CPUs have the same behavior on integer overflow (two's complement). However, in almost all cases (again, some encryption code is an exception) that behavior is useless to real code, so if it happens your code has a bug either way. Thus we may as well let compilers optimize assuming it cannot happen, since if it does happen you have a bug no matter what we define it as. (C++ is used on CPUs that are not two's complement as well, but we could call this implementation-defined or unspecified; it doesn't change that you have a bug if you invoke it.)
It would be better to just do checked arithmetic by default; the compiler can often statically eliminate the checks, you can opt out of them if you need performance and know what you're doing, and the cost of checks is unlikely to be noticed on modern processors.
It doesn't matter that this usually isn't a problem. It only has to be a problem once to cause a serious CVE. (Spoiler alert: it has happened more than once.)
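That is roughly what Rust ships today: overflow checks are on in debug builds, and `overflow-checks = true` in Cargo's release profile keeps them on in release. A tiny sketch of the explicit form:

  fn main() {
      let x: u8 = 255;
      // With overflow checks on (debug default; `overflow-checks = true`
      // in the release profile), `x + 1` panics instead of silently wrapping.
      // The explicit, always-checked form returns an Option:
      assert_eq!(x.checked_add(1), None);
      assert_eq!(x.checked_add(0), Some(255));
  }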
> For std::expected - new benchmarks are showing that in the real world, with optimized exception handlers, exceptions are faster than systems that use things like expected. Microbenchmarks that show exceptions being slower are easy to create, but real-world exceptions that unwind more than a couple of function calls show different results.
You can always use stack unwinding or exceptions if you want to; that's also present in Rust too, in the form of panic. The nice thing about something like std::expected is that it theoretically can bridge the gap between code that uses exceptions and code that doesn't: you can catch an exception and stuff it into the `e` of an std::expected value, or you can take the `e` value of an std::expected and throw it. In theory this should not have much higher cost than simply throwing.
> As for modules, support is finally here and early adopters are using it. The road was long, but it is finally proving it worked.
Last I was at Google, they seemed to have ruled out C++ modules because as-designed they are basically guaranteed to make compilation times worse.
For CMake, you can't really rely on C++ Modules. Firstly, the Makefile generator, which is the default on most platforms, literally does not and, as far as I know, will not support C++ Modules. Secondly, CMake doesn't support header units or importing the STL as modules. For all intents and purposes, it would be difficult to even use this for anything.
For Bazel, there is no C++ Modules support to my knowledge.
While fact-checking myself, I found this handy website:
https://arewemodulesyet.org/tools/
...which shows CMake as supporting modules, green check mark, no notes! So that really makes me wonder what value you can place on the other green checkmarks.
> Long roads are a good thing. C++ has avoided a lot of bad designs by spending a lot of time thinking about problems. Details often matter, and move-fast languages tend to run into problems when something doesn't work as well as they want. I'm glad C++ standardization is slow - the language is already a mess without adding more half-baked features.
I'm glad you are happy with the C++ standardization process. I'm not. Not only do things take many years, they're also half-baked at the end of the process. You're right that C++ still winds up with a huge mess of half-baked features even with as slow as the development process is, and modules are a great example of that.
The true answer is that the C++ committee is a fucking mess. I won't sit here and try to make that argument; plenty of people have done a damningly good job of it, better than I ever could. What I will say is that C faces a lot of similar problems to C++ and somehow still manages to make better progress anyway. The failure of the C++ standard committee could be told in many different ways. A good relatively recent example is the success of the #embed directive[1]. Of course, the reason it was successful was that it was added to C instead of C++.
Why can't C++ do that? I dunno. Ask Bjarne and friends.
There is no reason why make could not work with modules if someone wanted to go through the effort. The CMake people have even outlined what needs to be done. Ninja is so much nicer that you should switch anyway - I did more than 10 years ago.
Anyway, the Google C++ style guide has nothing to do with why C++ modules aren't and won't be used at Google, it's because as-implemented modules are not an obvious win. They can theoretically improve performance, but they can and do also make some cases worse than before.
I don't think most organizations will adopt modules at this rate. I suspect the early adopters will wind up being the only adopters for this one.
> the lack of any form of pattern matching for control flow
Growing features after the fact is hard. Look at the monumental effort to get generics into Go. Look at how even though Python 3.10 introduced the match statement, it is a statement and not an expression - you can't write `x = match ...`, unlike Rust and Java 14. So it doesn't surprise me that C++ struggles with this.
> Undefined behavior indeed should exist
Agreed. Rust throws up its hands in narrow cases ( https://doc.rust-lang.org/reference/behavior-considered-unde... ), and even Java says that calling Thread.stop() and forcing monitor unlocks can lead to corrupted data and UB.
> but not for common cases like
Yes, C/C++ have far, far too many UB cases. Even down to idiotically simple things like "failing to end a source file with newline". C and C++ have liberally sprinkled UB as a cop-out like no other language.
> C++ does that for shit like basic arithmetic
I spent an unhealthy amount of time understanding the rules of integer types and arithmetic in C/C++. Other languages like Rust are as capable without the extreme mental complexity. https://www.nayuki.io/page/summary-of-c-cpp-integer-rules
Oh and, `(uint16_t)0xFFFF * (uint16_t)0xFFFF` will cause a signed 32-bit integer overflow on most platforms (both operands are promoted to int before the multiplication, and 0xFFFF × 0xFFFF exceeds INT_MAX), and that is UB and will eat your baby. Scared yet? C/C++ rules are batshit insane.
> "Just get better at programming" is a nice platitude, but it doesn't work.
Correct. Far too often, I hear a conversation like "C/C++ have too many UB, why can't we make it safer?" "Just learn to write better code, dumbass". No, literal decades of watching the industry tells us that the same mistakes keep happening over and over again. The evidence is overwhelming that the languages need to change, not the programmers.
> it's obvious at this point that C++ will never get a handle on all of the undefined behavior; they've just introduced far too much undefined behavior all throughout the language and standard library
True.
> in ways that are going to be hard to fix, especially while maintaining backwards compatibility
Technically not true. Specifying undefined behavior is easy, and this has already been done in many ways. For example, -fwrapv makes signed overflow defined to wrap around. For example, you could zero-initialize every local variable and change malloc() to behave like calloc(), so that reading uninitialized memory always returns zero. And because the previous behavior was undefined anyway, literally any substitute behavior is valid.
The problem isn't maintaining backward compatibility, it's maintaining performance compatibility. Allegedly, undefined behavior allows the compiler to optimize out redundant arithmetic, redundant null checks, etc. I believe this is what stops the standards committees from simply defining some kind of reasonable behavior for what is currently considered UB.
> a meaningful "safe" subset of C++ that can guarantee safety from memory errors, concurrency errors or most types of undefined behavior is simply never going to happen
I think it has already happened. Fil-C seems like a capable approach to transpile C/C++ and add a managed runtime - and without much overhead. https://github.com/pizlonator/llvm-project-deluge/blob/delug...
> The uncontrolled proliferation of undefined behavior is ultimately what is killing C++
It's death by a thousand cuts, and it hurts language learners the most. I can write C and C++ code without UB, but it took me a long time to get there - with a lot of education and practice. And UB-free code can be awkward to write. The worst part of it is that the knowledge is very C/C++-specific and is useless in other languages because they don't have those classes of UB to begin with.
I dabbled in C++ programming for about 10 years before I discovered Rust. Once I wrote my first few Rust programs, I was hooked. Suddenly, I stopped worrying about all the stupid complexities and language minutiae of C++. Rust just made sense out of the box. It provided far fewer ways to do things ( https://www.nayuki.io/page/near-duplicate-features-of-cplusp... ), and the easy way is usually the safe and correct way.
To me, Rust is C++ done right. It has the expressive power and compactness of C++ but almost none of the downsides. It is the true intellectual successor to C++. C++ needs to hurry up and die already.
Good point. A language that gets updated by adding a lot of features is DIVERGING from a community that mostly consists of people who still use a lot of the C baggage in C++, with only a few folks using heavy template abstraction at the other end of the spectrum.
Since in larger systems you will want to re-use a lot of code via open source libraries, one is inevitably stuck not just in one past, but in several versions of older C++, depending on when the code to be re-used was written, which C++ standard was stable enough at the time, and which parts of it the author adopted.
Not to speak of the paradigm choice to be made (object-oriented versus functional versus generic programming with templates).
It's easier to have, like Rust offers, a single way of doing things properly. (But what I miss in Rust is a single streamlined standard library - an organized class library - like Java has had from its early days; instead it feels like "a pile of crates".)
edit: what's with people downvoting a straight fact?
The public existence of Rust is 13 years, during which computing has not changed that much to be honest. Now compare this to the prehistory that is 1985, when CFront came out, already made for backwards compatibility with C.
The memory model, interrupt model, packetized networking, digital storage, all function more or less identically.
In embedded, I still see Z80s and M68ks like nothing's changed.
I'd love to see more concrete implementations of adiabatic circuits, weird architectures like the mill, integrated FPGAs, etc. HP's The Machine effort was a rare exciting new thing until they walked back all the exciting parts. CXL seems like about the most interesting new thing in a bit.
Meaning that all the machines I've ever cared about have had 8 bit bytes. The TI-99/4A, TRS-80, Commodore 64 and 128, Tandy 1000 8088, Apple ][, Macintosh Classic, etc.
Many were launched in the late 70s. By 1985 we were well into the era of PC compatibles.
My https://en.wikipedia.org/wiki/Tandy_1000 came out in 1984. And it was a relatively late entry to the market, it was near peak 8088 with what was considered high end graphics and sound for the day, far better than the IBM PC which debuted in 1981 and only lasted until 1987.
So that would be Rust 1.0, released in 2015, not 2006, putting it down to a decade.
And the point still stands when looking at any ecosystem that has been in use long enough with strong backwards compatibility - not only the language, the whole ecosystem: eventually editions alone won't make it, and just like those languages, Rust will gain its own warts.
> eventually editions alone won't make it, and just like those languages, Rust will gain its own warts.
That's possible. Though C++ hasn't had editions, or the HIR / MIR separation, the increased strictness, wonderful tooling, or the benefit of learning from the mistakes made with C++. Noting that, it seems reasonable to expect Rust to collect less cruft and paint itself into fewer corners over a similar period of time. Since C++ has been going for 36 years, it seems Rust will outlive me. Past that, I'm not sure I care.
IDEs are wonderful tooling, maybe people should get their heads outside UNIX CLIs and MS-DOS like TUIs.
Then there is the whole ecosystem of libraries, books, SDKs and industry standards.
Who are you speaking to who hasn't explored all those things in depth?
I see Rust's restrictions as a huge advantage over C++ here. Even with respect to editions. Rust has always given me the impression of a language designed from the start to be approximately what C++ is today, without the cruft, in which safety is opt-out, not opt-in. And the restrictions seem more likely to preserve that than not.
C/C++ folks seem to see Rust's restrictions as anti-features without realizing that C/C++'s lack of restriction resulted in the situation they have today.
I only maintain a few projects in each language, so I haven't run into every sort of issue for either, but that's very much how it feels to me still, several years and several projects in.
I agree that Rust is designed to be like C++ is today, without the cruft, except that all languages, if they survive in the market long enough to get beyond the adoption curve, will eventually accumulate their own cruft.
Not realizing this only means that 30 years from now, if current languages haven't been fully replaced by AI-based tools, there will be some language designed to be like Rust is in 30 years, but without the cruft.
The strength of C++ code today is in the ecosystem; that is why we reach for it: having to write CUDA, DirectX, maybe dive into the innards of Java, CLR, V8, GCC, LLVM, doing HPC with OpenACC, OpenMP, MPI, Metal, Unreal, Godot, Unity.
Likewise, I don't reach for C for fun (the less of it the merrier), but rather for POSIX, OpenGL, Vulkan, ...
Well I'm not them. I'm just a regular old software developer.
> The strength of C++ code today is in the ecosystem
Ecosystem is why I jumped ship from C++ to Rust. The difference in difficulty integrating a random library into my project is night and day. What might take a week or a month in C++ (integrating disparate build systems, establishing types and lifetimes of library objects and function calls, etc) takes me 20 minutes in Rust. And in general I find the libraries to be much smaller, more modular, and easier to consume piecemeal rather than a whole Boost or Qt at a time.
And while the Rust libraries are younger, I find them to be more stable, and often more featureful and with better code coverage. The language seems to lend itself to completionism.
People choose C++ because it's a flexible language that lets you do whatever you want. Meanwhile Rust is a constrained and opinionated thing that only works if you do things "the right way".
You went on a bit too long. C++ lets you do whatever. Whether you wanted that is not its concern. That's handily illustrated in Matt Godbolt's talk - you provided a floating point value but that's inappropriate? Whatever. Negative values for unsigned? Whatever.
This has terrible ergonomics and the consequences were entirely predictable.
However it seems like C++ wants to only provide this kind of pattern via monadic operations.
You can imitate the beginner experience of the ? operator as magically handling trivial error cases by "just knowing" what should happen, but it's not the same thing as the current Try feature.
Barry Revzin has a proposal for some future C++ (let's say C++29) to introduce statement expressions; the syntax is very ugly even by C++ standards, but it would semantically solve the problem you had.
This isn't really true since Rust has panics. It would be nice to have out-of-the-box support for a "no panics" subset of Rust, which would also make it easier to properly support linear (no auto-drop) types.
You can configure your lints in your workspace-level Cargo.toml (the folder of crates):

  [workspace.lints.clippy]
  pedantic = { level = "warn", priority = -1 }
  # arithmetic_side_effects = "deny"
  unwrap_used = "deny"
  expect_used = "deny"
  panic = "deny"

then in your crate Cargo.toml:

  [lints]
  workspace = true
Then you can’t even compile the code without proper error handling. Combine that with thiserror or anyhow with the backtrace feature and you can yeet errors with “?” operators or match on em, map_err, map_or_else, ignore them, etc
Not saying there aren't applications where using these lints is alright (web servers, maybe), but at least in my experience (mostly doing CLI, graphics, and embedded stuff), trying to keep the program alive leads to more problems than it solves.
It's totally normal practice for a library to have this as a standard.
Like

  this
> Text after a blank line that is indented by two or more spaces is reproduced verbatim. (This is intended for code.)
Even then, though, I do see a need to catch panics in some situations: if I'm writing some sort of API or web service, and there's some inconsistency in a particular request (even if it's because of a bug I've written), I probably really would prefer only that request to abort, not for the entire process to be torn down, terminating any other in-flight requests that might be just fine.
But otherwise, you really should just not be catching panics at all.
This is a sign you are writing an operating system instead of using one. Your web server should be handling requests from a pool of processes - so that you get real memory isolation and can crash when there is a problem.
If there was a special case that would not work, then the design dictates that requests are not independent and there must be risk of interference (they are in the same process!)
What I definitely do not want is a bug ridden “crashable async sub task” system built in my web program.
You're vastly overestimating the overhead of processes and the number of simultaneous web connections.
> only to gain a minor amount of safety
What you’re telling me is performance (memory?) is such a high priority you’re willing to make correctness and security tradeoffs.
And I'm saying that's OK; one of those tradeoffs is that crashing might bring down more than one request.
> one that is virtually irrelevant in a memory safe language
Your memory safe language uses C libraries in its process.
Memory safe languages have bugs all the time. The attack surface is every line of your program and runtime.
Memory is only one kind of resource and privilege. Process isolation is key for managing resource access - for example file descriptors.
Chrome is a case study of these principles. Everybody thought isolating JS and HTML pages should be easy - nobody could get it right, and Chrome instead wrapped each page in a process.
It's less about the actual overhead of the process and more about the savings you get from sharing. You can reuse database connections, have in-memory caches, in-memory rate limits, and various other things. You can use shared memory (which is very difficult to manage) or an additional common process, but either way you are effectively back to square one with regard to shared state that can be corrupted.
I just said one of the costs of those savings is that crashing may bring down multiple requests - and you should design with that tradeoff in mind.
Handling thousands of concurrent requests is table stakes for a simple web server. Handling thousands of concurrent processes is beyond most OSs. The context switching overhead alone would consume much of the CPU of the system. Even hundreds of processes will mean a good fraction of the CPU being spent solely on context switching - which is a terrible place to be.
It works fine on Linux - the operating system for the internet. Have you tried it?
> good fraction of the CPU being spent solely on context switching
I was waiting for this one. Threads and processes do the same amount of context switching. The overhead of a process switch is a little higher. The main cost is memory.
Yes, and therefore real webservers use a limited number of threads/processes (in the same ballpark as the number of CPU cores). The modern approach is to use green threads, which are really cheap to switch: it is basically store registers, load registers, and jmp.
> The main cost is memory.
The main cost is scheduling, not switching per se. Preemptive multitasking needs to deal with priorities to avoid wasting time, and the algorithms that do this are mostly O(N). All these O(N) calculations need to be completed multiple times per second; the higher the frequency of switching, the more work there is to do. When you have thousands of processes it is the main cost. If you have tens of thousands it starts to bite hard.
Unfortunately even the Rust core language doesn't treat them this way.
I think it's arguably the single biggest design mistake in the Rust language. It prevents a ton of useful stuff like temporarily moving out of mutable references.
They've done a shockingly good job with the language overall, but this is definitely a wart.
You could have a panic though, if you make wrong assumptions.
Panic is absolutely fine for bugs, and it's indeed what should happen when code is buggy. That's because buggy code can make absolutely no guarantees on whether it is okay to continue (arbitrary data structures may be corrupted for instance)
Indeed it's hard to "treat an error" when the error means code is buggy. Because you can rarely do anything meaningful about that.
This is of course a problem for code that can't be interrupted.. which include the Linux kernel (they note the bug, but continue anyway) and embedded systems.
Note that if panic=unwind you have the opportunity to catch the panic. This is usually done by systems that process multiple unrelated requests in the same program: in this case it's okay if only one such request will be aborted (in HTTP, it would return a 5xx error), provided you manually verify that no data structure shared by requests would possibly get corrupted. If you do one thread per request, Rust does this automatically; if you have a smaller threadpool with an async runtime, then the runtime need to catch panics for this to work.
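A minimal sketch of that request-isolation pattern with std::panic::catch_unwind (the handler and request shape are made up):

  use std::panic;

  // hypothetical per-request handler; request 2 contains a bug
  fn handle_request(id: u32) -> String {
      if id == 2 {
          panic!("bug while handling request {id}");
      }
      format!("request {id}: 200 OK")
  }

  fn main() {
      for id in 1..=3 {
          match panic::catch_unwind(|| handle_request(id)) {
              Ok(resp) => println!("{resp}"),
              // only this request reports a 5xx; the others keep going
              Err(_) => println!("request {id}: 500 Internal Server Error"),
          }
      }
  }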
And now your language has exceptions - which break control flow, make reasoning about a program very difficult, and are hard for a compiler to optimize.
Panics are a runtime memory safe way to encode an invariant, but I will generally prefer a compile time invariant if possible and not too cumbersome.
However, yes I will panic if I'm not already using unsafe and I can clearly prove the invariant I'm working with.
I will fight against program aborts as hard as I possibly can. I don't mind boilerplate to be the price paid and will provide detailed error messages even in such obscure error branches.
Again, speaking only for myself. My philosophy is: the program is no good for me dead.
That may be true, but the program may actually be bad for you if it does something unexpected due to an unforeseen state.
  # Errors

  `foo` returns an error called `UnspecifiedError`, but this only
  happens when an anticipated bug in the implementation occurs. Since
  there are no known such bugs, this API never returns an error. If
  an error is ever returned, then that is proof that there is a bug
  in the implementation. This error should be rendered differently
  to end users to make it clear they've hit a bug and not just a
  normal error condition.
Imagine if I designed `regex`'s API like this. What a shit show that would be. If you want a less flippant takedown of this idea and a more complete description of my position, please see: https://burntsushi.net/unwrap/
> Honestly, I don't think libraries should ever panic. Just return an UnspecifiedError with some sort of string.
The latter is not a solution to the former. The latter is a solution to libraries having panicking branches. But panics or other logically incorrect behavior can still occur as a result of bugs.
Return an AllocationError. Rust unfortunately picked the wrong default here for the sake of convenience, along with the default of assuming a global allocator. It's now trying to add in explicit allocators and allocation failure handling (A:Allocator type param) at the cost of splitting the ecosystem (all third-party code, including parts of libstd itself like std::io::Read::read_to_end, only work with A=GlobalAlloc).
Zig for example does it right by having explicit allocators from the start, plus good support for having the allocator outside the type (ArrayList vs ArrayListUnmanaged) so that multiple values within a composite type can all use the same allocator.
> Also many functions use addition, and what is one supposed to do in case of overflow?
Return an error ( https://doc.rust-lang.org/stable/std/primitive.i64.html#meth... ) or a signal that overflow occurred ( https://doc.rust-lang.org/stable/std/primitive.i64.html#meth... ). Or use wrapping addition ( https://doc.rust-lang.org/stable/std/primitive.i64.html#meth... ) if that was intended.
Note that for the checked case, it is possible to have a newtype wrapper that impls std::ops::Add etc, so that you can continue using the compact `+` etc instead of the cumbersome `.checked_add(...)` etc. For the wrapping case libstd already has such a newtype: std::num::Wrapping.
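A hedged sketch of such a newtype (the name `Checked` and the panic-on-overflow policy are my choices; you could instead make the Output an Option at the cost of chaining):

  use std::ops::Add;

  #[derive(Clone, Copy, Debug, PartialEq)]
  struct Checked(i64);

  impl Add for Checked {
      type Output = Checked;
      fn add(self, rhs: Checked) -> Checked {
          // checked_add returns Option; fail loudly instead of wrapping silently
          Checked(self.0.checked_add(rhs.0).expect("integer overflow"))
      }
  }

  fn main() {
      assert_eq!(Checked(2) + Checked(3), Checked(5));
      // Checked(i64::MAX) + Checked(1) would panic: "integer overflow"
  }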
Also, there is a clippy lint for disallowing `+` etc ( https://rust-lang.github.io/rust-clippy/master/index.html#ar... ), though I assume only the most masochistic people enable it. I actually tried to enable it once for some parsing code where I wanted to enforce checked arithmetic, but it pointlessly triggered on my Checked wrapper (as described in the previous paragraph) so I ended up disabling it.
Rust picked the right default for applications that run in an OS whereas Zig picked the right default for embedded. Both are good for their respective domains, neither is good at both domains. Zig's choice is verbose and useless on a typical desktop OS, especially with overcommit, whereas Rust's choice is problematic for embedded where things just work differently.
My current $dayjob involves a "server" application that needs to run in a strict memory limit. We had to write our own allocator and collections because the default ones' insistence on using GlobalAlloc infallibly doesn't work for us.
Thinking that only "embedded" cares about custom allocators is just naive.
I said absolutely no such thing? In my $dayjob working on graphics I, too, have used custom allocators for various things, primarily in C++ though, not Rust. But that in no way makes the default of a global allocator wrong, and often those custom allocators have specialized constraints that you can exploit with custom containers, too, so it's not like you'd be reaching for the stdlib versions probably anyway.
As a video game developer, I've found the case for custom general-purpose allocators pretty weak in practice. It's exceedingly rare that you really want complicated nonlinear data structures, such as hash maps, to use a bump-allocator. One rehash and your fixed size arena blows up completely.
95% of use cases are covered by reusing flat data structures (`Vec`, `BinaryHeap`, etc.) between frames.
Requests are matched against the smallest tier that can satisfy them (static tiers before dynamic). If no tier can satisfy it (static tiers are too small or empty, dynamic tier's "remaining" count is too low), then that's an allocation failure and handled by the caller accordingly. Eg if the request was for the initial buffer for accepting a client connection, the client is disconnected.
When a buffer is returned to the allocator it's matched up to the tier it came from - if it came from a static tier it's placed back in that tier's list, if it came from the dynamic tier it's free()d and the tier's used counter is decremented.
Buffers have a simple API similar to the bytes crate - "owned buffers" allow &mut access, "shared buffers" provide only & access and cloning them just increments a refcount, owned buffers can be split into smaller owned buffers or frozen into shared buffers, etc.
The allocator also has an API to query its usage as an aggregate percentage, which can be used to do things like proactively perform backpressure on new connections (reject them and let them retry later or connect to a different server) when the pool is above a threshold while continuing to service existing connections without a threshold.
The allocator can also be configured to allocate using `mmap(tempfile)` instead of malloc, because some parts of the server store small, infrequently-used data, so they can take the hit of storing their data "on disk", ie paged out of RAM, to leave RAM available for everything else. (We can't rely on the presence of a swapfile so there's no guarantee that regular memory will be able to be paged out.)
As for crates.io, there is no option. We need local allocators because different parts of the server use different instances of the above allocator with different tier configs. Stable Rust only supports replacing GlobalAlloc; everything to do with local allocators is unstable, and we don't intend to switch to nightly just for this. Also FWIW our allocator has both a sync and async API for allocation (some of the allocator instances are expected to run at capacity most of the time, so async allocation with a timeout provides some slack and backpressure as opposed to rejecting requests synchronously and causing churn), so it won't completely line up with std::alloc::Allocator even if/when that does get stabilized. (But the async allocation is used in a localized part of the server so we might consider having both an Allocator impl and the async direct API.)
And so because we need local allocators, we had to write our own replacements of Vec, Queue, Box, Arc, etc because the API for using custom A with them is also unstable.
Odin has them, too, optionally (and usually).
I partially disagree with this. Using Zig-style allocators doesn't really fit with Rust ergonomics, as it would require pretty extensive lifetime annotations. With no_std, you absolutely can roll your own allocation styles, at the price of more manual lifetime annotations.
I do hope though that some library comes along that allows for Zig-style collections, with the associated lifetimes... (It's been a bit painful rolling my own local allocator for audio processing.)
As long as the type is generic on the allocator, the lifetimes of the allocator don't appear in the type. So eg if your allocator is using a stack array in main then your allocator happens to be backed by `&'a [MaybeUninit<u8>]`, but things like Vec<T, A> instantiated with A = YourAllocator<'a> don't need to be concerned with 'a themselves.
Eg: https://play.rust-lang.org/?version=nightly&mode=debug&editi... do_something_with doesn't need to have any lifetimes from the allocator.
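To spell that out with a self-contained sketch (a hypothetical Alloc trait standing in for std's unstable Allocator): the borrow lives inside the concrete allocator type, so code that is generic over A never names a lifetime:

  // Hypothetical trait; std's Allocator is unstable, so this is a stand-in.
  trait Alloc {
      fn take(&mut self, n: usize) -> Option<&mut [u8]>;
  }

  // A stack-backed allocator: the 'a borrow lives here...
  struct StackAlloc<'a> {
      buf: &'a mut [u8],
  }

  impl<'a> Alloc for StackAlloc<'a> {
      fn take(&mut self, n: usize) -> Option<&mut [u8]> {
          if n > self.buf.len() {
              return None;
          }
          let buf = std::mem::take(&mut self.buf);
          let (head, tail) = buf.split_at_mut(n);
          self.buf = tail;
          Some(head)
      }
  }

  // ...but this function is generic over A and mentions no lifetimes at all.
  fn grab_three<A: Alloc>(alloc: &mut A) -> bool {
      alloc.take(3).is_some()
  }

  fn main() {
      let mut storage = [0u8; 8];
      let mut alloc = StackAlloc { buf: &mut storage };
      assert!(grab_three(&mut alloc));
      assert!(grab_three(&mut alloc));
      assert!(!grab_three(&mut alloc)); // only 2 bytes left
  }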
If by Zig-style allocators you specifically mean type-erased allocators, as a way to not have to parameterize everything on A:Allocator, then yes the equivalent in Rust would be a &'a dyn Allocator that has an infectious 'a lifetime parameter instead. Given the choice between an infectious type parameter and infectious lifetime parameter I'd take the former.
I guess all that to say, I agree then that this should've been in std from day one.
Going from panicking to panic-free in Rust is as simple as choosing 'function' vs 'try_function'. The actual mistakes in Rust were the ones where the non-try version should have produced a panic by default. Adding Box::try_new next to Box::new is easy.
There are only two major applications of panic free code in Rust: critical sections inside mutexes and unsafe code (because panic safety is harder to write than panic free code). In almost every other case it is far more fruitful to use fuzzing and model checking to explicitly look for panics.
You abandon the current activity and bubble up the error to a stage where that effort can be tossed out or retried sometime later. i.e. Use the same error handling approach you would have to use for any other unreliable operation like networking.
Well on Linux they are apparently supposed to return memory anyway and at some point in the future possibly SEGV your process when you happen to dereference some unrelated pointer.
They require overcommit just to open an empty window.
Assuming that you are not using much recursion, you can eliminate most of the heap-related memory panics by adding limited reservation checks for dynamic data that is allocated based on user input/external data. You should also use statically sized types whenever possible. They are also faster.
Honestly this is where you'd throw an exception. It's a shame Rust refuses to have them, they are absolutely perfect for things like this...
The only place where it would be different is if you explicitly set panics to abort instead of unwind, but that's not default behavior.
But for arithmetic, Rust has non-aborting, checked APIs, if my memory serves.
And that's what I'm trying hard to do in my Rust code f.ex. don't frivolously use `unwrap` or `expect`, ever. And just generally try hard to never use an API that can crash. You can write a few error branches that might never get triggered. It's not the end of the world.
Rust also provides Wrapping and Saturating wrapper types for these integers, which wrap (255 + 1 == 0) or saturate (255 + 1 == 255). Depending on your CPU either or both of these might just be "how the computer works anyway" and will accordingly be very fast. Neither of them is how humans normally think about arithmetic.
Furthermore, Rust also provides operations which do all of the above, as well as the more fundamental "with carry" type operations where you get two results from the operation and must write your algorithms accordingly, and explicitly fallible operations where if you would overflow your operation reports that it did not succeed.
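A quick tour of those operations as they exist on the integer types today (all stable):

  use std::num::Wrapping;

  fn main() {
      let x: u8 = 255;
      assert_eq!(x.wrapping_add(1), 0);              // wrap around
      assert_eq!(x.saturating_add(1), 255);          // clamp at the max
      assert_eq!(x.overflowing_add(1), (0, true));   // value plus "did it overflow"
      assert_eq!(x.checked_add(1), None);            // explicitly fallible
      assert_eq!(Wrapping(x) + Wrapping(1), Wrapping(0)); // wrapper type keeps `+`
  }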
Of course, just like with opening files or integer arithmetic, if you don't pay any attention to handling the errors up front when writing your code, it can be an onerous if not impossible to task to refactor things after the fact.
I was approaching these problems strictly from the point of view of what can Rust do today really, nothing else. To me having checked and non-panicking API for integer overflows / underflows at least gives you some agency.
If you don't have memory, well, usually you are cooked. Though one area where Rust could become even better is giving us some API to reserve more memory upfront, maybe? Or I don't know, maybe adopt some of the memory-arena crates into stdlib.
But yeah, agreed. Not the types of problems I want to have anymore (because I did have them in the past).
Also there is the no_panic crate, which uses macros to require the compiler to prove that a given function cannot panic.
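Usage is a single attribute (sketch, assuming the no_panic crate as a dependency); if a panicking path survives optimization, the build fails at link time:

  use no_panic::no_panic;

  #[no_panic]
  fn middle(xs: &[u8]) -> Option<u8> {
      // .get() instead of xs[i] keeps the panicking path out entirely
      xs.get(xs.len() / 2).copied()
  }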
You see this whenever you use cargo test. If a single test panics, it doesn’t abort the whole program. The panic is “caught”. It still runs all the other tests and reports the failure.
Although as a library vendor, you kind of have to assume your library could be compiled into an app configured with panic=abort, in which case it will not do that.
But on those places, you better know exactly what you are doing.
not sure what the latest is in the space, if I recall there are some subtleties
It would be annoying to use - as you say, you couldn’t even add regular numbers together or index into an array in nopanic code. But there are ways to work around it (like the wrapping types).
One problem is that implicit nopanic would add a new way to break semver compatibility in APIs. Eg, imagine a public API that just happens to not be able to panic. If the code is changed subtly, it could easily start panicking again. That could break callers, so it has to be a major version bump. You'd probably have to require explicit nopanic at API boundaries. (Else assume all public functions from other crates can panic.) And because of that, public APIs like std would need to be plastered with nopanic markers everywhere. It's also not clear how that works through trait impls.
As far as I can tell, no_std doesn't change anything with regard to either the usability of panicking operators like integer division, slice indexing, etc. (they're still usable) nor on whether they panic on invalid input (they still do).
So, while this is an improvement over C++ (and that is not saying much at all), it's still implemented in a pretty clumsy way.
Doing error handling properly is hard, but it's a lot harder when error types lose information (integer/bool returns) or you can't really tell what errors you might get (exceptions, except for checked exceptions which have their own issues).
Sometimes error handling comes down to "tell the user", where all that info is not ideal. It's too verbose, and that's when you need anyhow.
In other cases where you need details, anyhow is terrible. Instead you want something like thiserror, or just roll your own error type. Then you keep a lot more information, which might allow for better handling. (HttpError or IoError - try a different server? ParseError - maybe a different parse format? etc.)
So I'm not sure it's that Result is clumsy, so much that there are a lot of ways to handle errors. So you have to pick a library to match your use case. That seems acceptable to me?
FWIW, errors not propagating via `?` is entirely a problem of the error type being propagated to. And `?` in closures does work, occasionally with some type annotation required.
As you say, it’s not “batteries included”. I think that’s a fine answer given rust is a systems language. But in application code I want batteries to be included. I don’t want to need to opt in to the right 3rd party library.
I think rust could learn a thing or two from Swift here. Swift’s equivalent is better thought through. Result is more part of the language, and less just bolted on:
https://docs.swift.org/swift-book/documentation/the-swift-pr...
If you use `anyhow`, then all you know is that the function may `Err`, but you do not know how - this is no better than calling a function that may `throw` any kind of `Throwable`. Not saying it's bad, it is just not that much different from the error handling in Kotlin or C#.
Better than C, sufficient in most cases if you're writing an app, to be avoided if you're writing a lib. There are alternatives such as `snafu` or `thiserror` that are better if you need to actually catch the error.
Initial proof of concepts just get panics (usually with a message).
Then functions start to be fallible, by adding anyhow & considering all errors to still be fatal, but at least nicely report backtraces (or other things! context doesn't have to just be a message)
Then if a project is around long enough, swap anyhow to thiserror to express what failure modes a function has.
Whereas going with "I probably want to retry a few times" is guessing that most of your problems are the common case, when you're not entirely sure the platform you're on will emit the non-common cases with sane semantics.
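For illustration, a hedged sketch of the final step in that progression, using the thiserror crate (the error shape here is made up):

  use thiserror::Error;

  #[derive(Debug, Error)]
  enum FetchError {
      // the #[from] impl lets `?` convert io::Error automatically
      #[error("network error: {0}")]
      Network(#[from] std::io::Error),
      #[error("unexpected status: {0}")]
      Status(u16),
  }

  fn fetch() -> Result<String, FetchError> {
      let body = std::fs::read_to_string("cache.txt")?; // io::Error -> FetchError
      if body.is_empty() {
          return Err(FetchError::Status(204));
      }
      Ok(body)
  }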
Back at Google, it was truly an error handling nirvana because they had StatusOr which makes sure that the error type is just Status, a standardized company-wide type that still allows significant custom errors that map to standardized error categories.
In any case, I will take Rust's Result over the C++ mess any time, especially given that we effectively have two C++s, one with exception support and one without, making code incompatible between the two.
I use completely custom error handling stacks in C++ and they are quite slick these days, thanks to improvements in the language.
With Rust's Result and powerful macros, it is easier to implement.
I like Rust, but it's not as clean in practice as you describe:
  type Result<T> = result::Result<T, MyError>;

  #[derive(Debug)]
  enum MyError {
      IOError(String),
      // ...
  }

Your owned (i.e. not third-party) Error type is a sum type of error types that might be thrown by other libraries, with a newtype wrapper (`IOError`) on top. Then implement the `From` trait to map errors from third-party libraries to your own custom Error space:

  impl From<io::Error> for MyError {
      fn from(e: io::Error) -> MyError {
          MyError::IOError(e.to_string())
      }
  }

Now you can convert any result into a single type that you control by transforming the errors:

  return sender
      .write_all(msg.as_bytes())
      .map_err(|e| e.into());

There is a little boilerplate and mapping between error spaces that is required, but I don't find it that onerous. I would rather have what OCaml has: https://ocaml.org/docs/error-handling.
That said, I'd prefer to be working in Rust. The C++ code we call into can just raise exceptions anywhere implicitly; there are a hell of a lot of things you can accidentally do wrong without warning; class/method syntax is excessively verbose, etc.
It's obviously subjective in many ways. However, what I dislike the most is that try/except hides the error path from me when I'm reading code. Decades of trying to figure out why that stacktrace is happening in production suddenly has given me a strong dislike for that path being hidden from me when I'm writing my code.
It could be some kind of an exception check thing, where you would either have to make sure that you handle the error locally somehow, or propagate it upwards. Sadly programming is not ready for such ideas yet.
---
I jest, but this is exactly what checked exceptions are for. And the irony of stuff like Rust's use of `Result<T, E>` and similarly ML-ey stuff is that in practice they end up with what are essentially just checked exceptions, except with the error type information being somewhere else.
Of course, people might argue that checked exceptions suck because they've seen the way Java has handled them, but like... that's Java. And I'm sorry, but Java isn't the definition of how checked exceptions can work. But because of Java having "tainted" the idea, it's not explored any further, because we instead just assume that it's bad by construction and then end up doing the same thing anyway, only slightly different.
Nim has a good take on exception tracking that's elegant, performant, and only on request (unlike Java's attempt).
The key phrase you're looking for is "algebraic effect systems". Right now they're a pretty esoteric thing only really seen in PL research, but at one point so was most of the stuff we now take for granted in Rust. Maybe someday they'll make their way to mainstream languages in an ergonomic way.
it can look just like a more-efficient `except` clause with all the safety, clarity, and convenience that enums provide.
Here's an example:
* Implementing an error type with enums: https://git.deuxfleurs.fr/Deuxfleurs/garage/src/branch/main/...
* Which derives from a more general error type with even more helpful enums: https://git.deuxfleurs.fr/Deuxfleurs/garage/src/branch/main/...
* Then some straightforward handling of the error: https://git.deuxfleurs.fr/Deuxfleurs/garage/src/branch/main/...
Unless you specifically want the ‘?’ operator, you can get pretty close to this with some clever use of templates and operator overloading.
If universal function call syntax becomes standardized, this will look even more functional and elegant.
And those 'special' stdlib types wouldn't be half as useful without supporting language syntax, so why not go the full way and just implement everything in the language?
You might add syntactic sugar on top, but you don't want these kinds of things in your fundamental language definition.
I could of course create my own type for this, but then it won’t work with the ? operator.
match x(y) {
Ok(None) => "not found".into(),
Ok(Some(x)) => x,
Err(e) => handle_error(e),
}
Because of pattern matching, I often also have one arm for specific errors, to handle them specifically in the same way as the Ok branches above.

This is what the Try[^1] trait is aiming to solve, but it's not stabilized yet.
[^1]: https://rust-lang.github.io/rfcs/3058-try-trait-v2.html
I could imagine situations where an empty return value would constitute an Error, but in 99% of cases returning None would be better.
Result<Option> may feel clunky, but if I can give one recommendation when it comes to Rust, it's that you should not value your own code-aesthetic feelings too much, as that will lead to a lot of pain in many cases — work with the grain of the language, not against it, even if the result does not satisfy you. In this case I'd highly recommend just using Result<Option> and not worrying about it.
Being able to compose/nest those base types and unwrap or match them in different sections of your code is a strength, not a weakness.
You just need a function that lifts Option into Result.
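Rust's standard library already ships that function: Option::ok_or (and its lazy sibling ok_or_else). A minimal sketch, with a hypothetical ConfigError type:

use std::collections::HashMap;

#[derive(Debug)]
enum ConfigError { Missing(&'static str) } // hypothetical error type

fn port(cfg: &HashMap<String, String>) -> Result<&String, ConfigError> {
    // ok_or lifts Option<&String> into Result<&String, ConfigError>,
    // after which `?` composes as usual
    cfg.get("port").ok_or(ConfigError::Missing("port"))
}

fn main() {
    let cfg = HashMap::from([("port".to_string(), "8080".to_string())]);
    assert_eq!(port(&cfg).unwrap(), "8080");
}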
after all, if a library exposes too many functions to you, it isn't a good library.
what good is it for me to have a result type if i have to call 27 functions with 27 different result types just to rotate a cube?
But I hear compiling is too slow.
Is it a serious problem in practice?
Besides developer productivity it can be an issue when you need a critical fix to go out quickly and your pipelines take 60+ minutes.
With many other build systems I'd be hesitant to do that, but since Cargo is very good about what to rebuild for incremental builds, keeping the cache around is a huge speed boost.
With some exceptions for core data structures, it seems that if you only modified a few files in a large project the total compilation time would be quick no matter how slow the compiler was.
Rust compile times have been improving over time as the compiler gets incrementally rewritten and optimised.
The ability to quickly test and get feedback is manna from the gods in software development. Organizations should keep it right below customer satisfaction and growth as a driving metric.
Like with any bigger C++ project there's like 3 build tools, two different packaging systems and likely one or even multiple code generators.
Cargo also has good caching out of the box. While Cargo is not the best build system, it's an easy-to-use, good system, so you generally get good compile times for development when you edit just one file. This is also made heavy use of by Docker workflows like cargo-chef.
Is it a serious problem? I'd say 'no', but YMMV.
Granted, there aren't any Rust projects that large yet, but I feel like compilation speeds are something that can be worked around with tooling (distributed build farms, etc.). C++'s lack of safety and a proclivity for "use after free" errors is harder to fix.
In my experience, a lot of the code is dedicated to "correctly transforming between different Result / Error types".
Much more verbose than exceptions, despite most of the time pretending they're just exceptions (i.e. the `?` operator).
Why not just implement exceptions instead?
(TBH I fully expect this comment to be downvoted, then Rust to implement exceptions in 10 years... Something similar happened when I suggested generics in Go.)
Being blind to the alternative, and mostly authoring lower level libraries, what's the benefit of not having exceptions? I understand how they're completely inappropriate for an OS, a realtime system, etc, but what about the rest? Or is that the problem: once you have the concept, you've polluted everything?
I really wish java used `?` as a shorthand to declare and propagate checked exceptions of called function.
https://github.com/abseil/abseil-cpp/blob/master/absl/status...
Why can't I return an integer on error? What's preventing me from writing Rust like C++?
For instance, a common example of the "integer on error" pattern in other languages is `array.index_of(element)`, returning a non-negative index if found or a negative value if not found. In Rust, the return type of `Iterator::position` is instead `Option<usize>`. You can't accidentally forget to check whether it's present. You could still write your own `index_of(&self, element: &T) -> isize /* negative if not found */` if that's your preference.
https://doc.rust-lang.org/std/iter/trait.Iterator.html#metho...
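For the curious, a minimal sketch of that opt-in sentinel style (index_of here is a hypothetical helper, not a std API):

// Hypothetical C-style lookup: negative sentinel means "not found"
fn index_of<T: PartialEq>(slice: &[T], element: &T) -> isize {
    match slice.iter().position(|x| x == element) {
        Some(i) => i as isize, // non-negative index if found
        None => -1,            // the sentinel you now have to remember to check
    }
}

fn main() {
    assert_eq!(index_of(&[10, 20, 30], &20), 1);
    assert_eq!(index_of(&[10, 20, 30], &99), -1);
}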
Two things are incredibly frustrating when it comes to safety in software engineering:
1. The arrogance that "practitioners" have against "theorists" (everyone with a PhD in programming languages)
2. The slowness of the adoption of well-tested and thoroughly researched language concepts (think of Haskell type classes, aka, Rust traits)
I like that Rust can pick good concepts and design coherent language from them without inventing its own "pragmatic" solution that breaks horribly in some use cases that some "practitioners" deem "too theoretical."
Unless you are doing embedded programming ...
You target the compiler your client uses for their platform. There is very little choice there.
I think Rust picked a pretty nifty middle ground. On one side, it's not mindfuckingly unsafe like C; it chose to remove a class of problems like memory unsafety. On the other side, Rust didn't go for the highest of theoretical grounds: it doesn't guarantee much beyond that, and it also relies a bit on human help (unsafe blocks).
It takes ADT, but not function currying, and so on.
I think that's a good thing. A curried function is a function that takes one argument, and returns a closure that captures the argument. That closure might itself take another argument, and return a new closure that captures both the original argument and the second one. And so on ad infinitum. How ownership and borrowing works across that chain of closures could easily become a touchy issue, so you probably want to be making it as explicit as possible.
Or perhaps better yet, find an easier way to accomplish the same task. Maybe use a struct to explicitly carry the arguments along until you're ready to call the function.
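For concreteness, a minimal sketch of that one-argument-at-a-time currying done manually in Rust:

fn add(x: i32) -> impl Fn(i32) -> i32 {
    // the returned closure takes ownership of x; each currying step
    // would add another layer of captured state like this
    move |y| x + y
}

fn main() {
    let add_five = add(5);
    assert_eq!(add_five(3), 8);
}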
I'd love to have a syntax like
{ foo(%1, bar) }
standing for |x| { foo(x, bar) }
though. I'm not aware of any language that has this!

Boost.Lambda[1] comes close:

> for_each(a.begin(), a.end(), std::cout << _1 << ' ');
Also, Scala has something similar as a first class feature[2].
[1]: https://www.boost.org/doc/libs/1_88_0/doc/html/lambda.html
[2]: https://scala-lang.org/files/archive/spec/3.4/06-expressions...
In[1]: Map[(1 + #1)&, {a, b, c}]
Out[1]: {a + 1, b + 1, c + 1}
See https://reference.wolfram.com/language/ref/Function.html.en for the full story.

Rust is also struggling with its “too theoretical” concepts, by the way. The attempts of the community to gaslight the practitioners that the concepts are in fact easy to learn and straightforward are only enjoying mild success, if I may call it that.
I wouldn't trust the opinion of practitioners. Not after they have chosen JavaScript and, God forbid, PHP. Practitioners choose not what is inherently good, but what is popular. It is a very practical choice that brings a lot of benefits, so it is just practitioners being practitioners. It can be good or bad, I don't care now; it doesn't matter for my argument. The issue is that a good thing can be overlooked by practitioners for decades, because there is nothing popular giving them that thing.
I don’t think which features are popular in C++ is a good indication of anything. The language is good only due to the insane amount of investment in its ecosystem, not because of deliberately designed language features.
For an industrial language's inventory of ”nice features to have”, F# and C# are mostly my personal gold standard.
”Too theoretical” is IMO not the correct lens to use. I would propose a better lens: a) which patterns you often use, and b) how to implement them in the language design itself.
A case in point is the gang-of-four book. It mostly gives names to things in C++ that are language features in better languages.
(Named parameters would definitely be great, though. I use little structs of parameters where I think that's useful, and set their members one line at a time.)
I know that this is an extremist view, but: I feel the same way about Rust's borrow checker. I just very rarely have problems with memory errors in C++ code bases with a little thought applied to lifetimes and use of smart pointers. Certainly, lifetime bugs are massively overshadowed by logic and algorithmic bugs. Why would I want to totally reshape the way that I code in order to fix one of the least significant problems I encounter? I actually wish there were a variant of Rust with all its nice clean improvements over C++ except for lifetime annotations and the borrow checker.
Perhaps this is a symptom of the code I tend to write: code that has a lot of tricky mathematical algorithms in it, rather than just "plumbing" data between different sources. But actually I doubt it, so I'm surprised this isn't a more common view.
95% of C++ programmers claim this, but C++ programs continue to be full of bugs, and they're usually exactly this kind of dumb bug.
> will be noticed by literally just running the code once.
Maybe. If what you're doing is "tricky mathematical algorithms", how would you even know if you were making these mistakes and not noticing them?
> the cost of visual noise of wrapper types, already higher just at the writing stage, then continues to be a cost every time you read the code. It's just not worth it for the very minor benefit it brings.
I find wrapper types are not a cost but a benefit for readability. They make it so much easier to see what's going on. Often you can read a function's prototype and immediately know what it does.
> Certainly, lifetime bugs are massively overshadowed by logic and algorithmic bugs.
Everyone claims this, but the best available research shows exactly the opposite, at least when it comes to security bugs (which in most domains - perhaps not yours - are vastly more costly): the most common bugs are still the really dumb ones, null pointer dereferences, array out of bounds, and double frees.
Type checking at compile time is doable with templates, even better with constexpr.
The problem is, of course, that each library has its own set of rules, and they won't interop with each other.
The current system is a runtime system which has one type, and you set what the unit system is in the constructor. However it means adding a meter to a gallon is a runtime error.
The guild of software developers has no real standards, no certification, no proven practices outside <book> and <what $company is doing>, while continuing to depend on the whims of project managers, POs, and so-called technical leaders and others who can’t tell quality code from their own ass.
There’s usually no money in writing high-quality software and almost everything in a software development project conspires against quality. Languages like Rust are a desperate attempt at fixing that with technology.
I guess it works, in a way, but these kind of blog posts just show us how inept most programmers are and why the Rust band-aid was needed in the first place.
It is all hit or miss. Everyone claims they do high-quality, critical software in public, while in private, they claim the opposite, that they are fast and break things, and programming is an art, not math.
And then you have venture capital firms now pushing "vibe coding."
Software development is likely the highest variance engineering space, sometimes and in some companies, not even being engineering, but "vibes."
It is interesting how this is going to progress. Are we going to have a situation like the Quebec Bridge [https://colterreed.com/the-failed-bridge-that-inspired-a-sim...]? The CrowdStrike incident taking down the whole airspace proved that what we have is not enough. Market hacks in "decentralized exchanges", the same. Not sure where we are heading.
I guess we are waiting for some catastrophe that will have some venture capital liable for the vibe coding, and then we will have world wide regulation pushed on us.
Your hierarchy is backwards. Borrowing for algorithmic code is easy; it's for writing libraries that can be used by others where it's hard. Rust lets you - makes you - encode it in the API in a way C++ can't yet express.
> I just very rarely have problems with memory errors in C++ code bases with a little thought applied to lifetimes and use of smart pointers
If these are sparing you C++ bugs but causing you to struggle with the borrow checker, it's because you're writing code that depends on constraints that you can't force other contributors (or future you) to stick to. For example, objects are thread-unsafe by default. You can use expensive locks, or you can pray that nobody uses it wrong, but you can't design it so it can only be used correctly and efficiently.
So this definitely isn't some theoretical problem. I wouldn't even be surprised if you had made this mistake and just hadn't noticed.
The main problem is that too many C++ engineers don't do any of that. They have some sort of learned helplessness when it comes to tooling. Rust for now seems to have core engineers in place that will do this sort of thing on behalf of everyone else. Language design aside, if it can find a way to sustain that kind of solid engineering, it will be hard to argue against.
I assure you that's not the case. Maybe you didn't make that mistake, but if you did I'm sure it sometimes went unnoticed. I've found those issues in my code and in other projects. Sometimes they even temporarily don't matter, because someone did a func(CONST, 0) instead of func(0, CONST) and it turns out CONST is 0 - however the next person gets a crash because they change 0 to 1. A lot of similar issues come from the last line effect https://medium.com/@Code_Analysis/the-last-line-effect-7b1cb... and can last for years without being noticed.
I feel the same. Rust certainly has many nice properties and features, but the borrow checker is a huge turn-off for me.
Unfortunately, many programmers are not competent. And the typical modern company will do anything in its power to outsource to often the lowest bidder, mismanage projects and generally reduce quality to the minimum acceptable to make money. That’s why one needs tools like Rust, Java, TypeScript, etc.
Unfortunately, Rust is still too hard for the average programmer, but at least it will hit them over the hands with a stick when they do something stupid. Another funny thing about Rust is that it’s attracting the functional programming/metaprogramming astronauts in droves, which is at odds with it being the people’s programming language.
I still don’t think it’s a valuable skill. Before it was lack of jobs and projects, which is still a problem. Now it’s the concern that it’s as fun as <activity>, except in a straitjacket.
I'd be curious to know what if any true fixes are coming down the line.
This talk: "To Int or to Uint, This is the Question - Alex Dathskovsky - CppCon 2024" https://www.youtube.com/watch?v=pnaZ0x9Mmm0
Seems to make it clear C++ is just broken. That said, and I wish he'd covered this, he didn't mention if the flags he brings up would warn/fix these issues.
I don't want a C++ where I have to remember 1000 rules and if I get one wrong my code is exploitable. I want a C++ where I just can't break the rules except when I explicitly opt into breaking them.
Speaking of which, according to another C++ talk, something like 60% of Rust crates depend on unsafe Rust. The point isn't to diss Rust. The point is that a safe C++ with opt-in unsafe could be similar to Rust's opt-in unsafe.
It's probably not the source of the stats you had in mind since it's discussing something slightly different, but the Rust Foundation built a tool called Painter [0] for this kind of analysis. According to that [1]:
> As of May 2024, there are about 145,000 crates; of which, approximately 127,000 contain significant code. Of those 127,000 crates, 24,362 make use of the unsafe keyword, which is 19.11% of all crates. And 34.35% make a direct function call into another crate that uses the unsafe keyword. Nearly 20% of all crates have at least one instance of the unsafe keyword, a non-trivial number.
> Most of these Unsafe Rust uses are calls into existing third-party non-Rust language code or libraries, such as C or C++.
To be honest, I would have expected that 60% number to be higher if it were counting unsafe anywhere due to unsafe in the stdlib for vocabulary types and for (presumably) common operations like iterator chains. There's also a whole other argument that the hardware is unsafe so all Rust code will depend on unsafe somewhere or another to run on actual hardware, but that's probably getting a bit into the weeds.
[0]: https://github.com/rustfoundation/painter
[1]: https://rustfoundation.org/media/unsafe-rust-in-the-wild-not...
That's not going into the weeds, by that logic (Nirvana fallacy) no language is safe, you're going to die, so why bother about anything? Just lie down and wait for bugs to eat you.
Cpp2 (Herb Sutter's brainchild): https://hsutter.github.io/cppfront/
Carbon (from Google): https://github.com/carbon-language/carbon-lang
In principle those could enable a safe subset by default, which would (except when explicitly opted-out) provide similar safety guarantees to Rust, at least at the language level. It's still up to the community to design safe APIs around those features, even if the languages exist. Rust has a massive advantage here that the community built the ecosystem with safety in mind from day 1, so it's not just the language that's safe, but the APIs of various libraries are often designed in an abuse-resistant way. C++ is too much of a zoo to ever do that in a coherent way. And even if you wanted to, the "safe" variants are still in their infancy, so the foundations aren't there yet to build upon.
I don't know what chance Cpp2 or Carbon have, but I think you need something as radical as one of these options to ever stand a chance of meaningfully making C++ safer. Whether they'll take off (and before Rust eats the world) is anyone's guess.
By the way, using "atoi" in a code snippet in 2025 and complaining that it is "not ideal" is, well, not ideal.
I tried again recently for a proxy I was writing thinking surely things have evolved at this point. Every single package manager couldn’t handle my very basic and very popular dependencies. I mean I tried every single one. This is completely insane to me.
Not to mention just figuring out how to build it after that which was a massive headache and an ongoing one.
Compared to Rust it’s just night and day.
Outside of embedded programming or some special use cases I have literally no idea why anyone would ever write C++. I’m convinced it’s a bunch of masochists
I am no expert so take it with a grain of salt, but that was how it felt for me.
With Nix, the package selection is great and repackaging is fairly straight forward.
Well there's your problem - no serious project uses one.
> I’m convinced it’s a bunch of masochists
People use cpp because it's a mature language with mature tooling and an enormous number of mature libraries. Same exact reason anyone uses any language for serious work.
C++ "gets away" with it because of templates. Many (most?) libraries are mostly templates, or at the very least contain templates. So you're forced into include-style dependencies and it's pretty painless. For a good library, it's often downloading a single file and just #include-ing it.
C++ is getting modules now, and maybe that will spur a new interest in package managers. Or maybe not, it might be too late.
The shenanigans people get into with CMake, Conan, vcpkg, and so on is a patchwork of nightmares and a huge time sink compared to superior solutions that people have gotten used to in other languages, including Rust.
C++ build systems are notoriously brittle. When porting a project to a new platform, you're never just porting the code, you are also porting your build system. Every single project is bespoke in some way, sometimes because of taste, but most of the time because of necessity.
It works because people spend a huge amount of time to make it work.
Everyone knows the system is brittle, but somehow manages to handle it.
It’s the same build system for all of them.
Modern C++ has reduced a lot of typing through type inference, but otherwise the language is still strongly typed and essentially the same.
Meanwhile, that was one of the reasons why, after Turbo Pascal, my next favourite programming language became C++.
For me, mastering C after 1992 only mattered because, as a professional, it is something I occasionally have to delve into, so better to know your tools even if the grip itself has sharp corners. Otherwise, every time the option was constrained to either C or C++, I always picked C++.
It _is_ statically typed, though, so it falls in a weird category of loosely _and_ statically typed languages.
Meaning you're in a context where you have control over the C++ code you get to write. In my company, lots of people get to update code without strict guidelines. As a result, the code is going to be complex. I'd rather have a simpler and more restrictive language, and I'll always favor Rust projects over C++ ones.
Of course it will probably not be as bad as C++, but still it will be complex and people will be looking for a simpler language.
That's not a good reason to stick with inferior tools now, though.
Rust is inferior to C++ for my needs. This is just a reflection of the fact that we started a large project in C++ before Rust existed, and now have millions of lines. Getting Rust to work with our existing C++ is hard enough as to not be worth it. Rewriting in Rust would cost a billion dollars. Thus, despite all the problems we have with C++ that Rust would solve, Rust is inferior.
(Rust is working on their C++ interoperability story and we are making changes that will allow using Rust in the future so I reserve the right to change this story in a few years, but only time will tell)
There will always remain two types of languages: those that nobody uses and those that everybody complains about.
Not because Rust is doing anything wrong here, but because the first well-known language to really get some of these things right also happens to be a fairly low-level systems language with manual memory management.
A lot of my colleagues seem to primarily be falling in love with Rust because it's doing a good job at some basic things that have been well-known among us "academic" functional programming nerds for decades, and that's good. It arguably made inroads where functional programming languages could not because it's really more of a procedural language, and that's also good. Procedural programming is a criminally underrated and misunderstood paradigm. (As much as I love FP, that level of standoffishness about mutation and state isn't any more pragmatic than OOP being so hype about late binding that every type must support it regardless of whether it makes sense in that case.)
But they're also thoroughly nerdsniped by the borrow checker. I get it, you have to get cozy with the borrow checker if you want to use Rust. But it seems like the moral opposite of sour grapes to me. The honest truth is that, for most the software we're writing, a garbage collected heap is fine. Better, even. Shared-nothing multithreading is fine. Better, even.
So now we're doing more and more things in Rust. Which I understand. But I keep wishing that I could also have a Rust-like language that just lets me have a garbage collector for the 95% of my work where the occasional 50ms pause during run-time just isn't a big enough problem to justify a 50% increase in development and maintenance effort. And then save Rust for the things that actually do need to be unmanaged. Which is maybe 5% of my actual work, even if I have to admit that it often feels like 95% of the fun.
It also has half implementations of all the useful features (no distinct enum variant types, traits only half-exist) because you have to code to the second, hidden language that it actually compiles to.
What do you mean by traits only half-existing?
[1] https://doc.rust-lang.org/stable/std/mem/fn.discriminant.htm...
The toolchain might be a first candidate. Rust's toolchain feels so very modern, and OCaml's gives me flashbacks to late nights trying to get my homework done on the department's HP-UX server back in college.
The conversion function is more of a language issue. I don’t think there is a simple way of creating a Rust-equivalent version because C++ has implicit conversions. You could probably create a C++-style turbofish though, parse<uint32_t>([your string]), and have it throw or return std::expected. But you would need to implement that yourself, unless there is some stdlib version I don’t know of.
Don’t conflate language features with library features.
And -Wconversion might be useful for this but I haven’t personally tried it since what Matt is describing with explicit types is the accepted best practice.
I have my gripes with Rust, though more with its ecosystem and community than the core language. I won’t ever say it’s a worse language than C++.
Maybe that’s more of a bias from Rust media stuff, which seems to be going deeper into that rabbit hole though.
The community was, at least (and may still be), very sensitive to Rust being criticised. I genuinely brought an example of a provably correct piece of code that the borrow checker wouldn’t accept, the interior mutability problem. I was told I should build a massive abstraction to avoid the problem and that I’m holding it wrong… It put me off the language for a few years. It shouldn’t have, I should have just ignored the people and continued on, but we all get older and learn things.
My favorite is when Rust gets dragged into weird American "culture wars" - somehow, it's a "woke" language? (And somehow, that's a problem?)
But yeah, the language docs are pretty up front about the fact that the borrow checker sometimes rejects code that is provably fine, so it's a weird criticism. The nontrivial breakthrough was that Rust proved that a huge amount of nontrivial code can be written within the restrictions of the borrow checker, eliminating swaths of risk factors without a resource penalty.
I like the Rust language quite a bit. I find the Rust community to be one of the most toxic places in the entire tech business. Your mileage may vary and that's fine of course - but plenty of people want to stay far away from a community that acts like the Rust community does.
On the surface it sounds like a community with such deep pathology that it will take at least a generation following a complete change of leadership to have a chance at recovery. But there are three sides to every story.
> On the surface it sounds like a community with such deep pathology
First what sort of pathology? You're confusing community with leadership.
The community didn't want this, and leadership was doing a restructuring due to change from Foundation and Project. Welcome to OSS projects.
Second as opposed to what?
A community at the beck and call of your CEO dictator? I'm a Java dev, so all it takes for Java to die is for One Rich Asshole Called Larry Ellison to decide that they (ORACLE) are inserting two mandatory ads to be watched during each Java compiler run. Or god forbid that they will monetize Java.
Plus if I had 24/7 insight into how Oracle worked, I'd probably also be much less inclined to join Java as a new dev.
To paraphrase Tolstoy: (All perfect languages are dead;) Each imperfect language is imperfect in its own way.
Especially if you're coming from different langs.
If I delete/rename a field of a class in any statically checked language, it's going to report a compile error, and it's still a breaking change. Same thing with named arguments.
Even if you don't use keyword args, your parameter names are still part of your API surface in Python, because callers can directly name positional args. Only recently have you been able to enforce unnamed, positional-only args, as well as the opposite.
Quantity(100) is counterproductive here, as that doesn't narrow the type, it does the opposite, it casts whatever value is given to the type, so even Quantity(100.5) will still work, while just plain 100.5 would have given an error with '-Wconversion'.
Additionally, `clang-tidy` catches this via `bugprone-narrowing-conversions` and your linter will alert if properly configured.
I do run clippy on my Rust projects, but that's a matter of style and readability, not correctness (for the most part!).
I appreciate that there are guardrails in a tool like rust, I also appreciate that sharp tools like c exist, they both have advantages.
There are also more type-safe conversion methods that perform a more focused conversion. Eg a widening conversion from i8 -> i16 can be done with .into(), a narrowing conversion from i16 -> i8 can be done with .try_into() (which returns a Result and forces you to handle the overflow case), a signed to unsigned reinterpretation like i64 -> u64 can be done with .cast_unsigned(), and so on. Unlike `as` these have the advantage that they stop compiling if the original value changes type; eg if you refactor something and the i8 in the first example becomes an i32, the i32 -> i16 conversion is no longer a widening conversion so the `.into()` will fail to compile.
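A minimal sketch of those conversions (note that cast_unsigned is only available on recent stable toolchains):

fn main() {
    let a: i8 = -5;
    let b: i16 = a.into();             // widening: infallible, via From
    let c: i8 = b.try_into().unwrap(); // narrowing: TryFrom returns a Result
    assert_eq!(c, -5);
    assert!(i8::try_from(300i16).is_err()); // 300 doesn't fit in an i8
    let d: i64 = -1;
    assert_eq!(d.cast_unsigned(), u64::MAX); // sign reinterpretation
}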
Conversions may be fine and even useful in many cases, in this case it isn’t. Converting to std::variant or std::optional are some of those cases that are really nice.
So is this really a language comparison, or what libraries are available for each language platform? If the latter, that's fine. But let's be clear about what the issue is. It's not the language, it's what libraries are included out of the box.
To entertain the argument, though, it may not be a language issue, but it certainly is a selling point for the language (which to me indicates a "language issue") to me if the language takes care of this "library" (or good defaults as I might call them) for you with no additional effort -- including tight compiler and tooling integration. That's not to say Rust always has good defaults, but I think the author's point is that if you compare them apples-to-oranges, it does highlight the different focuses and feature sets.
I'm not a C++ expert by any stretch, so it's certainly a possibility that such a library exists that makes Rust's type system obsolete in this discussion around correctness, but I'm not aware of it. And I would be incredibly surprised if it held its ground in comparison to Rust in every respect!
From your link:
> Nevertheless, research has produced positive empirical evidence supporting a weaker version of linguistic relativity:[5][4] that a language's structures influence a speaker's perceptions, without strictly limiting or obstructing them.
And while we're at it, why not use assembly? It's all just "syntactic sugar" over bits, doesn't make any difference, right?
Yes, you can ask the std::sto* functions for the position where they stopped because of invalid characters and see if that position is the end of the string, but that is much more complex than should be needed for something like that.
These functions don't convert a string to a number, they try to extract a number from a string. I would argue that most of the time, that's not what you want. Or at least, most of the time it's not what I need.
atoi has the same problem of course, but even worse.
To get the equivalent of Rust's
if let Ok(x) = input.parse::<i32>() {
println!("You entered {x}");
} else {
eprintln!("You did not enter a number");
}
you need something like:

int x{};
auto [ptr, ec] = std::from_chars(input.data(), input.data() + input.size(), x);
if (ec == std::errc() && ptr == input.data() + input.size()) {
    std::cout << "You entered " << x << std::endl;
} else {
    std::cerr << "You did not enter a valid number" << std::endl;
}
I find the choice to always require a start and an end position, and not to provide a method that simply passes or fails, to be quite baffling. In C++26, they also added an automatic boolean conversion for from_chars' return type to indicate success, which considers "only consumed half the input from the start" to be a success.

Maybe I'm weird for mostly writing code that does straightforward input-to-number conversions and not partial string parsers, but I have yet to see a good alternative for Rust's parse().
It's bad if it alters values (e.g. rounding). Promotion from one number representation to another (as long as it preserves values) isn't bad. This is trickier than it might seem, but Virgil has a good take on this (https://github.com/titzer/virgil/blob/master/doc/tutorial/Nu...). Essentially, it only implicitly promotes values in ways that don't lose numeric information and thus are always reversible.
In the example, Virgil won't let you pass "1000.00" to an integer argument, but will let you pass "100" to the double argument.
// OK implicit promotions
def x1: i20;
def f1: float = x1;
def x2: i21;
def f2: float = x2;
def x3: i22;
def f3: float = x3;
def x4: i23;
def f4: float = x4;
// compile error!
def x5: i24;
def f5: float = x5; // requires rounding
This also applies to casts, which are dynamically checked.

// runtime error if rounding alters value
def x5: i24;
def f5: float = float.!(x5);
Depends on what you want from such a hierarchy, of course, but there is for example an injection i32 -> f64 (and if you consider the i32 operations to be undefined on overflow, then it’s also a homomorphism wrt addition and multiplication). For a more general view, various Schemes’ takes on the “numeric tower” are informative.
https://doc.rust-lang.org/stable/rust-by-example/conversion/...
https://doc.rust-lang.org/stable/rust-by-example/conversion/...
If the conversion will always succeed (for example an 8-bit unsigned integer to a 32-bit unsigned integer), the From trait would be used to allow the conversion to feel implicit.
If the conversion could fail (for example a 32-bit unsigned integer to an 8-bit unsigned integer), the TryFrom trait would be used so that an appropriate error could be returned in the Result.
These traits prevent errors when converting between types and clearly mark conversions that might fail since they return Result instead of the output type.
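For illustration, a minimal sketch of both directions (Meters and Millimeters are made-up newtypes; the integer conversions use the std impls):

struct Millimeters(u64); // hypothetical newtypes for illustration
struct Meters(u64);

impl From<Meters> for Millimeters {
    // the conversion always succeeds, so From is the right trait
    fn from(m: Meters) -> Millimeters {
        Millimeters(m.0 * 1000)
    }
}

fn main() {
    let mm: Millimeters = Meters(3).into(); // .into() resolves via the From impl
    assert_eq!(mm.0, 3000);

    // fallible narrowing goes through TryFrom and returns a Result
    assert_eq!(u8::try_from(200u32).unwrap(), 200);
    assert!(u8::try_from(300u32).is_err()); // 300 doesn't fit in a u8
}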
You would do
[orderbook sendOrderWithSymbol:"foo" buy:true quantity:100 price:1000.00]
Cannot confuse that!
(I never used swift, I think it retains this?)
https://learn.adacore.com/courses/Ada_For_The_CPP_Java_Devel...
I suppose it could still be added in the future; there are probably several syntax options that would be fully backward-compatible, without even needing a new Rust edition.
Sorry, I don't agree.
First, code is read far more often than written. The few seconds it takes to type out the arguments are paid again and again each time you have to read it.
Second, this is one of the few things that autocomplete is really good at.
Third, almost everybody configures their IDE to display the names anyway. So, you might as well put them into the source code so people reading the code without an IDE gain the benefit, too.
Finally, yes, they are redundant. That's the point. If the upstream changes something and renames the argument without changing the type I probably want to review it anyway.
I always thought it was called godbolt because it's like... Zeus blowing away the layers of compilation with his cosmic power, or something. Like it's a herculean task
Creating strong types for currency seems like common sense, and isn't hard to do. Even the Rust code shouldn't be using basic types.
So no implicit type conversions, safer strings, etc.
A big, hard-to-solve problem is that you are likely using C because of the ecosystem and/or the performance characteristics. Because of the C header/macro situation, that becomes just a huge headache. All of a sudden you can't bring in, say, Boost, because the header uses the quirks excluded from your smaller C language.
* "No implicit type conversions" is trivial, and hardly worth mentioning. Trapping on both signed and unsigned overflow is viable but for hash-like code opting in to wrapping is important.
* "Safer strings" means completely different things to different people. Unfortunately, the need to support porting to the new language means there is little we can do by default, given the huge amount of existing code. We can however, add new string types that act relatively uniformly so that the code can be ported incrementally.
* For the particular case of arrays, remember that there are at least 3 different ways to compute their length (sentinel, size, end-pointer). All of these will need proper typing support. In particular, remember functions that take things like `(begin, middle, end)`, or `(len, arr1[len], arr2[len])`.
* Support for nontrivial trailing array-or-other datums, and also other kinds of "multiple objects packed within a single allocation", is essential. Again, most attempted replacements fail badly.
* Unions, unfortunately, will require much fixing. Most only need a tag logic (or else replacement with bitcasting), but `sigval` and others like it are fundamentally global in nature.
* `va_list` is also essential to support since it is very widely used.
* The lack of proper C99 floating-point support, even in $CURRENTYEAR, means that compile-to-C implementations will not be able to support it properly either, even if the relevant operations are all properly defined in the new frontend to take an extra "rounding mode" argument. Note that the platform ABI matters here.
* There are quite a few things that macros are used for, but ultimately this probably is a finite set so should be possible to automatically convert with a SMOC.
Failure to provide a good porting story is the #1 mistake most new languages make.
Except for some missing pieces, this is safe, and I have a prototype based on GCC that would warn about any unsafe features. va_list can be safely used, at least with format strings, and for unions I need annotations. Lifetimes are the bigger outstanding issue.
What do you mean? What's wrong with floating point numbers in C99?
The core of Rust is actually very simple: Struct, Enum, Functions, Traits.
> eventually I came to the depressing conclusion that there’s no way to get a group of C experts — even if they are knowledgable, intelligent, and otherwise reasonable — to agree on the Friendly C dialect. There are just too many variations, each with its own set of performance tradeoffs, for consensus to be possible.
Safer strings is harder, as it gets into the general memory safety problem, but people have tried adding safer variants of all the classic functions, and warnings around them.
Sun actually did it right with Java, recognizing that if they mainly targeted SunOS/Solaris, no one would use it. And even though Oracle owns it now, it's not really feasible for them to make it proprietary.
Apple didn't care about other platforms (as usual) for quite a long time in Swift's history. Microsoft was for years actively hostile toward attempts to run .NET programs on platforms other than Windows. Regardless of Apple's or MS's current stance, I can't see myself ever bothering with Swift or C#/F#/etc. There are too many other great choices with broad platform and community support, that aren't closely tied to a corporation.
It's been 10 years. Even before that, no action was ever taken against Mono nor any restriction put or anything else. FWIW Swift shares a similar story, except Apple started to care only quite recently about it working anywhere else beyond their platforms.
Oh, and by the way, you need to look at these metrics: https://dotnet.microsoft.com/en-us/platform/telemetry
Maybe take off the conspiracy hat?
> There are too many other great choices with broad platform and community support
:) No, thanks, I'm good. You know why I stayed in .NET land and didn't switch to, say, Go? It's not that it's so good, it's because most alternatives are so bad in one or another area (often many at the same time).
Aliasing rules can also be problematic in some circumstances (but also beneficial for compiler optimisations).
And the orphan rule is also quite restrictive for adapting imported types, if you're coming from an interpreted language.
https://loglog.games/blog/leaving-rust-gamedev/ sums up the main issues nicely tbh.
I bet assembly programmers said the same about C!
Every language has relatively minor issues like these. Seriously pick a language and I can make a similar list. For C it will be a very long list!
It's important to be careful here: a lot (most? all?) of these rejections are programs that could be sound in a hypothetical Rust variant that didn't assert the unique/"noalias" nature of &mut references, but are in fact unsound in actual Rust.
Anyhow, I won't go back to C++ land. Better this than whatever arcane, 1000-line, template-hell error message that kept me fed when I was there.
Rust chose (intentionally or otherwise) to do the opposite of the many things that C++ does, because C++ does it wrong. And C++ does it wrong because we didn't know any better at the time, and the world, pre-internet, was much less connected. Someone had to do it first (or first-ish).
The main thing I like about Rust is the tooling. C++ is death by a thousand build systems and sanitizers.
Yes, but the strength of Rust's type system means you're forced to handle those bad dynamic values up front (or get a crash, if you don't). That means the rest of your code can rest safe, knowing exactly what it's working with. You can see this in OP's parsing example, but it also applies to database clients and such
That said, Rust also makes it very easy to define your own types that can only be constructed/unpacked in limited ways, which can enforce special constraints on their contents. And it has a cultural norm of doing this in the standard library and elsewhere
Eg: a sibling poster noted the NonZero<T> type. Another example is that Rust's string types are guarantees to always contain valid UTF-8, because whenever you try and convert a byte array into a string, it gets checked and possibly rejected.
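For example, a quick sketch of that UTF-8 check (the byte values are chosen arbitrarily):

fn main() {
    // valid UTF-8 bytes convert successfully
    let ok = String::from_utf8(vec![104, 105]); // b"hi"
    assert_eq!(ok.unwrap(), "hi");

    // 0xff never appears in well-formed UTF-8, so the conversion is rejected
    let bad = String::from_utf8(vec![0xff, 0xfe]);
    assert!(bad.is_err()); // rejected instead of smuggling garbage into a String
}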
Even Rust's types aren't going to help you if two arguments simply have the same types.
Or another (dummy) example: transfer(accountA, accountB). Make two types that wrap the same underlying type, but with one being a TargetAccount and the other a SourceAccount.
Use the type system to help you, don’t fight it.
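A minimal sketch of that newtype trick (the account types are made up for illustration):

struct SourceAccount(u64); // hypothetical newtypes over the same underlying id
struct TargetAccount(u64);

fn transfer(from: SourceAccount, to: TargetAccount, amount: u64) {
    println!("moving {amount} from account {} to account {}", from.0, to.0);
}

fn main() {
    transfer(SourceAccount(1), TargetAccount(2), 100); // compiles
    // transfer(TargetAccount(2), SourceAccount(1), 100); // compile error: swapped
}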
Sound type systems are equivalent to proof systems.
You can use them to design data structures where their mere eventual existence guarantee the coherence and validity of your program’s state.
The basic example is “Fin n” that carries at compile time the proof that you made the necessary bounds checks at runtime or by construction that you never exceeded some bound.
Some languages allow you to build entire type level state machines! (eg. to represent these transactions and transitions)
pydantic is a library for python, but I'm not aware of anything similar in rust or golang that can do this yet? (i.e. not just schema validation, but value range validation too)
My median Rust coding session isn't much different, I also write code that doesn't work, but it's caught by the compiler. Now, most people call this "fighting with the borrow checker" but I call it "avoiding segfaults before they happen" because when I finally get through the compiler my code usually "just works". It's that magical property Haskell has, Rust also has it to a large extent.
So then what's different about Rust vs. C++? Well Rust actually provides me a path to get to a working program whereas C++ just leaves me with an error message and a treasure map.
What this means is that although I'm a bad programmer, Rust gives me the support I need to build quite large programs on my own. And that extends to the crate ecosystem as well, where they make it very easy to build and link third party libraries, whereas with C++ ld just tells you that it had a problem and you're left on your own to figure out exactly what.
There are some places I won’t be excited to use rust, and media heavy code is one of those places…
Give an example. I have been programming in C/C++ for close to 30 years and the places where I worked had very strict guidelines on C++ usage. We could count the number of times we shot ourselves due to the language.
Low level languages always came with a set of usage guidelines. Either you make the language safe enough that no one can shoot themselves in the foot, sacrificing performance, or you provide guidelines on how to use it while retaining the ability to extract maximum performance from the hardware.
C/C++ shouldn't be approached like programming in Javascript/Python/Perl.
If you don't want unsafe, you can make use of this safe derive: https://docs.rs/bytemuck/latest/bytemuck/trait.TransparentWr...
template <typename T> explicit Quantity(T quantity) : m_quantity(quantity) {
sendOrder("GOOG", false, Quantity(static_cast<unsigned>(atoi("-100"))),
In Rust what they'd do if they realised there's a problem like this is make explicit conversion the default, with a language Edition, and so within a few years just everybody is used to the improved language. In C++ instead you have to learn to write all the appropriate bugfix keywords all over your software, forever.
Agreed. The history here is compatibility with C type conversion.
I just expected a more compelling Rust/C++ comparison, but we got an emphasis on a poorly designed feature which the standard has already taken steps to improve.
In C++ when we define a class Foo (a thing which doesn't exist in C) and we write a constructor Foo(Bar x) (which doesn't exist in C) which takes a single parameter [in this case a Bar named x], that is implicitly adopted as a conversion for your new user defined type and by default without any action on your part the compiler will just invoke that constructor to make a Bar into a Foo whenever it thinks that would compile.
This is a bad choice, and it's not a C choice, it's not about "compatibility".
No.
> it's not a C choice, it's not about "compatibility".
One of the design goals of C++ classes is that you can create a class as powerful as int - you can’t do that without implicit conversion.
This is just another thing on the deep pile of wrong defaults in C++.
Been true in all statically typed languages for decades!
It's good advice.
OK but "this makes for a nice example" is silly, given that the only reason the example throws an error is that you used a float here, when both `quantity` and `price` would have been ints.
error[E0308]: arguments to this function are incorrect
--> order/order-1.rs:7:5
|
7 | send_order("GOOG", false, 1000.00, 100); // Wrong
| ^^^^^^^^^^ ------- --- expected `f64`, found `{integer}`
| |
| expected `i64`, found `{float}`
I love Rust, but this is artificial.

In C and C++, people tend to actually write code where the file descriptor will be an int, and the timeout will be an int, and the user account number will be an int, and the error code will be an int... because the language doesn't help much when you don't want that.
In Rust, people actually write code where the file descriptor will be an OwnedFd (from the stdlib), the timeout will be a Duration (from the stdlib), the user account number might be their own AcctNo, and that error code is maybe MyCustomError.
This is a language ethos thing. C++ got string slices after Rust, despite the language being much older. String slices are a really basic, central idea, but eh, C++ programmers a decade ago just had char* pointers instead and tried not to think about it too much. Still today, plenty of C++ APIs don't use string slices, don't work with a real duration type, and so on. It's technically possible, but the language doesn't encourage it.
What C++ does encourage is magic implicit conversion, as with this f64 versus i64 case.
#include <exception>
#include <sstream>
#include <stdexcept> // for std::logic_error thrown below
template <typename From, typename To>
void convert_safely_helper_(From const& value, To& result) {
std::stringstream sst;
sst << value;
sst >> result;
}
// Doesn't throw, just fails
template <typename From, typename To>
bool convert_safely(From const& value, To* result) {
From check;
convert_safely_helper_(value, *result);
convert_safely_helper_(*result, check);
if (check != value) {
*result = To();
return false;
}
return true;
}
// Throws on error
template <typename To, typename From>
To convert_safely(From const& value) {
To result;
if (!convert_safely(value, &result))
throw std::logic_error("invalid conversion");
return result;
}
#include <iostream>
template <typename Buy, typename Quantity, typename Price>
void sendOrder(const char* symbol, Buy buy, Quantity quantity, Price price) {
std::cout << symbol << " " << convert_safely<bool>(buy) << " "
<< convert_safely<unsigned>(quantity) << " " << convert_safely<double>(price)
<< std::endl;
}
#define DISPLAY(expression) \
std::cout << #expression << ": "; \
expression
template <typename Function>
void test(Function attempt) {
try {
attempt();
} catch (const std::exception& error) {
std::cout << "[Error: " << error.what() << "]" << std::endl;
}
}
int main(void) {
test([&] { DISPLAY(sendOrder("GOOG", true, 100, 1000.0)); });
test([&] { DISPLAY(sendOrder("GOOG", true, 100.0, 1000)); });
test([&] { DISPLAY(sendOrder("GOOG", true, -100, 1000)); });
test([&] { DISPLAY(sendOrder("GOOG", true, 100.5, 1000)); });
test([&] { DISPLAY(sendOrder("GOOG", 2, 100, 1000)); });
}
Output:

sendOrder("GOOG", true, 100, 1000.0): GOOG 1 100 1000
sendOrder("GOOG", true, 100.0, 1000): GOOG 1 100 1000
sendOrder("GOOG", true, -100, 1000): GOOG 1 [Error: invalid conversion]
sendOrder("GOOG", true, 100.5, 1000): GOOG 1 [Error: invalid conversion]
sendOrder("GOOG", 2, 100, 1000): GOOG [Error: invalid conversion]
Rust of course leaves "fewer footguns lying around", but I still prefer to use C++ if I have my druthers.

And don't get me started on dynamic graphs.
I would happily use Rust over C++ if it had all other improvements but similar memory management. I am completely unproductive with Rust model.
And while unsafe Rust does have some gotchas that vanilla modern C++ does not, I would much rather have a 99% memory-safe code base in Rust than a 100% "who knows" code base in C++.
You gotta get your timing right. Right hook followed by kidney shot works every time.
But once you are _maintaining_ applications, man it really does feel like absolute magic. It's amazing how worry-free it feels in many respects.
Plus, once you do embrace it, become familiar, and start forward-thinking about these things, especially in areas that aren't every-nanosecond-counts performance-wise and can simply `Arc<>` and `.clone()` where you need to, it is really quite lovely and you do dramatically less fighting.
Rust is still missing a lot of features that other more-modern languages have, no doubt, but it's been a great ride in my experience.
The idea with Rust is that you get safety...not that you get safety at the cost of performance. The language forces you into paying a performance cost for using patterns when it is relatively easy for a human to reason about safety (imo).
You can use `unsafe`, but then you naturally ask yourself why you are using Rust (not rational, but true). You can use lifetimes but, personally, every time I have tried to use them I haven't been able to indicate to the compiler that my code is actually safe.
In particular, the protections against double-free and use-after-free are extremely limiting, and it is possible to reason about these particular bugs in other ways (i.e. defer in Go and Zig) in a way that doesn't force you to change the way you code.
Rust is good in many ways, but the specific problem mentioned at the top of this chain is a big issue. Just saying "don't use this type of data structure unless you pay a performance cost" isn't an actual solution to the problem. The problem with Rust is that it tries to force safety but doesn't have good ways for devs to tell the compiler code is safe... that is a fundamental weakness.
I use Rust quite a bit, it isn't a terrible language and is worth learning but these are big issues. I would have reservations using the language in my own company, rather than someone else's, and if I need to manage memory then I would look elsewhere atm. Due to the size of the community, it is very hard not to use Rust too (for example, Zig is great...but no-one uses it).
The pragmatism of Rust means that you can use reference counting if it suits your use case.
Unsafe also doesn't mean throwing out the Rustiness of Rust, but others have written more extensively about that and I have no personal experience with it.
> The problem with Rust is that it tries to force safety but doesn't have good ways for devs to tell the compiler code is safe...that is a fundamental weakness.
My understanding is that this is the purpose of unsafe, but again, I can't argue against these points from a standpoint of experience, having stuck pretty strictly to safe Rust.
Definitely agree that there are issues with the language, no argument there! So do the maintainers!
> if I need to manage memory then I would look elsewhere atm
Haha I have the exact opposite feeling! I wouldn't try to manage memory any other way, and I'm guessing it's because memory management is more intuitive and well understood by you than by me. I'm lazy and very much like having the compiler do the bulk of the thinking for me. I'm also happy that Rust allows for folks like me to pay a little performance cost and do things a little bit easier while maintaining correctness. For the turbo-coders out there that want the speed and the correctness, Rust has the capability, but depending on your use case (like linked lists) it can definitely be more difficult to express correctness to the compiler.
I think the issue that people have is that they come into Rust with the expectation that these problems are actually solved. As I said, it would be nice if lifetimes weren't so impossible to use.
The compiler isn't doing the thinking if you have to change your code so the compiler is happy. The problem with Rust is too much thinking: you try something, the compiler complains, what is the issue here, can I try this, still complains, what about this, etc. There are specific categories of bugs that Rust is trying to fix that don't require the changes Rust demands in order to ensure correctness... and if you use reference counting, you can have more bugs.
(Filled with boilerplate, strange Rust idioms, borrow_unchecked, PhantomData, and you still have to manage lifetime annotations.)
All safe code is built on a foundation of unsafe code.
There's exactly as much as there was before though. The entire point of the Rust safety paradigm is that you can guarantee that unsafe code is confined to only where it is needed. Nobody ever promised "you will never have to write unsafe code", because that would be clearly unfeasible for the systems programming domain Rust is trying to work in.
I frankly cannot understand why people are so willing to throw the baby out with the bathwater when it comes to Rust safety. It makes no sense to me to say "my code needs to have some % unsafe, so I'll just make it 100% unsafe then" (which is effectively what one does when they use C or C++ instead). Why insist on not taking any safety gains at all when one can't have 100% gain?
    #include <iostream>

    struct Price { double x; };
    struct Quantity { int x; };

    void sendOrder(const char *symbol, bool buy, Quantity quantity, Price price) {
        std::cout << symbol << " " << buy << " " << quantity.x << " " << price.x
                  << std::endl;
    }

    int main(void) {
        sendOrder("GOOG", false, Quantity{100}, Price{1000.00}); // Correct
        sendOrder("GOOG", false, Price{1000.00}, Quantity{100}); // compiler error
    }

If you're trying to get it to type check, you have to make a type first.
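For comparison, the same newtype trick in Rust, sketched to mirror the snippet above (the names are carried over from it, not from any real API); tuple structs keep the ceremony down:

    // Tuple structs as newtypes: swapping the argument order
    // fails to compile, just like the C++ version.
    struct Price(f64);
    struct Quantity(i32);

    fn send_order(symbol: &str, buy: bool, quantity: Quantity, price: Price) {
        println!("{} {} {} {}", symbol, buy, quantity.0, price.0);
    }

    fn main() {
        send_order("GOOG", false, Quantity(100), Price(1000.00)); // Correct
        // send_order("GOOG", false, Price(1000.00), Quantity(100)); // compiler error
    }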
I don't appreciate these arguments, and view them as disingenuous.
My reading of this article wasn't that these things are impossible in C++, just that they're not the default or the first thing you'd try as a beginner.
https://learn.adacore.com/courses/intro-to-ada/chapters/stro...
And there is an Ada implementation, GNAT, that is part of GCC.
For comparison, Swift uses "+" for checked addition, and as a result the majority of developers use checked addition by default. In Rust, due to its poor design choices, most developers use wrapping addition even where checked addition should be used.
[1]: https://doc.rust-lang.org/src/alloc/vec/mod.rs.html#2010
Of course, if you don't trust the standard library, you can turn on overflow checks in release mode too. However, the standard library is well tested and I think most people would appreciate the speed from eliding redundant checks.
[0]: https://doc.rust-lang.org/src/alloc/raw_vec.rs.html#651
[1]: https://doc.rust-lang.org/src/alloc/raw_vec.rs.html#567
Your example code is not written that way because it is faster to write; it is because it is impossible for it to overflow on that line.
Or, just because it has some overhead on Intel CPUs, must we forget about writing safer code?
This line could only overflow after we need to grow the container, so immediately this means the type T isn't a ZST, as the Vec for ZSTs doesn't need storage and so never grows.
Because it's not a ZST, the maximum capacity in Rust is never bigger than isize::MAX, which is an entire binary order of magnitude smaller than usize::MAX; as a result, len + 1 can't overflow the unsigned type, so this code is correct as written.
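A small sketch of that argument (my own illustration, not the actual Vec internals):

    // For non-zero-sized element types, Vec caps its capacity at
    // isize::MAX, so len <= isize::MAX < usize::MAX and `len + 1`
    // can never wrap the unsigned type.
    fn next_len(len: usize) -> usize {
        debug_assert!(len <= isize::MAX as usize); // Vec's capacity invariant
        len + 1 // provably cannot overflow given the invariant above
    }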
https://play.rust-lang.org/?version=stable&mode=debug&editio... https://play.rust-lang.org/?version=stable&mode=debug&editio...
[0]: https://play.rust-lang.org/?version=stable&mode=debug&edition=2024&gist=847dc401e16fdff14ecf3724a3b15a93
[1]: https://doc.rust-lang.org/cargo/reference/profiles.html
Rust's developers made a poor choice. They should have provided a special function for unchecked addition and had the "+" operator always panic on overflow.
The checks being on in the debug config means your tests and replications of bug reports will catch overflow if they occur. If you are working on some sensitive application where you can't afford logic bugs from overflows but can afford panics/crashes, you can just turn on checks in release mode.
If you are working on a library which is meant to do something sensible on overflow, you can use the wide variety of member functions such as 'wrapping_add' or 'checked_add' to control what happens on overflow regardless of build configuration.
Finally, if your application can't afford to have logic bugs from overflows and also can't panic, you can use kani [0] to prove that overflow never happens.
All in all, it seems to me like Rust supports a wide variety of use cases pretty nicely.
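A quick sketch of those options (the method names are real standard-library APIs; the scenario is made up):

    fn main() {
        let x: u8 = 250;

        // Explicit behaviour, independent of build configuration:
        assert_eq!(x.wrapping_add(10), 4);          // wraps around
        assert_eq!(x.checked_add(10), None);        // reports overflow
        assert_eq!(x.saturating_add(10), u8::MAX);  // clamps at the max

        // Plain `+` panics on overflow in debug builds and wraps in
        // release builds, unless `overflow-checks = true` is set in
        // the Cargo profile.
        let y = x + 5; // 255: fine in any configuration
        println!("{y}");
    }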
By default, they're on in debug mode and off in release mode.
There are also specific methods for doing overflow-checked arithmetic if you like.
Just insist that the programmer prove that overflow can't occur, and reject programs where the programmer couldn't or wouldn't do this.
I imagine faceless, shameless mega-corps with thousands of Rust/Go peons coding away on the latest soulless business apps, designed to funnel the ignorant masses down corridors of dark-pattern clickbait and confusing UX.
Having exposed my biases, happy to be proven wrong. Why are game studios still using C++? Because that's the language game programmers know and feel comfortable with? Or some other reason?
Embedded is still C, games are C++, scientific and data are Python and R (I'm talking in general here). What is the niche for Rust?
Games are written in C++ because game engines and tooling have person-centuries of work poured into them. Reimplementing Unreal Engine in Rust would require another few person-centuries of work, which is an investment that doesn't really make sense. Economically, dealing with the shortcomings of C++ is much, much cheaper.
But Rust is definitely encroaching in all of these areas. Embedded Rust is doing great, scientific Rust is getting there (check pola.rs). Rust is an obvious candidate for the next big game engine, and it is already quite viable for indie undertakings, though it is still early days.
I think Rust has too high a learning curve, and too many features, for novice programmers in general.
> Embedded is still C, games are C++, scientific and data are Python and R (I'm talking in general here). What is the niche for Rust?
Rust has already made huge inroads in CLIs and TUIs, as far as I can tell. Embedded is a slow-moving beast by design, but it seems to me (as someone in an adjacent area) that it could be a big win there, particularly in places that need safety certification.
All the stories of people using Rust for game development are about people who tried it and found that it doesn't fit: it makes experimentation and exploration slow enough that the reduction in minor bugs in game logic isn't really worth it.
Rust is a bit more systems-focused, for low-level stuff; see its inclusion in the Linux kernel. It's also seeing some traction in the WASM space, given that it isn't garbage-collected.
They're both quite versatile though, so the above are pretty gnarly generalisations.
Zig is in a similar space as these.
NumPy uses C/C++ because BLAS uses C/C++. Torch originally used Lua, then switched to Python because of its popularity.