---
Glad to see fluffy negative articles about Rust shooting up to the first slot on HN in 20 minutes. It means Rust has finally made it mainstream :)
---
The points, addressed, I guess?
- Rust has panics, and this is bad: ...okay? Nobody is writing panic handling code, it's not a form of error handling
- Rust inserts Copy, Drop, Deref for you: it would be really annoying to write Rust if you had to call `.copy()` on every bool/int/char. A language like this exists, I'm sure, but this hasn't stopped Rust from taking off
- Fetishization of Efficient Memory Representation: ... I don't understand what the point is here. Some people care about avoiding heap allocations? They're a tool just like anything else
- Rewrite anything and it gets faster: okay sure, but there are limits to how fast I can make a Py/JS algorithm vs a compiled language, and Rust makes writing compiled code a bit easier. People probably aren't rewriting slow Python projects in C these days
- Rust is as complex as C++: ...no, it's not. Rust really hasn't changed much in the past 6 years. A few limitations being lifted, but nothing majorly new.
- Rust isn't as nice of community as people think: subjective maybe? People are nice to me at conferences and in discussion rooms. There's occasional drama here and there but overall it's been pretty quiet for the past year.
- Async is problematic: Async Rust really is fine. There's a huge meme about how bad it is, but really, it's fine. As a framework author, it's great, actually. I can wrap futures in a custom Poll. I can drive executors from a window event loop. Tokio's default choice of making `spawn` take Send/Sync futures is an odd one - occasionally cryptic compile errors - but you don't need to use that default.
I'm unsure why this article is so upvoted given how vapid the content is, but it does have a snappy title, I guess.
Maybe not yet, but it is heading in that direction; I only say this because of the absolutely giant pile of features that seem to be stuck in unstable, but that I hope will eventually make their way to stable at some point.
> Async Rust really is fine
I dunno. I always thought it was too complicated, but as another person pointed out, avoiding `tokio::spawn` solves many issues (you said this too, I think). So maybe it's not Rust's fault :D
It's definitely getting more complex, but C++ has a huge lead haha. C++ is like a fractal in that you can look at almost any feature closer and closer to reveal more and more complexity, and there are a lot of features... Here's a page on just one dark corner of the language: https://isocpp.org/wiki/faq/pointers-to-members and it interacts with all the other corners (like virtual vs non-virtual inheritance) in fun and exciting ways...
Also, there are far more ways to cause UB in C++. Rust has a big lead on formalizing what constitutes UB, and even those rules you only need to learn if you are using "unsafe", whilst in C++ you don't have that luxury.
As well as lots of Undefined Behaviour, C++ also has what its own experts call "false positives for the question: is this a C++ program?", the Ill-Formed, No Diagnostic Required features. Nothing like these exists in Rust. They're cases where you can write what appears to be C++, but although there is no error or warning from the compiler, your entire program has no meaning and might do absolutely anything from the outset. I've seen guesses that most or even all non-trivial C++ invokes IFNDR. So that's categorically worse than Undefined Behaviour.
Finally, C++ has cases where the standard just chooses not to explain how something works, because doing so would mean actually deciding, and that's controversial. So all C++ where this matters also has no defined meaning, and there is no way for you to discover what happens except to read the machine code emitted by your compiler, which entirely misses the point of a high-level programming language.
One of the things happening in Rust's stabilization process is solving those tough issues, for example Aria's "Strict Provenance experiment" is likely being stabilized, formally granting Rust a pointer provenance model, something C++ does not have and C23 had to fork into a separate technical document to study.
Now, let me explain why I felt that way. First and foremost, the phrase "undefined behavior" only applies to C and C++ because the specifications of those languages define it. The statement that Rust has no UB does not make sense because Rust has no specification, and all behavior is defined by the default implementation.
For example, C/C++ specifications state that using a pointer after calling "free()" on it is UB. But an implementation can make it well-defined by adding a GC and making "free()" a no-op. Hence, memory safety is entirely orthogonal to UB.
Another example: signed overflow being UB is not a memory safety problem unless the value is used for array indexing. Also, it is possible to enable bounds checking in STL containers (like _GLIBCXX_ASSERTIONS).
It seems that a lot of Rust fans read John Regehr's posts and use "undefined behavior" as a boogeyman to throw shade at C/C++. They repeat the same points ad nauseam. It also helps that the phrase "undefined behavior" evokes strong emotions (e.g., "nasal demons"). I see the parent commenter doing this frequently and sometimes[1] even in the C++ subreddit (of all places!). How is this not obnoxious?
Here[2] is another person doing the same, but in a spicier tone. Linked lists and graphs are safe if you have an isoheap allocator (look at Fil-C).
You can say that it is moral to endlessly reiterate the problems of unsafe languages, because it could lead to more secure software. But see the reply to my other comment by "hyperbrainer"[3] which says that Rust is "completely" memory safe, which is entirely wrong[4]. It is hard not to suspect the motives of those who claim to be concerned about memory safety.
[1] - https://old.reddit.com/r/cpp/comments/1fu0y6n/when_a_backgro...
[2] - https://news.ycombinator.com/item?id=32121622
[3] - I am unable to reply because of the depth.
[4] - Rust requires unsafe to do a lot of things which can be done in safe code in a GC'd language. Thus, unsafe is more common in Rust than in most GC'd languages. If a segfault can literally kill a person, it is absolutely immoral to choose Rust over Java (it does not matter that Rust "feels" safer than Java).
Your point [4] is very silly, because you're assuming that while the unsafe code implementing a safe Rust interface might be flawed, the code implementing a safe Java interface, such as its garbage collector (which will often be C++), cannot be. As we'd expect, both these components are occasionally defective, having been made by error-prone humans; such flaws are neither impossible nor common in either system. There are indeed even safer choices, and I've recommended them, but they're not garbage collected.
> First and foremost, the phrase "undefined behavior" only applies to C and C++ because the specifications of those languages define it.
Nope, those words have an ordinary meaning and are indeed used by Rust's own documentation, for example the Rustonomicon says at one point early on, "No matter what, Safe Rust can't cause Undefined Behavior". The purpose there is definitional, it's not a boast about how awesome Rust is, it's a claim that if there is Undefined Behaviour that's not because of the safe Rust, there's a soundness problem somewhere else.
> Another example: signed overflow being UB is not a memory safety problem unless the value is used for array indexing
This is wrong. Because Signed Overflow is UB the C++ compiler is allowed to just assume it will never happen, regardless of the systemic consequences. What that means is that other IR transformations will always be legal even if they wouldn't have been legal for any possible result of the overflow. This can and does destroy memory safety. Actually it would be weird if somehow the IR transformations always preserved memory safety, something they know nothing about, despite changing what the code does.
It is in the reference.
https://doc.rust-lang.org/reference/behavior-considered-unde...
I don't think it was claimed that Rust has no UB in this conversation, only IFNDR.
From what I can tell, Rust does document a set of "behavior considered undefined" like using unsafe to access misaligned pointers. For practical concerns ("could code optimization change these semantics?", "is this guaranteed to work the same on future compiler versions?") it seems reasonable to me to call that undefined behavior, and to say that Rust doesn't have much of it.
> I see the parent commenter doing this frequently and sometimes[1] even in the C++ subreddit (of all the places!). How is this not obnoxious?
Both their comment here and their reddit comment look fine to me. Something like "C++ sucks, switch to Rust!" would be annoying, but specific relevant technical comparisons ("In Rust for comparison the static growable array V isn't dropped when the main thread exits [...]") seem constructive.
> Rust requires unsafe to do a lot of things which can be done in safe code in a GC'd language. Thus, unsafe is pretty common in Rust than most GC'd languages. If a segfault can literally kill a person, it is absolutely immoral to choose Rust over Java (it does not matter that Rust "feels" safer than Java).
Java does technically have the `Unsafe` class for low-level unsafe operations, and JNI to interoperate with C/C++/assembly.
I'd expect that the average Rust program makes more use of unsafe, but largely just because the average Rust program is lower-level (including, increasingly, parts of the Linux and Windows kernels). It's unclear to me whether the same program written in Java or Rust would ultimately prevent more bugs.
There are at least three classes of definedness of behavior:
1. The behavior of a program is defined by a spec.
2. The behavior of a program is not formally defined by a spec, either because the language has no spec or because it's imprecise, but it's defined in a sociological sense: that is, if a compiler doesn't follow the apparent meaning of the program, the people who develop the compiler will consider it a bug and you can report it to mailing lists or GitHub and probably get support.
3. The behavior is definitely undefined: a compiler can do anything in response to the program and the developers of the compiler will not consider it a bug.
C++ has a lot of 1, comparatively not a lot of 2, and a lot of 3.
Rust has none of 1, a lot of 2, and a lot of 3. But safe Rust has very little of 3.
I haven't measured but it's easy to say categorically that it's not "all" unless somehow my posts about network protocols, aeroplanes, security and psychology among others fall into this vague category.
And yes, like Ignaz Semmelweis I can see an obvious improvement to how my profession does what it does and it's infuriating that the response from many other practitioners is "No, I don't like change, therefore you're crazy for explaining why I should change"
Ignaz Semmelweis died in an asylum. But on the other hand while Ignaz was correct and his proposals would have worked he couldn't explain why because germ theory was only confirmed after he died. Rust isn't in that situation, we know already exactly what the problems are with C++. So that means I can tell you not just that using C++ is a bad idea, but why it's a bad idea.
> practitioners is "No, I don't like change, therefore you're crazy for explaining why I should change"
Who exactly are you referring to here? Your co-workers? LLVM maintainers? or the Linux kernel developers? Please be more precise.
Most of the popular garbage-collected languages of course also have a way to escape, in some cases via an "unsafe" keyword or a magic unsafe package, into a language where the same safety rules do not exist. In this sense, the difference in Rust is that it's the same language.
I'd actually say the more memory safe option would be a language like WUFFS where it pays a high price (generality) to deliver categorically better safety and performance. Most software could not be written in WUFFS but also most of the software which could be written in WUFFS isn't.
About 95% of the unstable features lift limitations that most people expect not to be there in the first place. I'm not aware of many that aren't like that.
When people say that Rust is complex, they often neglect to differentiate between implementation complexity and developer facing complexity. The implementation complexity is growing in part to support the end user simplicity. I also don't understand why anyone feels the need to know every feature of the language. You can just learn about and use the features that you need.
The majority exist in a community and have to collaborate with others. They have to deal with the code written by others, code which may use any language feature.
Every developer doing serious work will trip over every available language feature eventually.
Steve Klabnik:
“Just to provide another perspective: if you can write the programs you want to write, then all is good. You don't have to use every single tool in the standard library.
I co-authored the Rust book. I have twelve years experience writing Rust code, and just over thirty years of experience writing software. I have written a macro_rules macro exactly one time, and that was 95% taking someone else's macro and modifying it. I have written one proc macro. I have used Box::leak once. I have never used Arc::downgrade. I've used Cow a handful of times.
Don't stress yourself out. You're doing fine.”
https://www.reddit.com/r/rust/comments/1fofg43/i_am_struggli...
I have little doubt that Rust will end up being as complicated as C++ eventually, but a big difference is how explicit and well documented the discussion of new features is.
The Rust RFCs provide a ton of context for almost every feature of the language. I find that historical context extremely helpful when trying to figure out why something is the way that it is.
There may be something like that for C++, but I feel like a lot of it is "you had to be there" kind of reasons.
Searching for “Rust RFCs” reveals a git repo with thousands of markdown files describing features and their motivations with links to discussions.
Same applies to Ada, C, Modula-2, Pascal, Fortran, Algol, Cobol,.....
It's exactly why I feel more comfortable with complexity creep in Rust as opposed to C++, since I can easily find and read the rationale for just about every feature.
Some of the public C++ stuff:
https://www.open-std.org/jtc1/sc22/wg21
Some of the public C stuff:
https://www.open-std.org/jtc1/sc22/wg14/
Some of the public Ada stuff:
https://www.open-std.org/JTC1/SC22/WG9/
And so on.
Those "in the know" have access to the stuff that is beyond that.
However that is already plenty of stuff publicly available at those https://www.open-std.org subsites as well.
> Those "in the know" have access to the stuff that is beyond that.
Yeah “you have to be there”
But these are all great!
It’s a shame they’re not as discoverable.
The point is that dealing with the Rust borrow checker is a huge pain in the ass and for most Rust applications you would have been better off just using a garbage collected language.
Maybe if you structure your code weirdly? I haven't encountered a major borrow checker issue that I couldn't easily resolve in many years.
“You are just not holding it right.”
The Rust borrow checker does indeed force you to make contortions to keep it happy, and will bite you if you fail to take its particularities into account. It's all fine and proper if you think the trade-off regarding safety is worth it (and I think it is in some cases), but pretending that's not the case is just intentionally deluding yourself.
The people here implying that the BC forces you to use a good architecture are also deluding themselves by the way. It forces you to use an architecture that suits the limitations of the borrow checker. That’s pretty much it.
The fact that such delusions are so prevalent amongst part of the community is from my perspective the worst part of using Rust. The language itself is very much fine.
> Rust is definitely hard, but after a while it's fine and you kind of get how even if using another language you still have to think about memory.
That kind of comment is a typical example. It's not that you have to think about memory. You have to understand the exact limitations of the analyser Rust uses to guarantee memory safety. I don't understand why that's so hard to accept for some parts of the Rust community.
And as the other commenter said, the borrow checker isn't all that hard to satisfy. BC complaints are often related to serious memory handling bugs. So if you know how to solve such bugs (which you need to know with C or C++ anyway), BC won't frustrate you. You may occasionally face some issues that are hard to solve under the constraints of the BC. But you can handle them by moving the safety checks from compile-time to runtime (using RefCell, Mutex, etc) or handle it manually (using unsafe) if you know what you're doing.
Like the other commenter, I find some of the complaints about programming friction and refactor to be exaggerated very often. That's unfair towards Rust in that it hurts its adoption.
The problem is most of the important problems you deal with while programming require heap allocations: i.e. a lot of Rust advice is liable to lead you astray trying to find over-complicated solutions to optimizations you probably don't need up front.
So in terms of systems programming, Rust is technically good here - these are all things you'd like to do on low level code. On the other hand if you're making a bunch of web requests and manipulating some big infrequently used data in memory...Box'ing everything with Arc is probably exactly what you should do, but everyone will tell you to try not to do it (and the issue is, if you're like me, you're coding in the "figure out what and how to do it" phase not the "I have a design I will implement optimally" phase).
If you come into Rust thinking you're going to write doubly-linked lists all day and want to structure everything like that, you're going to have a bad time.
But then in python you run into stuff like:
```python
def func(list=[]):
    list.append(1)  # the default [] is created once and shared across calls
```

and `list` is actually a singleton. You want to pull your hair out, since this is practically impossible to hunt down in a big codebase.
Rust is just different, and instead of writing double-pointer code, you just use flat structures, `Copy` keys, and `loop {}` and move on with your life.
Because in both forums I keep coming back to edits, and it takes forever to edit some of the things, manually. I feel like I'm being stupid or the UX of all of that is just so terrible.
I use Firefox + Tridactyl + the native extension, so with the cursor in any text field I can hit Ctrl+i and it pops up a gvim window with the contents of that text field. When you save+quit, it copies the contents back into the field.
So glad someone figured out how to do this again once Vimperator died.
sed 's/^/  /'
1. Copy code to clipboard.
2. From a shell prompt on macOS,
pbpaste | sed 's/^/  /' | pbcopy
Linux (Wayland): wl-paste | sed 's/^/  /' | wl-copy
Linux (X11): xclip -o -se c | sed 's/^/  /' | xclip -se c
Windows (PowerShell): Get-Clipboard | % { $_ -replace '^','  ' } | Set-Clipboard
(HN renders text indented by two or more spaces as code.)
3. Paste into HN.

I've been typing out "-selection clipboard" this whole time!
use linters, they keep getting smarter
As for the example... yeah, Python is pretty terrible for writing production codebases (I think it's a great language for short-lived, one-person projects). It's interesting that you mention Python, because if you're considering Python and Rust for the same use case, that's pretty bonkers: for anything you might have used Python for, there are many more natural choices than Rust. If you wouldn't have done it in C/C++ ten years ago, you probably shouldn't be doing it in Rust today.
I want to eventually join the "50 engines for every game" race that is Rust game engine development, but I'm sure not going to have the fast-iteration part of design be done in Rust. The renderer and all the managers should be absolutely solid, but some parts of games need you to break stuff quickly.
Even more importantly than this, Rust has a major emphasis on backwards compatibility. The author mentions a "hamster wheel" of endless libraries, but, in Rust, nothing's forcing you to switch to a newer library, even if an old one is no longer maintained.
In general, the complexity of your project is completely up to you, and (at least to me) it seems like a lot of the new features (e.g. generator syntax) are trending towards simplicity rather than complexity.
rust, sqlite, htmx... there is a small list of techs that always get massively upvoted on hn, whatever the content or quality of the article.
IMO, it's ironic to see Rust proponents using his quote in defense of Rust (and not crediting him).
It's a numbers game. As the number of people using Rust grows, so does the number of Jerks using Rust. And it's not like the Rust community is a stranger to bullying maintainers of crates for various things.
> Async is problematic: Async Rust really is fine.
It's... OK. It has a few issues that hopefully will get fixed, like making `Pin` a kind of reference instead of a generic struct: e.g., instead of `Pin<&str>` you would write `&pin str`.
There is also the coloring problem which is quite a hassle and people are looking into possible solutions.
> - Rust inserts Copy, Drop, Deref for you: it would be really annoying to write Rust if you had to call `.copy()` on every bool/int/char. A language like this exists, I'm sure, but this hasn't stopped Rust from taking off
One improvement here could be the IDE. I don't want to write `let s: String` every time but the IDE (neovim LSP) does show it. It'd be good if I can get the full signature too.
> Async is problematic
Async Rust is by far the best async out there. Now when I use other languages I'm constantly wondering what the hell is going on: Is there an executor? Is there a separate thread for this? How and when is this getting executed? Async Rust doesn't execute anything by itself, and as a result you can follow the flow of your program (as well as pick an executor of your choice; that might not seem important if you're on the Tokio bandwagon, but if you're in an environment where you need a custom executor and control over CPU threads, async Rust suddenly makes a lot of sense).
Well, C++ does the same by default; you need to opt in for deep copies. C++ doesn't drop by default, but modern practices like smart pointers do.
>I'm unsure why this article is so upvoted given how vapid the content is, but it does have a snappy title, I guess.
Even HN isn't immune to the 90-9-1 rule.
As far as I know, the issue with the panics is that things panic a lot. Times when C or C++ will limp along in a degraded state and log something for you to look at will cause your Rust program to crash. That turns things that are not problems into things that are problems.
Are you claiming this from direct experience, or do you have some data to back it up? Apparently the rollback rate for Rust code in Android is less than half that of C++. [1]
1. https://security.googleblog.com/2024/09/eliminating-memory-s...
By the way, the trade you're talking about is great for desktop software (especially for browsers), but server-side software at scale is a bit different.
The borrow checker and all the Rust safety stuff is also completely orthogonal to most forms of testing. You don't get to do any less because your language protects you against a specific class of memory-related errors.
Inasmuch as I am aware, the correct usage of panic is “there is no way to recover from this without leaving the application in an undefined state”.
Not “a file I expected to exist wasn’t there” or “the user gave bad input” or “I didn’t implement this feature yet”.
But more like “a cosmic ray must have struck the machine or something, because this ought to be logically impossible.”
Or pretty much, if you literally don’t see a mechanism in Rust that can pass an error upwards and the program also cannot continue execution without the result being affected, then you panic.
That’s a little stricter than what I understand the official guidance is, but not much.
If you have something panicking it should be less “I can’t see what’s going on” and more “thank god it stopped itself so it didn’t write bogus data to production.”
Is it possible to disable this behavior? I think it might be useful as a learning tool to familiarize myself with the Traits.
Let's not forget people tend to compare the complexity of a 40-year-old language, hampered by backwards compatibility and large-scale industry deployment, with one that is around 10 years old, with lots of features still only available on nightly.
The Unstable Book has an endless list of features that might land on Rust.
I only see about 30 or so that are actual proper language additions, some of which are just exploration without even an RFC, leaving us with about 15 or so, which really isn't that bad.
Also, any language designer knows that every feature has a combinatorial cost due to the way it interacts with everything already in the language; that is why innovation tokens are a thing in language design.
The thing about Rust abstractions is that they're a lot more useful and forgiving than C++'s.
E.g., in Rust, I cannot accidentally use an `Option` incorrectly and have the program compile. When it fails to compile, there's a good chance the compiler will suggest how I could do what I wanted to do.
In C++, if I dereference an optional without checking it, I’ve triggered “undefined behavior”. The program will probably just segfault, but it could also return a bogus object of uninitialized memory, but technically it could overwrite my boot sector or call a SWAT team to raid my house, and still be in compliance with the C++ spec.
Thus when considering code written in Rust, I mostly need to just consider the happy path. With C++ I need to pedantically check that they used the constructs correctly, and consider many more runtime paths due to how lax the language is.
If I see someone dereference an optional without an if-guard, I now need to backtrack through the logic of that entire function to make sure the program doesn’t crash. If I see someone use a destructured value from an Option in Rust, I can rest easy that unless they used unwrap() somewhere, the compiler has done that for me.
This scales well for larger abstractions, because if I’m not actively digging into some code I can treat it more as a black box where it interacts with the code I am working with, than as a box of firecrackers that might explode if I do something unexpected with it.
Which by the way is a good point: even Rust needs its clippy, so not everything is so perfectly designed as to make it superfluous.
Aaaand you’ve lost me.
I don’t want to waste my time either setting up multiple linters or having to drill down into the pros and cons of each. If the C++ community cannot even reach a consensus on which linter it endorses, I imagine it can’t reach a consensus on what it lints, which involves even more decisions.
Secondly, both times I’ve tried to roll out or use a linter, I’ve encountered passive or active resistance from the other developers on the team.
This resistance went deeper than the linter. On one team they didn't want to use new language constructs from the last decade; on the other, they explicitly complained about me doing things differently than 15 years ago. In both cases they rejected what I understood to be the C++ Core Guidelines in favor of writing their own codebase-specific coding guidelines, so they could pick and choose the constructs they understood rather than trying to adhere to what might be idiomatic for a particular standard version.
Unless something is 100% endorsed by the C++ community, it’s absolutely not something that I’m even going to try to champion. I’ve already been flat-out told “no one cares about your opinion” trying to explain how type-safety in C++ can improve readability in code reviews, which I thought was completely noncontroversial.
To your second point, the point of linters is to guide code to be more idiomatic; it’s not an issue of language design, but of educating humans in mostly non-functional readability and best practices.
And most of C++’s language complexity doesn’t come from “large scale industry deployment”, it comes from implementing a feature in a half-assed way, then updating it, but then the old feature needs to be kept around so all the libraries need to deal with two ways of doing things. Then something new and better comes along, and it needs to deal with three ways of doing things.
Meanwhile, developers get frustrated with how difficult the abstractions are to use, and end up carving out their own codebase-specific coding standards.
On Rust’s side, there’s 10x the emphasis on making things easy to use, so developers converge to consensus on pretty much the same modern style.
Just look at project management. Before I even write a project, with C++ I’m hit with choosing between a barrage of build systems and package managers, none of them particularly good. Will I use cmake and Conan? Then I’m stuck writing several lines of boilerplate before I even get started in a weird non-imperative language.
In Rust, I type cargo init and I’m ready to go.
C++ has basically completely fallen down when it comes to language design. From what I've seen, it is simultaneously in denial that things are so horrible for its end users (Bjarne Stroustrup iirc putting out an essay where he claimed "C++ is fine for any project I'm concerned about"), suddenly rushing to (badly) copy features from Rust, and only recently coming to the realization that it really needs to abandon some of its fundamental precepts to stop self-sabotaging by carrying around massive amounts of baggage that nobody should be using anymore.
Meanwhile even the White House and other government agencies are saying “please use anything but C or C++”. Because ultimately, no one writes anything close to modern C++, and even modern C++’s memory safety guarantees are painfully minimal in exchange for massive amounts of code complexity (you still have to track every pointer lifetime yourself, and every safety abstraction is opt-in, so you still have to have the expertise to not cut yourself, and every codebase is unique and different in its conventions, which precludes running any kind of static analysis that could rival Rust without significant time investment).
It’s just..really bad. The only way Rust will get there is if it falls prey to the same feature hoarding as C++
But Rust already has a deprecation-and-removal process for features, as well as an edition system to provide backwards compatibility for old code, and standardized tooling for linting that’s 10x better at telling you how to refactor than anything I’ve seen with C++.
And god help you if you have an error in your code: the C++ compiler will probably dump you out with a dozen irrelevant errors of dense template code you need to skip over, while with Rust a mistake with lifetimes will generate a text visualization of what you did wrong, with the compiler pointing at what you need to change. Plus the Rust program will probably just run on the first or second try, whereas the C++ program will segfault for the next ten minutes or so, because all the strictness in Rust means the complexity is more meaningful and less performative.
Look at what people are learning in schools today and I bet they’re still starting with new and delete or even malloc and free rather than things like std::make_unique and std::span.
The whole C++ ecosystem is basically predicated on nearly everyone having incomplete knowledge and doing something different and having to support or account for infinite combinations of features, whereas Rust has a higher barrier to entry but you can presume that pretty much everybody is using an idiomatic set of constructs for a given edition.
Anyway, sorry for beating that over your head. I started with C++ around the time of Visual C++ 6, and to me it’s absolutely shameful at how bad the language has gotten. Whole ecosystems (C#, Java) and subsequent ecosystems (Go, Rust) have arisen in response to how bad people have it programming in C++, and despite two decades of people running away from C++ to create their own general-purpose programming language, so many proponents of the language seem to still be in denial. They’ve simply shifted the goalposts from C++ being the general-purpose programming language to it being a “systems” language to rationalize why the vast majority of developers don’t use it anymore because it refused to evolve.
I see people these days comparing it to COBOL, that is, it’s not that anyone wants to use it for the merits of its language design, it simply has incumbency status that means it will be around for a long time.
Let's see how complex Rust turns out to be if it is still around after 30 years, to actually have a fair comparison.
We can also compare Rust in 2024 with the equivalent C++ version at 10 years old, when C++98 was still around the corner and C++ARM was the actual reference; in that case the complexity fares pretty equally across both languages.
As for safety, as someone who is a hardcore believer in systems programming languages with automatic resource management, I would rather see C++ folks embrace security than rewrite the world from scratch.
After all, Rust is in no rush to drop the hard dependency on LLVM and GCC infrastructure.
If I use a header file, as any pre-C++-20 library will (have the major compilers implemented modules yet?), I am SOL. I am specifying a text-import of that library’s code into my code. You’d need an “extern C++11”.
As for comparing them at 10 years old, apparent language size might be similar, but in terms of program complexity C++ would be DOA.
You’re telling me it takes an equal amount of time to learn these languages, but with Rust I can write code that works on the first try, while with C++ I have to account for data races and memory mistakes at every level of my program? Why do I, a 90s programmer dealing with OSes without process separation and soon the dotcom boom, want to touch that with a 10-foot pole?
Java and C# would not exist. There’d be far too little value proposition with an alternative to C++’s memory-unsafety to justify the development of a whole new language.
You’d probably see the equivalent of Python and JavaScript (probably named RustScript following the logic of the time). There’d probably be a Go equivalent developed, ie “language that compiles fast and runs almost as fast as Rust that stresses language simplicity”. Language expressiveness and simplicity are at odds with each other and there are uses of both.
To be fair, Rust was developed with the last 30 years of programming in mind. But the thing is, memory safety kept on being a central issue of the languages that followed.
The next big design issue will probably have more to do with people trying to use LLMs as a first-class programming language. Eg something that’s easy for LLMs to write and humans to read.
Or something to do with heterogenous computing resources all sharing the same “program”. However here Rust seems already positioned to do well with its compile-time tracking of asynchronous resource dependencies between threads of computation, and procedural macros that can invoke an external compiler.
So I’m not sure that conventional language design is going to change the path it’s been on for the last 30 years until the human side of that interface starts to significantly change.
Most of the language design considerations we’re discussing boil down to “make things manageable to humans with limited memory”. If cybernetic augmentation or bioengineering sharply expands those limits, I suppose it could change the direction. Otherwise it feels like things are going to naturally cluster around “complex correct thing” and “simple iterable thing” because those are the two strategies humans can use to deal with complexity beyond the scope they can keep in their head at once.
> Tokio's default choice of making `spawn` take Send/Sync futures
... combined with lack of structured concurrency.
This means async tasks look like threads in every respect, causing you to end up using Arc<> and other concurrency constructs all over the place where they ought not be necessary. This harms efficiency and adds verbosity.
I think the loglog article is a much better, nuanced, full critique of Rust.
https://loglog.games/blog/leaving-rust-gamedev/
The internet is just so full of negativity these days. People upvote titles but don't read articles. Reading about people's views on subjects is useful, but I don't think this one is.
Even comparatively, next to your own comment? I have no specific idea of what you object to or why, but I have learned that you are upset.
Rust does need a better way to do backlinks. You can do it with Rc, RefCell, and Weak, but it involves run-time borrow checks that should never fail. Those should be checked at compile time. Detecting a double borrow is the same problem as detecting a double lock of a mutex by one thread, which is being worked on.
Because it’s impossible to implement any non-trivial data structures in safe Rust. Even Vec has unsafe code in the implementation to allocate heap memory. When you need efficient trees or graphs (I doubt any non-trivial software doesn’t need at least one of them), unsafe code is the only reasonable choice.
C++ does pretty much the same under the hood, but that’s OK because the entire language is unsafe.
C# has an unsafe subset of the language with more features, just like Rust. However, it runs inside a memory-safe, garbage-collected runtime. Even the List and Dictionary data structures from the standard library are implemented with the safe subset of the language. Efficient trees and graphs are also trivial to implement in safe C#, thanks to the GC.
To name one example, the AnimationGraph in Bevy is implemented with petgraph, which is built using adjacency lists, and doesn't use any unsafe code in any of the parts that we use. It is very high-performance, as animation evaluation has to be.
Are you sure evaluating these animations is performance critical? I doubt games have enough data to saturate a CPU core doing that. Screens only have 2-8 megapixels; animated objects need to be much larger than 1 pixel.
If you animate bones for skeletal animation that’s still not much data to compute because real life people have less than 256 bones. You don’t need much more than that even if your models have fancy manually-animated clothes.
Isn't this obviously true? A key part of UI work is avoiding "jank", which commonly refers to skipped frames.
> I doubt games have enough data to saturate a CPU core doing that.
Got a bit lost here: games?
> Screens only have 2-8 megapixels.
4 bytes per pixel, 32 MB/frame. 120 frames / sec = 8 ms/frame. 3.84 GB/second.
> animated objects need to be much larger than 1 pixel.
Got lost again here.
In general, I'm lost.
First, there's a weak claim that all performant data structures in Rust must use unsafe code.
I don't think the author meant all performant data structures must use unsafe code.
I assume they meant "a Rust data structure with unsafe code will outperform an equivalent Rust data structure with only safe code"
Then, someone mentions a 3D renderer, written in Rust, is using a data structure with only safe code.
I don't understand how questioning if its truly performant, then arguing rendering 3D isn't that hard, is relevant.
To an extent, sure, but we’re talking about low-level micro-optimizations. Games don’t animate individual pixels. I don’t think animating 1000 things per frame is going to saturate a CPU core doing these computations, which means the code doing that is not actually performance critical.
> Got a bit lost here: games?
I searched the internets for “Bevy Engine” and found this web site https://bevyengine.org/ which says “game engine”. I wonder is there another Bevy unrelated to games?
> 3.84 GB/second
In modern games none of that bandwidth is processed on CPU. Games use GPU for that, which don’t run Rust.
> there's a weak claim that all performant data structures in Rust must use unsafe code
Weak claim? Look at the source code of the data structures implemented by the Rust standard library. You will find unsafe code everywhere. When you need custom data structures instead of merely using the standard ones, you will have to do the same, because safe Rust is fundamentally limited in that regard.
This is survivorship bias: one of the criteria back in the day for “should this go in the standard library” was “is it a data structure that uses a lot of unsafe?” because it was understood that the folks in the project would understand unsafe Rust better than your average Rust programmer. These days, that isn’t as true anymore, but back then, things were different.
Oh, my sweet summer child. :)
> In modern games none of that bandwidth is processed on CPU. Games use GPU for that, which don’t run Rust.
So is your claim that OP is making up stuff about running code on the CPU because its a 3D engine?
Also, why mention megapixels if you think it's irrelevant? :)
> Weak claim?
"_all_ performant data structures in Rust _must_ use unsafe code" is a long tail reading of the original comment. If that was the intent, it is a weak claim, because we can observe many memory-safe languages and runtimes and have performant data structures. (minecraft was written in Java, years and years ago!)
> Look at the source code of data structures implemented by Rust standard library.
This is the motte, which was directly covered in the previous comment.
The bailey was "all performant data structures in Rust must use unsafe code".
Here, the motte, steelmanning as strongly as possible, is "data structures with unsafe code are more performant than ones without", which was directly said in the comment you are replying to.
In addition to the swapping, this is a picture-perfect replication of the bomber with holes on it meme, as the other reply notes.
I’m not sure CPU-evaluated animations are often critical performance bottlenecks of game engines.
> why mention megapixels if you think it's irrelevant?
Count of screen-space pixels is a hard upper limit of count of simultaneously visible things on the screen. Animating occluded meshes is pointless.
> many memory-safe languages and runtimes and have performant data structures
Indeed, because the people who designed these memory-safe languages wanted to support arbitrarily complicated data structures implemented using the safe subset of those languages.
Safe Rust supports arbitrarily complicated code, but requires unsafe to implement most non-trivial data structures.
It’s possible to hide unsafe code deep inside the standard library, and possible to implement safe APIs over unsafe implementations, but still, for people who actually need to create custom data structures (as opposed to consuming libraries implemented by someone else), none of that is relevant.
Beyond that, you only need unsafe code for specific kinds of data structures. At least in my experience.
This might be fine for code which consumes data structures implemented by other people. The approach is not good when you actually need to implement data structures in your program.
In the modern world this is especially bad for a low-level language (marketed as high performance, BTW), because the gap between memory latency and compute performance is now huge. You do need efficient data structures, which often implies developing custom ones for specific use cases. This is required to saturate CPU cores instead of stalling on RAM latency while chasing pointers, or on the cache coherency protocol while incrementing/decrementing reference counters from multiple CPU cores.
Interestingly, neither C++ nor C# has that boundary, for different reasons: C++ is completely unsafe, and safe C# supports almost all data structures (except really weird ones like XOR linked lists) because GC.
You are assuming you can't do this with those vecs/maps. But you can! That's what the "additional semantics" are.
They will be slightly slower due to often using indexes instead of raw pointers, which requires a bounds check and an addition to get the pointer, and sometimes a reallocation, but they won't be that slow. Surely they will be faster than C#, which you claim can implement those same data structures efficiently. You also often get the benefit of better cache locality due to packing everything together, meaning it could even be faster.
This is not a fundamental requirement, though. Assuming arena-like behavior, the index will be constructed/provided by the arena itself, so the bounds check can be safely elided by construction. Reallocation cost of the entire arena could be expensive, but if that is a cost you'd want to amortize down to a new allocation, the arena could be implemented as an extensible list of non-growable arenas: every time an arena is full, you append another. This can be an issue if you don't keep track of deletions/tombstones or can't afford a compaction step to keep memory usage down, but in practice having all of these requirements at once is not that common.
If you need an array for your custom data structure, a standard library vector is almost always good enough. Associative arrays are a bit more tricky, but you should be able to find a handful of map implementations that cover most of your needs. And when you need a custom one, you can often implement it on top of the standard library vector.
When I’m happy with the level of performance delivered by idiomatic C++ and standard collections, I tend to avoid C++ altogether, because I am also proficient with C#, which is even faster to write and debug.
But sometimes I want more performance. An example from my day job is a multi-step numerical simulation which needs to handle grids of 200M nodes. When processing that amount of data, standard collections are suboptimal. I’m not using std::vector because I don’t need the buffers to grow, and I want to allocate these huge buffers bypassing the C heap, i.e. page-aligned blocks of memory zero-initialized by the OS.
A simple use case is a 2D array where rows are padded to be multiples of 32 bytes (size of AVX SIMD vectors, saves a lot of implementation complexity because no need to handle remainders) or 64 bytes (saves a tiny bit of performance when parallelizing, guarantees cache lines aren’t shared between rows).
When the element size is not a power of 2, it’s impossible to implement that RAM layout on top of an std::vector.
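For the easy case (a power-of-two element size like f32), the row-padding trick described above can be sketched over a flat buffer; the non-power-of-two layouts and the page-aligned allocation are exactly what fall outside the standard containers (names here are illustrative, and note that a plain Vec only guarantees element alignment, not 32-byte alignment of the buffer start):

```rust
// Row-padded 2D array of f32: each row's stride is rounded up to a
// multiple of 8 elements (8 * 4 bytes = 32 bytes, one AVX vector),
// so SIMD loops never need a scalar remainder pass.
struct Padded2D {
    data: Vec<f32>,
    stride: usize, // elements per row, including padding
    cols: usize,
}

impl Padded2D {
    fn new(rows: usize, cols: usize) -> Self {
        const LANES: usize = 8; // 32-byte AVX register / 4-byte f32
        let stride = (cols + LANES - 1) / LANES * LANES;
        Padded2D { data: vec![0.0; rows * stride], stride, cols }
    }

    // A row view: the padding elements are never handed out.
    fn row(&self, r: usize) -> &[f32] {
        &self.data[r * self.stride..r * self.stride + self.cols]
    }
}

fn main() {
    let grid = Padded2D::new(3, 10); // 10 columns pad out to a stride of 16
    assert_eq!(grid.stride, 16);
    assert_eq!(grid.row(2).len(), 10);
    println!("stride = {}", grid.stride);
}
```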
I also had one of those, in an application that created and deleted a large number of fixed-size arrays across many threads. A naive implementation using glibc malloc ended up with a massive memory leak caused by hundreds/thousands of fragmented arenas. A thread could usually not reuse the memory it had just freed, because it was using a different arena. And because the arena was not empty, it was not possible to unmap the memory.
I don't get it - why are we ignoring the fact that C# necessarily implies distributing CLR? NativeAOT doesn't work for everything.
There are plenty of use cases where NativeAOT works perfectly fine, and is getting better with each .NET release.
This text
> When I’m happy with the level of performance delivered by idiomatic C++ and standard collections, I tend to avoid C++ altogether, because I am also proficient with C#, which is even faster to write and debug.
very strongly implies that they're completely interchangeable. They're not. It's as simple as that.
Sometimes it's hard to keep track of whether I work in software or politics.
Not completely, but they are largely interchangeable. C# at its inception was as much inspired by C++ as it was by Java. Since then, it has only further evolved to accommodate far more low-level scenarios and improve their performance.
It provides the kind of capabilities you'd usually expect from C++, so this statement holds true. Calling C exports is one `static extern ...` method away, you can define explicit layout for structs (that satisfy 'unmanaged' constraint), you have fixed arrays in structs, stack buffers, pointers and ability to do raw or abstracted away manual memory management, etc. You can mmap device-shared memory and push data into GPU. You can accept pointers from C or C++ and wrap them into Span<T>s and pass those to most standard library methods.
I'll just ask you this extremely simple question: does C# compile/run/whatever-magic-you-think-it-does on targets that aren't in {x86, x64, ARM64}? Does it target RISC-V? Does it target PTX? Does it target AMDGPU? Are you getting the picture?
Pre-empting the most low-brow dismissal: these are only niche targets if you've never heard of a company called NVIDIA.
I have personally shipped embedded Linux software running on 32 bit ARMv7 SoC written mostly in C#. The product is long in production and we’re happy with the outcome.
> Does it target RISC-V?
Not sure if it’s ready for production, but see that article: https://www.phoronix.com/news/Microsoft-dotNET-RISC-V
> Does it target PTX?
According to nVidia, the answer is yes: https://developer.nvidia.com/blog/hybridizer-csharp/
> Does it target AMDGPU?
Not sure, probably not.
> It's coming from a Samsung engineer, Dong-Heon Jung, who is involved with the .NET platform team and works on it as part of his role at Samsung.
answer: no
> According to nVidia, the answer is yes: https://developer.nvidia.com/blog/hybridizer-csharp/
> Dec 13, 2017
answer: no
> Not sure, probably not.
correct, the answer is no.
again: this isn't politics, this is software, where the details actually matter.
OK, here’s a newer project which does about the same thing i.e. compiles C# to PTX https://ilgpu.net/ BTW, it supports OpenCL backend in addition to CUDA.
> Sept 2023.
you guys just don't get it - there's a reason why CUDA is a dialect of C/C++ and not C# and it's not because the engineers at NVIDIA have just never heard of C#.
Yes, Linaro is doing work for managed runtimes on RISC-V, although it remains questionable how much RISC-V matters outside nerd circles.
Ironic because "via partners" is equivalent to "doesn't matter".
The work is underway: https://github.com/dotnet/runtime/pulls?q=label%3Aarch-riscv
> Nvidia's PTX, AMD
https://ilgpu.net/ and even https://github.com/m4rs-mt/ILGPU/blob/c3af8f368445d8e6443f36...
While not PTX, there's also this project: https://github.com/Sergio0694/ComputeSharp which partially overlaps with what ILGPU offers
Arguably, even C++ itself - you are not using "full" C++ but a special subset that works on top of specific abstraction to compile to GPUs, and I was told that CUDA C++ is considered legacy.
The original context of discussion is performance and perceived issue of "having runtime", which is what my reply is targeted at. In that context, C# provides you the tools and a solution other languages in the class of Java, Go, TS and anything else interpreted just don't have. So you could reasonably replace a project written in C++ that requires assurances provided by C++ with C#, and possibly re-apply all the freed-up developer productivity into further optimizations, but you wouldn't be able to do so with the same degree of confidence with most other originally high-level languages. Another upcoming contender is Swift.
you're wrong - GPU offloading is just window dressing (#defines and CMake) around the compiler itself, which supports almost all of C++; see https://libc.llvm.org/gpu/ which builds libc (which is implemented using C++ in llvm) to amdgpu/ptx/etc.
> C# provides you the tools and a solution other languages in the class of Java, Go, TS
absolutely no one in their right mind would compare these languages to C++
> So you could reasonably replace a project written in C++ that requires assurances provided by C++ with C#
i think people that don't write C++ professionally just don't understand where/how/why C++ is used :shrug:
For that amount of data, both internet bandwidth and disk space are rather cheap these days?
This is why I think Rust is great for application programmers. Honestly, high-performance web servers, games, anything in that realm it's pretty good for.
Low-level systems and high-performance data structures are better implemented with direct memory management, but the consumers of those libraries shouldn't have to carry that same burden (as much as possible).
Granted, I haven't worked with Rust enough to run into issues with other people's unsafe code.
>Because it’s impossible to implement any non-trivial data structures in safe Rust. Even Vec has unsafe code
Hmm... wasn't memory safety the main selling point of Rust? If not the only one. Now a mix of two languages looks even worse than one complex language. Especially taking into account that it could be paired with a safe language from a long list. I don't know what Rust fans are thinking, but from the outside it doesn't look very attractive. Definitely not enough to make a switch. Julia looked better at first, but turned out to be just a graveyard of abandoned academic projects.
The point is that the vast majority of code doesn't have to be unsafe. Empirically, Rust code has far fewer memory safety problems than non-memory-safe languages.
The vast majority of Rust programmers aren't spending their time re-implementing core data structures like Vec, so memory safety (and the fact that library authors can build safe abstractions on top of unsafe code, which is impossible in C or C++) still benefits them.
The fundamental / syntactic promise of Rust is providing mechanisms to handle and encapsulate unsafety such that it is possible to construct a set of libraries that handle the unsafety in designated places. Therefore the rest of the program can be mathematically proven to be safe. Only the unsafe parts can be unsafe.
Coming from a Java or Go or JS or Python angle wouldn't be the same. Those languages don't come with mechanisms to let you make system calls directly or control the precise memory layout of data, which is necessary when one is communicating with hardware or the OS, or just wants an acceptable amount of performance.
In C++, the compiler can literally remove your code if you sum or multiply integers wrong or assume char is signed/unsigned. There is no designated syntax that limits the places where a possible memory overflow error can happen. The design of the language is such that the most trivial oversight can break your program silently and significantly. It is too broad, so it is not possible to create a safe, mathematically proven, and performant subset with the C and C++ syntax. It is possible with Rust. It is like chips that didn't have a hardware mechanism to switch between user and kernel mode, so everything ran on "all programs should behave well and never write to other programs' memory, pinky promise".
Rust doesn't leave this as just a possibility. Its standard library is mostly safe, and one can already write completely safe and useful utilities with it. The purpose of the standard library is to provide you with ways to avoid unsafe as much as possible.
Of course more hardware access or extremely efficient implementations would require unsafe. However again, only the unsafe parts can cause safety bugs. They are much easier to find and debug compared to C++. People write libraries for encapsulating unsafe so there are even less places that use unsafe. If people are out of their C++ habit, reaching for the big unsafe stick way too often, then they are using Rust wrong.
Whatever you do, there will be always a need for people and software that enables a certain hardware mode, multiply matrices fast, allocates a part of display for rendering a window etc. We can encapsulate the critical parts of those operations with unsafe and the rest of the business logic can be safe.
Here’s a C# library for Linux where I’m doing all these things https://github.com/Const-me/Vrmac/blob/master/VrmacVideo/Rea... As you see from that readme, the performance is pretty good too.
Here is an interesting case of an optimization-triggered bug in Rust code I've heard of: https://www.youtube.com/watch?v=hBjQ3HqCfxs
Eh? You can of course do all of that in python. https://docs.python.org/3/library/struct.html
I agree. I rarely ever use unsafe, and only as a last resort. Unsafe code is really not needed to achieve high performance.
> Rust does need a better way to do backlinks. You can do it with Rc, RefCell, and Weak, but it involves run-time borrow checks that should never fail.
I think this will basically turn into provably-correct data structures. Which is possible to do, and I've long thought there should be systems built on top of Rust to allow for proving these correct. But we should be clear that something like Idris is what we're signing up for. Whatever it is, it is assuredly going to be far more complex than the borrow check. We should basically only use such systems for the implementations of core data structures.
That's kind of what I'm thinking. The basic idea is to prove that .upgrade().unwrap() and .borrow() never panic. This isn't all that much harder than what the borrow checker does. If you have the rule that the return value from those functions must stay within the scope where they are created, then what has to be proven is that no two such scopes for the same RefCell overlap. Sometimes this will be easy; sometimes it will be hard. A reasonable first cut would be to check that no such scopes overlap for a specific type, anywhere. That's rather conservative. If you can prove that, you don't need the checking. So it's an optimization.
I hear cargo-geiger is useful for identifying such crates.
> I just do not see why people seem to use "unsafe" so much.
Because it's:
A) fast (branchless access)
B) fast (calling C libs or assembly)
C) fast (SIMD)
D) people think unsafe Rust is easier
Want to write a JSON parser that will gobble up gigabytes per second? Your only way is to remove as many branches as possible and use assembly wherever you can. Doubly so on stable! I guess the same goes for making a "blazingly fast"™ graphical stack.
People that think unsafe is easier, shouldn't be writing unsafe code. Writing unsafe code correctly is like juggling burning chainsaws. Saying that's easier than using chainsaws is moronic at best.
EDIT: Consider the following: if each of your unsafe {} blocks doesn't contain a
// SAFETY:
// Detailed explanation of invariants
one of the chainsaws just cut off your leg.

If that's their line of thought, I don't know why they don't simply use C/C++, or even C# with unsafe blocks. I know you said the same, but I wanted to reiterate it.
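A sketch of what that SAFETY-comment discipline looks like in practice (`sum_every_other` is a made-up example, not from any library):

```rust
// Sum every other element, skipping the bounds check on each access.
fn sum_every_other(xs: &[f32]) -> f32 {
    let mut total = 0.0;
    let mut i = 0;
    while i < xs.len() {
        // SAFETY: `i` is checked against `xs.len()` by the loop
        // condition immediately above, so the index is in bounds.
        total += unsafe { *xs.get_unchecked(i) };
        i += 2;
    }
    total
}

fn main() {
    let v = [1.0, 2.0, 3.0, 4.0, 5.0];
    assert_eq!(sum_every_other(&v), 9.0); // 1 + 3 + 5
    println!("{}", sum_every_other(&v));
}
```

The comment is load-bearing: it names the invariant a reviewer has to re-check before touching the loop condition.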
But yes, I can see a few very specific cases where unsafe access is needed. Emphasis on "few". I think anything past some fundamentals should be at best a late optimizing step after an MVP is established.
I also think, if it's not already there, that crates should be able to identify themselves as "safe" or "unsafe". Same mentality: I'd probably want to rely on safe crates until I need to optimize a section, and then look into blazing-fast but unsafe crates.
It's not clear to me how rustc could detect a dangling backlink in a tree structure at compile time. Seems impossible short of adding proofs to the type system.
Unfortunately it only thinks about unique ownership, an opinionated form of borrowing, and a little bit about shared ownership (Weak, I think, is punted entirely to the library?), and not all other ownership policies can effectively be implemented on top of those.
The usual approach would be:
given two types, P and C (which may be the same type in the case of homogeneous trees), with at least the following fields:
class P:
child1: ChildPointer[C]
class C:
parent: ParentPointer[P]
Then `p.child1 = c` will transparently be transformed into something like: p.child1?.parent = null
c?.parent?.child1 = null # only needed if splice-to-steal is permitted; may need iteration if multiple children
p.child1 = c
p.child1?.parent = p
Note that ChildPointer might come in unique-like or shared-like implementations. ParentPointer[T] is basically Optional[UnopinionatedlyBorrowed[T]].

I have a list of several other ownership policies that people actually want: https://gist.github.com/o11c/dee52f11428b3d70914c4ed5652d43f...
I think that Rust having unsafe is fine as there will be edge cases where the compiler can't quite work out whether some code will work fine or not and where the programmer can vouch for it. That's no problem.
The issue I have with it is that the unsafe block then gets kinda swallowed by the supposedly safe wrappers around it, and there's no clear auditable trail back to it. I find that a bit surprising, as I'd expect there to be some kind of obvious "uses unsafe" declaration (an annotation? I don't know what Rust calls those statements with a number sign and square brackets in front of a subroutine) which would then propagate upwards. A bit like how in Java (the most elegant and refined of all programming languages) you can declare that a function "throws XYZException", which then needs to be propagated up to a point where it can get handled.
Not having such a mechanism feels a bit icky to me. It's like if there's spiders crawling out of your ears it's useful to know that they're coming out of your ears so you don't have to wonder 'are those spiders creeping out of my ears? Or out of my nose? Or out of my eyelids?', which would be a bit inconvenient.
[1] https://doc.rust-lang.org/std/slice/fn.from_raw_parts.html
SIMD seems to be a big one.
Backlinks would be "nice", but they break fundamental assumptions that the borrow checker makes.
> Detecting a double borrow is the same problem as detecting a double lock of a mutex by one thread, which is being worked on.
Is it being worked on using heuristics or formal methods?
I don’t know what fancy things you’re doing with unsafe that you’re seeing it on a daily basis… maybe it’s a you problem
> I predict that tracing garbage collectors will become popular in Rust eventually.
The use of Rc is already very widespread in projects when people don't want to deal with the borrow checker and only want to use the ML-like features of Rust (Sum types, Option, Error etc.)
> Rust has arrived at the complexity of Haskell and C++, each year requiring more knowledge to keep up with the latest and greatest.
I wonder when we will see the rise of Haskell-like LanguageExtensions in Rust. AFAIK, pretty much everybody uses things like GADTs, PolyKinds, OverloadedStrings, etc. The most similar thing I can think of in Rust right now is the python-like decorator application of things like builder macros using Bon.
> Async is highly problematic
Agreed. Tokio is the only reason, I think, anybody is able to use Rust for this stuff.
And the fact that this hasn't caused alarm is kind of an issue.
The problem with that is Reference Counting is WAY slower than good Garbage Collectors on modern CPUs. Reference Counting breaks locality, hammers caches and is every bit as non-deterministic as a garbage collector.
No, it doesn't. If you naively express graphs containing cycles with `Rc` you will leak memory, just like you would with `std::shared_ptr` in C++.
No, but Gc will not resolve the core problem either. The core problem is that rust forbids two mutable pointers into one chunk of memory. If your tree needs backlinks from child nodes to parents, then you are out of luck.
I sometimes wish I could have a mode of Rust where I had to satisfy the lifetime rules but not the at-most-one-mutable-reference-to-an-object rule.
So Rust as the language makes it impossible to do with &-pointers, while the standard library of Rust lets you do it with a combination of Option, Rc, RefCell, but it is really ugly (people above say it is impossible, but I believe it is just ugly in all ways). Like this:
type NodeRef = Rc<RefCell<NodeInner>>;
struct Node { parent: Option<NodeRef>, left: Option<NodeRef>, right: Option<NodeRef> }
So the real type of the `parent` field is Option<Rc<RefCell<NodeInner>>>. I hate it when it comes to that. But the ugliness is not the only issue. Now any attempt to access a parent or child node will go through 2 runtime checks: Option needs to check whether there is Some reference or just None, and RefCell needs to check that the invariant mut^shared will not be broken. And all these checks must be handled, so your code will probably have a lot of unwraps or ? which worsens the ugliness problem.
And yeah, with Rc you need to watch for memory leaks. You need to break all cycles before you allow destructors to run.
If I need to write a tree in rust, I'll use raw-pointers and unsafe, and let allergic to unsafe rustaceans say what they like, I just don't care.
Using SlotMap and integer ids, etc. doesn't, I think, offer any advantage.
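For what it's worth, one way to keep the parent backlink without the leak and cycle-breaking concerns is to make it a `Weak` pointer. This is a sketch (field and variable names are my own, not from the comments above); the cost is an `upgrade()` check every time you walk up:

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// A `Weak` parent link breaks the Rc cycle, so destructors run
// without any manual cycle-breaking before drop.
struct Node {
    value: i32,
    parent: RefCell<Weak<Node>>,
    children: RefCell<Vec<Rc<Node>>>,
}

fn main() {
    let root = Rc::new(Node {
        value: 1,
        parent: RefCell::new(Weak::new()), // root has no parent
        children: RefCell::new(vec![]),
    });
    let child = Rc::new(Node {
        value: 2,
        parent: RefCell::new(Rc::downgrade(&root)),
        children: RefCell::new(vec![]),
    });
    root.children.borrow_mut().push(Rc::clone(&child));

    // Walking "up" requires upgrading the Weak pointer at runtime.
    let parent_value = child.parent.borrow().upgrade().map(|p| p.value);
    assert_eq!(parent_value, Some(1));

    // Only strong counts keep nodes alive; the Weak backlink does not,
    // so dropping `root` (and its children vec) frees everything.
    assert_eq!(Rc::strong_count(&root), 1);
}
```

It is still the same Option/Rc/RefCell soup underneath — this only addresses the leak, not the ugliness.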
Is it mutability that's tripping you up? Because that's the only gotcha I can think of. Yes, you won't get mutability of the content of those references unless you stick a RefCell or a Mutex inside them.
It's verbose, but it's explicit, at least.
So:
struct Node {
parent: Rc<RefCell<Node>>,
left: Option<Rc<RefCell<Node>>>,
right: Option<Rc<RefCell<Node>>>,
}
and just off the top of my head it'd be something like {
let my_parent = my_node.parent.borrow_mut();
... do stuff with my_parent ...
}
... my_parent drops out of scope here, now others can borrow ...
etc. Haven't tried this in a compiler; my memory might not be right here.
https://x.com/LParreaux/status/1839706950688555086
... this is what they're talking about.
(I know the tweet is about the "idiomatic" answer to this problem, which is to replace references with indices into flat data structures).
Ergonomics aren't great for this type of problem, but it's something I almost never run into. Feels like a cooked up example. I've written tree data structures, etc. and never had much issue. ASTs for compilers, no particular drama.
Rust is just making you consider whether you really want to do this, is all.
A lot of problems related to Tokio can be avoided if you think of your code as structured concurrency and avoid using tokio::spawn. However, too often this is not possible.
A lot of people seem to assume that "C++ is complex" is referring to how the committee adds new language features every 3 years. The conventional wisdom that C++ is wickedly difficult to learn is NOT about "oh man, now I need to learn about the spaceship operator?" C++ is an almost unfathomably bottomless pit. From the arcane template metaprogramming system to the sprawling byzantine rules that govern what members a compiler auto-generates for you, and on to the insane mapping between "what this keyword was originally introduced for" and "what it actually means here in _this_ context", there is no end to it. Keeping up with new language syntax features is an absolute drop in the bucket compared to the inherent complexity required to understand a C++11 codebase, build it (with a completely separate tool that you must choose) and manage its dependencies (with yet another completely separate tool that you must choose).
You don't have to know anything about Rust to know that saying "Rust has become complex as C++" is objectively incorrect.
I think it's misleading to say that Rust distinguishes mutability. It distinguishes _exclusivity_. If you hold a reference to something, you are guaranteed that nothing else holds an exclusive reference to it (which is spelled &mut). You are _not_ guaranteed that nothing accessible through that reference can change (for example, it might contain a RefCell or other interior-mutable type). Shared references (spelled &) usually imply immutability, but not always. On the other hand, if you hold an exclusive reference to something, you are guaranteed that nothing, anywhere, holds any kind of reference to it.
IMO, the fact that exclusive references are spelled "&mut", and often colloquially called "mutable references", was a pedagogical mistake that we're unfortunately now stuck with.
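A small illustration of the point that `&` means "shared", not "immutable": with an interior-mutability type like `Cell`, you can mutate through a shared reference while other shared references are alive, because the exclusivity guarantee — not immutability — is what the compiler actually enforces. (This sketch is mine, with hypothetical names.)

```rust
use std::cell::Cell;

// Mutation through a *shared* reference, no `&mut` anywhere:
// `&Cell<u32>` only promises that no exclusive alias exists.
fn bump(counter: &Cell<u32>) {
    counter.set(counter.get() + 1);
}

fn main() {
    let c = Cell::new(0);
    let r1 = &c;
    let r2 = &c; // two shared references alive at the same time
    bump(r1);
    bump(r2);
    assert_eq!(c.get(), 2); // the "immutable" data changed twice
}
```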
You couldn't just roll out a change like alias &e to &mut and &s to &, and then have a compiler warning for using the old &mut or &?
This is an aha moment as I read it. The complexity of your tools must be paid back by the value they give to the business you’re in.
Depending on the situation, memory layout could be trivial (copying 200 bytes once at startup vs. not in a way that should never be user-perceptible and difficult to even measure) or actually a big deal (chasing down pointers upon pointers in a tight inner loop). It's entirely situational. To dismiss all of that as "trivial" and saying it will "never" make a difference is not helpful. There are a lot of shitty apps that are impossible to get running reasonably without a total rewrite and their shitty use of memory is part of that.
Through many decades people wrote utilities and applications in C. Not hardcore lower-level kernel modules. Just utilities and applications. Because that’s what they wanted to write them in. Over Perl or Java or whatever else the alternatives were.
What’s more C than that? Writing everything short of scripts in it?
Now people write applications in a modern programming language with modern affordances. They might be working uphill to a degree but they could have chosen much less ergonomic options.
The embarrassing part is criticizing people who have put in the work, not on the merits of their work, but on… having put in too much work.
I don't get how someone can criticise a systems programming language by saying "I have to think about memory layout"....
> I feel like Rust is self-defined as a “systems” language, but it’s being used to write web apps and command-line tools and all sorts of things.
> This is a little disappointing, but also predictable: the more successful your language, the more people will use your language for things it wasn’t intended for.
> This post still offends many who have tied Rust to their identity, but that’s their problem, not mine.
And let's not forget that Word 97 felt bloated in its day, however fondly we may look back on it now.
For applications that don't need high performance for the CPU code and aren't systems code, sure, Rust may not be a good choice. I'm not writing the server-side code for low traffic Web sites in Rust either.
This isn't meant to be an exhaustive list--it's just the domains I have experience with that Rust was a good fit for. Lest it seem like I'm saying Rust is a good fit for everything I've done, I also worked on Firefox where the UI was JavaScript, and I wouldn't hurry to rewrite that code in Rust. Nor would I want the throwaway stuff I write in Python or Ruby to be Rust.
Did you leave out a "not" here?
My guess was that since almost no one will pay more for a game's having fewer security vulns, there is less benefit to incurring the expense of Rust (takes longer to learn, development speed is slightly less)
For example I like to play Civ with a friend, but stopped because about once every 30 minutes one of us would have their game crash. If it was written in Rust, I assume it might be more stable.
That said, I don't really think Rust is a good choice for CRUD apps, because development velocity is more important than performance and they probably don't need to be multithreaded anyway.
But Rust would have been great for a lot of "systems" stuff that was historically written in Java, like Flink or Hadoop for example.
The point of Rust is to be a language that competes in the same niche as C++ but makes it much more difficult to write large classes of bug, much broader than just "security vulns".
Before you start replying with "Rust introduced X" - ask yourself - is X extending an existing feature slightly or does it introduce an entirely new concept?
> Rust has arrived at the complexity of Haskell and C++, each year requiring more knowledge to keep up with the latest and greatest. Go was designed as the antidote to this kind of endlessly increasing language surface area.
Yes, Rust's learning curve is much steeper and the "language surface area" is big, but it's not changing much in recent years. Go getting generics is a much bigger change than anything that Rust got in a long while.
My negative views on Rust - https://news.ycombinator.com/item?id=29659056 - Dec 2021 (89 comments)
Nevertheless, C++ has even worse problems. When your alternative is using C++, that's the time to consider Rust.
Python is a bit like that. It is a top choice for a few things, and ends up used effectively for other things, simply because it is versatile. But then people run into issues with dynamic typing, zero static safety, start slapping type annotations everywhere... and bemoan Python as a bit of a horror show.
My use case for Rust is complementing Python, where frankly I don't care about half the complex features; still, it's nicer than C or C++ to code in. The complexities of the borrow checker are more like a tax to me. I understand people who are frustrated with it, though, as otherwise they'd see it as a bit of a perfect language.
Same goes for machine learning. The ML folks at the start couldn't be bothered to learn something the least bit sophisticated. Some things are getting better. You don't need conda anymore, tensorflow wheels are out, and instead of shipping around .pt, checkpoint, or pickle files at least we use safetensors, but there's still python shit around, like jinja2 templates for conversations, etc.
Anyways if we want good things we need to get better about getting unstuck from these local minima.
It has a number of features (often stemming from the above) that make it a PITA for other applications, even when it would be fairly well suited to them otherwise. What I would give for proper static typing for production Python! Alas, that's not coming, and Rust's occasionally mind-boggling borrow checker is here to stay.
And anyways you probably shouldn't be serving a website with a scripting language (php and perl are great examples of why not). You probably shouldn't be deploying ML with a scripting language (maybe training is fine, if you're not doing distributed training). You probably shouldn't have too many core OS components in a scripting language (looking at you, Ubuntu), and you probably shouldn't write a cloud (OpenStack) using a scripting language. What happened to "right tool for the right job"?
My point is, it's great at its niche and good but kind of clunky at other things it ends up used for. At its off-niche applications, it attracts a lot of criticism - like you talking about how it's a bad idea to serve a webpage from a scripting language or productionise ML models.
No one complains that C++ is clunky at, dunno, serving websites, simply because it is so ill suited for it that no one ever tries. But Rust is less obviously a bad choice. Still that attracts people who find it clunky when they "just" want to serve a webpage and don't want to deal with the vagaries of borrow checker.
I think there is even a (gross) way to achieve try/catch around a block of code that panics?
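The "(gross) try/catch" being alluded to is presumably `std::panic::catch_unwind`, which turns an unwinding panic into an `Err`. A minimal sketch (it does not catch aborts, e.g. with `panic = "abort"` in the build profile):

```rust
use std::panic;

fn main() {
    // Silence the default panic message on stderr for this demo.
    panic::set_hook(Box::new(|_| {}));

    // A panicking closure comes back as Err(..).
    let caught = panic::catch_unwind(|| {
        panic!("boom");
    });
    assert!(caught.is_err());

    // A non-panicking closure comes back as Ok with its return value.
    let fine = panic::catch_unwind(|| 41 + 1);
    assert_eq!(fine.unwrap(), 42);
}
```

It exists mainly for FFI boundaries and thread-pool isolation, which is why using it as everyday error handling feels gross.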
whereas Error is for things that are liable to fail, like network/filesystem requests, and for recoverable logic errors.
> People don't want "to have to play Chess against the compiler"
Things that are easy to express in other languages become very hard in Rust due to the language's constraints on ownership, async...
> Rust has arrived at the complexity of Haskell and C++, each year requiring more knowledge to keep up with the latest and greatest.
It's indeed hard to keep up.
> Async is highly problematic.
Yes, more complexity, more fragmentation and feel like another language.
But one point feels unfair:
> the excellent tooling and dev team for Rust [..] pulls the wool over people’s eyes and convinces them that this is a good language that is simple and worth investing in.
What? No. The main appeal was the safety. It's still a distinctive feature of Rust. To almost eliminate a whole class of safety issues. It has been proven by several lengthy reports such as https://security.googleblog.com/2024/09/eliminating-memory-s....
There are many projects for which the reliability and efficiency are worth the trouble.
But yeah, rust is very much a systems language: so it will be forcing you to think about memory layout one way or the other. Idk how valid of a complaint that is when you really consider that, and specifically the other alternatives you have.
This isn't a criticism of Rust, but rather of the framing we often use to compare Rust and (say) Python or Java.
If such considerations are natural for your problem domain, you likely do "systems programming" and also happen to know C, have an opinion on Zig, etc.
If such considerations are the noise which you wish your language would abstract away, then you likely do "application programming", and you can pick from a wide gamut of languages, from Ruby and Python to Typescript to Elixir and Pony to C#, Java, and Kotlin to OCaml and Haskell. You can be utterly productive with each.
As for the rest of your list - I'm not sure why rust is special in regards to "the specific time you grab and release resources" or "inter-thread interactions". Seriously - I have to think about when I acquire and release resources like sockets and file handles in python, c, ocaml, java, c#, and every other language I've used. It's not like you can randomly call file.close() or close(fd) (in python and c respectively) and expect to be able to continue reading or writing the file later. Same for inter-thread interactions... all those languages have mutexes too. None of that is rust-specific, it's just the consequence of using resources and threads in code.
Here are some of the typical concerns:
- How exactly do fields of a record sit in memory, and how much room do they take, accounting for things like padding for aligned access?
- Do related data sit next to each other, so they can stay in the CPU cache together while needed?
- Are chaotic memory accesses thrashing the cache?
- Do the data structures avoid gratuitous references / pointer chasing (at least mostly)?
- Are local variables mostly allocated on the stack?
- Does your code avoid heap allocations where possible?
- Do fields in your data structure match a binary format, such as of an IP packet, or a memory-mapped register?
- Are your sensitive data protected from paging out to disk if free RAM is exhausted?
If these questions are not even relevant for your problem domain, you likely are not doing systems programming.
Even for memory, a huge amount of the rust I write isn't performance code - I don't understand why it's a mental burden to write
let x = vec![a, b, c];
When the equivalent in python is:
x = [a, b, c]
Nothing about either requires a lick of memory allocation thought, nor about memory layout. Sure, in Rust I have to think about what I'm going to do with that vec and the mutation story up-front, but after enough lines of python I have also learned to think of that up front there too (otherwise I know I'm going to be chasing down how I ended up mutating a copy of the list rather than the original list I wanted to mutate - usually because someone did list = $list_comprehension somewhere in the call stack before mutating).
I'm not being disingenuous here - I literally don't understand the difference, it feels like an oddly targeted complaint about things that are just what computer languages do. To the best of my ability to determine, the biggest differences between the languages aren't about what's simple and complex, but how the problems with the complex things express themselves. I mean it's not like getting a recursion limit error in python on a line that merely does "temp = some_object.foo" is straightforward to deal with, or the problems with "for _, x := range foo { func() { stuff with x } }" are easy to understand/learn to work with - but I don't see people running around saying you shouldn't learn those languages because there's a bunch of hidden stupid crap to wrap your head around to be effective. (And yes, I did run into both those problems in my first week of using the languages.)
In all the languages there are weird idioms and rules - some of them you get used to and some of them you structure your program around. Sometimes you learn to love it, and sometimes it annoys you to no end. In every case I've ever found it's either learn to work with the language or sign up for a world of pain, but if you choose the former everything gets easier. When a language makes something seem hard, but it seems easy in my favorite language, well, in that case I've discovered the complexity is there in both - but when it was hidden from me it limited my ability to see a vast array of options to explore, and when surfaced it has shown me a whole new set of problem-solving tools at my disposal.
I still don't know what people mean when they talk about "having to think about memory layout"... like seriously, to me that's thinking about pointer alignment and how to cast one struct into another in C - something I've only had to think about once in any language across a fairly wide range of tasks. If this is what's being referred to, I'm baffled about how it's coming up so much, but I suspect this isn't what people mean, and I don't know what they actually do mean.
The best example I'd give is the degree to which you have to ask yourself if you want to use String or if you want to use &str--is this struct, or this function, going to own the string or borrow it from somebody else? If you're borrowing it, who is owning it? Can you actually make that work (this is really salient for parser designs)?
Essentially in Rust, before you can really start on a large project, you have to sit down and plan out how the memory ownership is going to go, and this design work doesn't really exist in most other languages. Note that it's not inherently a good thing or a bad thing, but it is a potential source of friction (especially for small projects or exploratory tools where memory ownership might want to evolve as you figure out what needs to happen).
In practice, there isn’t a ton of thinking to do about this: if it’s a struct, you want String. If it’s a function parameter, you want &str, and if it’s a function’s return type, you want String. Doing that until you have a reason not to is the right call 95% of the time, and 4% of that last 5% is “if the return type is derived from an argument and isn’t changed by the body, return &str.”
It does take some time when you’re learning, but once you get over the hump, you just don’t actively think about this stuff very much. Google’s research says Rust is roughly as productive as any other language there, so far. That doesn’t mean it’s universally true, but it’s also some evidence it’s not universally false.
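The rule of thumb above can be sketched in a few lines (names here are hypothetical, just to show the pattern): structs own, parameters borrow, return types own — plus the "4%" case where the output is borrowed straight from an input.

```rust
// Struct field: own it.
struct User {
    name: String,
}

// Parameter borrows (&str), return type owns (String).
fn greet(name: &str) -> String {
    format!("hello, {name}")
}

// The "4%" case: the return value is derived from an argument
// and not changed by the body, so borrowing (&str) works.
fn first_word(s: &str) -> &str {
    s.split_whitespace().next().unwrap_or("")
}

fn main() {
    let u = User { name: "Ada".to_string() };
    assert_eq!(greet(&u.name), "hello, Ada"); // String derefs to &str
    assert_eq!(first_word("hello world"), "hello");
}
```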
In GCed languages strings get allocated once when you need them, get referred to wherever you want, however many times you want (with no new calls to the allocation subsystem), and are freed once when you don't need them anymore, with minimal thought or intervention on the programmers part. Rust is absolutely not like this at all.
I agree with the overall idea that people exaggerate the difficulty of Rust, but come on, this is an exaggeration way too far in the other direction.
I rarely type clone(). Even with this advice, you won’t clone super often. And when you do, it’s sometimes a signal that maybe you can do better, but it’s just not a big cognitive burden.
I think a subtextual problem for Rust advocacy is that the places where it's a clear win are a small subset of all software problems, and that space is shrinking. Rust would, in that view of the world, be a victim of its success: it's the best replacement we have for C/C++ today, but the industry has moved sharply away from solving problems that way, and sharply towards solving them with Javascript.
(Deno is doing something smart here.)
As in, people don't realize being a "systems programming language" is extremely difficult to get right, and many languages simply can't handle that (and never will, as per internal design requirements unique to those languages); if a language gets that right, they're going to get everything else right too if people decided to use it for that.
Again: this is about the term, not about the language. I don't think it's controversial to suggest that there is no one ur-language that is optimal for every problem domain!
Even more so the fools that give them money for such products.
/s
it's hard to learn so we shall see what kind of niche it can carve for itself, but it's fine
And as long as Rust remains popular, this is why we will witness endless complaining about it. Most devs are lazy, and would rather sweep complexity under the rug and pretend it doesn't exist until it becomes a real problem they can't ignore anymore. That's fine. But no need to be so vocal about it. At this point, people whining about Rust is more of a trope than people proselytizing it.
You mean pragmatic. Not all of us are memory absolutists. The time ideally invested in memory management really depends on the problem space, the deadlines, etc.
It's the opposite for me. I would put more effort into Rust, but I'm not going to invest in learning how to write safe rust if my libraries are built on unsafe.
This common pattern reminds me of cross-fit/veganism/i-use-arch/etc. Almost like an echo.
> In practice, people just want to be able to write a tree-like type without having to play Chess against the compiler.
Sure, Rust's strong encouragement of tree-structured ownership may be annoying when you try and make a spaghetti ownership soup, but it's not like it doesn't have upsides. Many people have written about how the ownership & borrowing rules lead to code structure that has fewer bugs.
> I think that if you rewrite anything from scratch with performance in mind, you’ll see a significant performance improvement.
Maybe, but I think this is missing the point. The "rewrote it in Rust and it's X times faster" stories are generally when people rewrite from very slow (Python) or medium fast languages (JavaScript or maybe even Go).
In those cases you can rewrite in Rust without considering performance and get amazing speedups. I recently did a straight 1:1 port of some Python code with zero extra optimisation effort and got a 45x speedup.
Sure I maybe could have got the same in C or C++ but there's no way I would have rewritten it in C++ because fuck segfaults and UB. I don't want to spend any more of my life debugging that.
> Rust has arrived at the complexity of Haskell and C++, each year requiring more knowledge to keep up with the latest and greatest.
I don't really know about Haskell, but I don't think Rust's complexity is anywhere close to as bad as C++'s. Even if it were it doesn't matter because in Rust if you forget some complex rule the compiler will tell you, whereas in C++ it will randomly crash but only in release mode after the program has been running for 2 hours. Totally different.
> The “Friendly” Community
Gotta agree with this though. The "we're friendly" thing is bullshit.
> Async is highly problematic
Also agree here. Async is a huge wart on Rust's otherwise relatively unblemished face. Big shame. Oh well. You can mostly avoid it, and there are some cases where it's genuinely good (e.g. Embassy).
> I feel like Rust is self-defined as a “systems” language, but it’s being used to write web apps and command-line tools and all sorts of things.
> This is a little disappointing, but also predictable: the more successful your language, the more people will use your language for things it wasn’t intended for.
I don't see why he's disappointed about this. Rust is great for command line tools and web backends.
> I think that the excellent tooling and dev team for Rust, subsidized by Big Tech, pulls the wool over people’s eyes and convinces them that this is a good language that is simple and worth investing in. There’s danger in that type of thinking.
Ok this guy is not worth listening to.
Second: I don't think this author disagrees with you that there are huge speedups to get from porting code out of Python. But you'd also get huge speedups porting to Java, and you wouldn't be in the business of managing your own memory lifecycles.
How can you take opinions like that seriously? It's like saying "nah The Beatles weren't actually that good, everyone just thought they were because of their cool sunglasses".
It's patronising and illogical and I don't think it's worth listening to nonsense like that.
> I think that the excellent tooling and dev team for Rust, subsidized by Big Tech, pulls the wool over people’s eyes and convinces them that this is a good language that is simple and worth investing in. There’s danger in that type of thinking.
Patronising and wrong.
and by that I don't mean the rhetorical or bait-style "i'm curious" - no, the literal "I'm curious" - because I'm trying to find use cases like that these days, and I'm often thwarted by the fact that for anything requiring remotely decent speeds, Python most of the time already delegates to C extensions, so any rewrite is not as useful
Be sure you verify this is the case for whatever you think it is, though. Pure Python is so much slower than compiled languages (not just Rust) that you don't have to do much percentage-wise in pure Python before you've badly fallen behind in performance versus the pure-compiled alternatives.
I think this is asserted a lot more often than it is benchmarked. I am reminded of the way people for a long time asserted that the performance of web languages doesn't matter because you spend all your time waiting for the database, so it never mattered. People would just whip this argument out reflexively. It turns out that if you take a non-trivial codebase written in such a language and actually benchmark it, it is often not true, because as applications grow they tend to rapidly outgrow "all my code is just running a SELECT and slamming the results with minimal processing out to the web stream". I hear this a lot less often than I used to, probably through the slow-but-effective process of a lot of individuals learning the hard way that it isn't true.
I've seen a lot of Python code. Very little of it that was not "data science" was just a bit of scripting around lots of large C-based objects, such that Python wasn't doing much actual work. And even some of that "data science" was falling back to pure Python without realizing because NumPy actually makes that shockingly easy.
There are further improvements possible around memory allocation and cachelines, but 2 days for 50x improvement was sufficient to not make it worth investing additional effort.
Edit: this was from a team who had _never_ touched Rust before.
A coworker of mine years ago was trying to parse out some large logfiles and it was running incredibly slowly (because the log file was huge).
Just for fun he profiled the code and found that 90% of the time was spent parsing the timestamp ("2019-04-22 15:24:41") into a Python datetime. It was a slow morning, so we went back and forth trying to come up with new methods of optimizing this parsing, including (among other things) creating a dict to map date strings to datetime objects (since there were a lot of repeats).
After some more profiling, I found that most of the slowdown happened because most of Python's strptime() implementation is written in Python so that it can handle timezones correctly; this prevented them from just calling out to the C library's strptime() implementation.
Since our timestamps didn't have a timezone specified anyway, I wrote my first ever C module[0] for Python, which simply takes two strings (the format and the timestamp) and runs them through strptime and returns a python datetime of the result.
I lost the actual benchmark data before I had a chance to record it somewhere to reproduce, and the Python 3 version in my repo doesn't have as much of a speedup compared to the default Python code, but the initial code that I wrote provided a 47x performance boost to the parsing compared to the built-in Python strptime().
Anyone who had a similar Python script and converted it wholesale to Rust (or C or Golang, probably) would have seen a similarly massive increase in performance.
Then I learned about the `--release` flag and it instantly became a 40x speedup. So that was nice.
Waiting 30s vs <1s puts it well within "anything requiring remotely decent speeds". But it was really about parsing a custom data format, nothing fancy. I haven't done comparison timings of the graph traversals, but everything is basically instantaneous in the Rust version and not in the Python.
Pretty much any code that is not just tying together external libraries?
So you can expect any code that heavily relies on the standard library to be slower than the Rust equivalent.
A purely interpreted language implementation (not JIT’d) like CPython is almost always going to have a 10x-100x slowdown compared to equivalent code in C/C++/Rust/Go or most other compiled/JIT’d languages. So unless your program spends the vast majority of time in C extensions, it will be much slower.
I was a bit surprised how much faster it was too. Apart from Python being dog slow the only thing I really changed was to use RegexSet which isn't available in Python. I didn't benchmark how much difference that made though; I just used it because it was obviously the right thing to do.
That's kind of the point. If you just do the obvious thing in Rust you get very good performance by default.
It's the same in C++ but then you're writing C++.
Now, if you're going to use RegexSet you're also smart enough to read "For example, it’s a bad idea to compile the same regex repeatedly in a loop" and say "Yeah, makes sense, I will not repeatedly compile the same regex". But some fraction of Python programmers won't read that - and it'll be very slow.
Still, as I understand it CTRE means if you just "use" the same expression over and over in your inner loop in C++ (with CTRE) it doesn't matter, because the regular expression compilation happened in compilation as part of the type system, your expression got turned into machine code once for the same reason Rust will emit machine code for name.contains(char::is_lowercase) once not somehow re-calculate that each time it's reached - so there is no runtime step to repeat.
This is a long way down my "want to have" list, it's below BalancedI8 and the Pattern Types, it's below compile-time for loops, it's below stabilizing Pattern, for an example closer to heart. But it does remind us what's conceivable.
I assume by CTRE you're referring to the CTRE C++ project. That's a totally different can of worms and comes with lots of trade-offs. I wish it were easy to add CTRE to rebar, then I could probably add a lot more color to the trade-offs involved, at least with that specific implementation (but maybe not to "compile time regex" in general).
I agree that there are trade-offs, but nevertheless compile time regex compilation is on my want list, even if a long way down it. I would take compile time arithmetic compilation† much sooner, but since that's an unsolved problem I don't get that choice.
† What I mean here is, you type in the real arithmetic you want, the compiler analyses what you wrote and it spits out an approximation in machine code which delivers an accuracy and performance trade off you're OK with, without you needing to be an expert in IEEE floating point and how your target CPU works. Herbie https://herbie.uwplse.org/ but as part of the compiler.
The annotated tl;dr is: Chris doesn't want to learn how hardware works, they don't want to learn how to write optimal software, they don't want to write safe software, they just want to write in a language they already know because they're not comfortable with learning other languages because their home language is a functional language (Haskell). It's a weird, yet short, essay that doesn't actually go anywhere.
I suspect Chris wrote the essay against their will, or because people asking them about Rust rubbed them the wrong way, because they lash out and say "This post still offends many who have tied Rust to their identity, but that’s their problem, not mine."
Its the Internet, man, if you're not offending someone, nobody is reading your shit.
I'm not OP, but since I don't know anyone on the Internet, I have the habit of using "they" for everybody, for example: "WildRookie? No, I don't know their real name, only that username."
It could be that OP has such a habit, which just carried over for referring to Chris, even though someone named "Chris Done" is probably a man (although, there are women who go by Chris).
What's the play here?