While C++ isn't perfect, has the warts of a 50-year-old language, and will probably never match Rust's safety, we would already be in a much better place if everyone at least used the tools that have been at their disposal for the last 30 years.
While I would advise using Rust for some security-critical scenarios, there are many others where it is still getting there, and there are other requirements to take into account besides affine types.
Avoiding UB is a serious drain on productivity in C++, and every new language or library feature comes with additional pitfalls, increasing the mental load.
This is to say: The benefit of Rust is not actually about "security critical scenarios", but much more generally about delivering the same quality of code in a fraction of the time.
Better to be 80% safer than not at all.
What I dislike about C++ is that it grew to become a monster of a language, containing all programming paradigms and ideas, good or bad, known to mankind.
It's so monstrously huge no human can hold its entire complexity in their head.
C++ allows you to do things in 10,000 different ways, and developers do just that. Often in the same code base.
That being said, I would use a sane subset of C++ every day over Rust. It's not that I hate Rust or that I don't think it's good, technically sound, and capable. It just doesn't fit the way I think and like to work.
I like to keep a simple model in mind. For me, the memory is just a huge array from which we copy data to CPU cache, move some to CPU registers, execute instructions and fetch data from the registers and put it again in some part of that huge array, to be used later. Rust adds a lot of complexity over this simple mental model of mine.
The problem with this is that if you have a team working on a C++ product, you will need some people who can catch memory bugs to review every change before merging. Even with this approach it is still possible to miss some memory bugs, since the reviewer needs to fully understand each object's lifetime, which is time-consuming during code review.
I work at a company that runs a server application written in C/C++. The code base is very large, and we always have memory bugs that require ASan in production to track down. We started migrating parts to Rust a year ago and have not had a single crash from Rust code. The reason we chose Rust is that it is a server application that is computation-intensive, latency-sensitive, and handles a large number of active connections.
Keep using Rust until you're comfortable with it and you will come to like it. It fits your simple mental model. I can say this because I was a C++ user my whole life and switched to Rust recently.
Nah, if you're trying to match every "new" with a "delete" during the code review, you've already lost the battle. You can probably succeed when the code is added, but then the edits start to flow and sooner or later it's gone. Reviews are mostly good to catch design problems, not bugs.
The only reliable approach I know is to have a strict rule of never mixing memory management with business logic. Nothing else works well enough; this one, however, works remarkably well.
Business logic should rely on containers, starting with simple unique_ptrs and vectors and going deeper into custom land when appropriate. If you can't find a suitable standard container, you build a custom one. The principal difference between "writing a custom container when you need it" and "integrating custom memory management into the business logic when you need it" is that containers are:
* well understood
* well tested
* relatively small code-wise
* almost never change once implemented
None of the above applies to the business logic, it's the complete opposite.
Think of it kind of like programming in Java: someone has to write the memory management and it's a hell of a job. However, once this is done, programming the ever-changing business logic is easy and safe.
You can live the same life in C++ AND also have the ability to put on the "Doomguy of memory management" shoes whenever you feel like it. Just don't forget to take off the "business logic guy" shoes when you do; you can't wear both at the same time.
This is a big part of why Rust works. We also never have errors that we can't reproduce in development.
Yes, but that is _incredibly_ time consuming. You have to set up asan, msan, tsan, and valgrind. If you want linting you need to do shenanigans to wire up clang-tidy.
I also like simple mental models. I like not having to figure out the cmake modifications to pull in a new library. I like having a search engine when I need a new library for x. I like when libraries return Result<Ok, Err> instead of ping-ponging between C libraries that indicate errors using retval flags and C++ libraries that throw std::runtime_error(). I like not dealing with void* pointer casting.
Give it a few years and it will be a very strong contender.
The true C successor.
For Rust, I kind of got tired of writing unsafe rust for embedded, but that’s addressable afaik. The real dealbreaker was that after 10k+ lines of code I still will pop open the source of a library that solves a simple problem and the code looks indecipherable. I also don’t really agree with the dependency explosion that cargo encourages.
Zig is very nice in that it has the most ergonomic struct usage I’ve encountered. The stdlib could really use some improvement though. Comptime is very cool, but I also worry if the community will get undisciplined with it.
- the build system is constantly changing in a breaking way (between releases some of the repos I have on GH no longer build and need their build.zig to be updated).
- the comptime section of the docs needs to be heavily expanded, I'd love to see them take common Go interfaces and redo them in Zig (like io.Writer, io.Reader), breaking down the process step-by-step. It took me a little bit longer than it should've to efficiently use comptime.
- a whole section dedicated to things like using WaitGroup and multithreading, for those coming from langs like Go. Also, higher-level concurrency primitives like channels would be fantastic.
- a better import system for external Zig libraries; the zig fetch => .dependency => .root_module.addImport dance is not as straightforward as it should be, although for someone coming from C it definitely does feel like using meson
None of these are critical and again, all signs of Zig's "youth".
That can be done through a library.
Would still like it as a first-class language construct.
But it is much simpler, easier to read, easier to understand, easier to follow, and easier to reason about. It's less verbose and more productive.
It feels like what C would look like had it been invented today.
Re: safety guarantees, much digital ink has been spilled on how Zig can give Rust a run for its money when it comes to safety.
When people say: Rust <=> C++, Zig <=> C; they forget that C++ was precisely meant to be an enhanced C, which is what Zig is trying to accomplish. They simply eschewed chasing complexity as the holiest of holy grails, which in turn leads to the cognitive load of writing/reading Zig code to be MUCH smaller than C++ or other langs in that space.
All that said, I'd never recommend a company build their product on Zig just yet, at least not without some kind of red telephone to the Zig team or a dedicated Zig developer, given it's still not fully mature.
Re: managing memory yourself, Zig's defer makes this much, much more straightforward than you would think, and feeding in your own allocators can simplify this in many cases (and make testing for leaks much easier).
Zig is decent as a systems programming language. It's good they don't add lots of features and keep it simple.
The only downside I see is companies aren't investing in it much.
- Rust doesn’t let you pretend that memory is a flat array of bytes
- Single ownership of data can be annoying in some cases
- The borrow checker pointing out that you’re trying to do something stupid with pointers (again) can be annoying
Of course, I’m of the opinion that the hassles are worth it, especially the borrow checker. Almost every time I have to fight the borrow checker, it’s because I haven’t thought properly about the pointers involved and tried to do something stupid.
Further, the borrow checker does not care about pointers, only references. With pointers, you are on your own. It is true that using pointers in Rust is more cumbersome than it could be. But it is much easier to compartmentalise the pointer parts into separate functions and expose references instead.
I agree that some paradigms and patterns are genuinely difficult to use, e.g. any intrusive data structure, but I do not see the contentious link between simple memory models and the borrow checker and the like.
And I guess I'm imprecise saying pointers where I mean borrowed values, but my point is that a borrow is just a pointer with additional type checking. More formally: C pointers are a complete but unsound formal system, whereas Rust borrows are sound but incomplete. And every time I get in a fight with the borrow checker, it's because I'm doing something unsound, not because the system is incomplete.
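To make the "compartmentalise the pointer parts into separate functions and expose references" advice concrete, here is a minimal sketch (the function and its names are purely illustrative, not from the thread): the raw-pointer work is confined to one small function whose signature exposes only a borrow-checked reference, so the unsound surface never leaks into the rest of the program.

```rust
/// Hypothetical example: all unsafe pointer arithmetic lives here.
/// Callers only ever see a safe, lifetime-checked reference.
fn first_even(slice: &[u32]) -> Option<&u32> {
    let ptr = slice.as_ptr();
    for i in 0..slice.len() {
        // SAFETY: `i` is always within the bounds of `slice`.
        let value = unsafe { &*ptr.add(i) };
        if *value % 2 == 0 {
            return Some(value);
        }
    }
    None
}

fn main() {
    let data = [1u32, 3, 4, 5];
    // Business logic only touches the borrow-checked result.
    assert_eq!(first_even(&data), Some(&4));
    assert_eq!(first_even(&data[..2]), None);
}
```

The point is not that the unsafe block is pretty; it is that the soundness argument (the SAFETY comment) only has to be made once, in one small place.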
I’m not sure if the poster here is the post author, but it would be great if the author would consider filling out this survey that was recently released asking for feedback on the future of rust’s vision: https://blog.rust-lang.org/2025/04/04/vision-doc-survey.html
I’d love to see rust become the de facto standard for cross-language portable code by virtue of its ease of use, but as this and our experience highlights, there’s some way to go yet!
Nix, via the standard rust integration or via something like crane, is actually quite nice for building rust/C++ combo projects, so it’d be awesome if the team might consider this as a means of achieving reproducibility. I’d imagine they’d have an easier time of it than I did, given they are more familiar with their own build process.
Many people see this as a problem. The response to TypeScript choosing Go over Rust was pretty gross imho, no one should be abused for choosing a language.
While at the same time, the .NET team routinely talks about .NET's image problem outside traditional Microsoft shops, which decisions like this naturally aren't helping one bit.
At a given point after being a C# programmer for years I still encountered patterns that were completely unreadable to me.
Same thing will happen to Wasm as it decides to add more and more high level stuff “to avoid shipping multiple GCs” and “to get different languages to talk to each other.” As soon as you want to abstract over more than “a portable CPU and memory” you get into that mess.
It never worked out better than the JVM and CLR in the past, but let's keep trying.
C# did not “have to absorb every idea from F#”. This is not how programming language development works. You can read LDM notes at https://github.com/dotnet/csharplang/discussions?discussions... and specs are documented in the repo.
> rather than making F# a viable programming language
F# is a viable language, aside from a few specific libraries that don't play nicely with it or from writing ref struct heavy code. I'm not sure what makes you think it is not. In comparison, it is probably more viable for shipping products than Scala, Clojure, OCaml, or Haskell.
C#: A lot of LINQ style code was really hard to grok for me. Like a language in a language. The language got really huge in general. While it was fine as a "better Java" for most purposes.
And Google did do exactly that with Fuchsia, which doesn't seem to be going to power anything beyond Nest screens.
As for AOT compilation, there have been multiple approaches since the early days, and the latest, Native AOT is good enough for everything required to write a TypeScript compiler, including better WebAssembly support than the Go compiler, thanks to Blazor infrastructure.
Can you link to the abuse?
A Rustacean implied Go was not memory safe and that Microsoft couldn't understand the power of Rust. Steve Klabnik & others told them off. But other Rustaceans, like Patrick Walton, argued that Go has memory safety issues in theory.
Rustacean, Gopher... this is an embarrassing way of looking at it.
And, speaking of, Go is not a memory safe language when you reach for its concurrency primitives as it very easily lets you violate memory safety (as opposed to Rust, .NET and JVM, where instead you get logic bugs but not memory safety ones).
Some of the worst comments have been scrubbed but they might be in one of the internet archival sites.
Before anyone gets triggered and starts typing up a reply: "SUBSET" is the word I used.
There was also a "Rewrite it in LISP" post[1] fan. Where is the complaint that "The Lisp community is unfortunately plagued by this subset of devs who are zealous (and downright toxic) in their shilling for their favorite language"? But that subset of devs has largely disappeared/moved on to other langs.

[1] https://github.com/microsoft/typescript-go/discussions/411#d...
Somehow in the past 3-4 years it's only been that subset of Rust devs that have been wailing about: "WHY NOT REWRITE IT IN RUST?".
Often they're mostly the same type: anime pfp, walls of text of pompous technobabble as if they were some elite caste of arcane cyberpriests preaching the gospel of Rust, etc.
There's a reason "just rewrite it in Rust" has become such a meme.
Sure, you got me. If someone says the earth is actually a cube, I have to go defend the spherical chads.
In this case, it's the disconnect between the number of actual RIIR askers and the number of people painting Rust devs with a continent-wide brush.
It's very in-group vs out-group reasoning.
Let's demonstrate it. For example: I'm a Java dev, and I see a Java dev, being a moron, so I'll say "What a moron". But if that guy was a C# dev, I'm going to say "C# devs are morons". See the error committed here?
> There's a reason "just rewrite it in Rust" has become such a meme.
Just because something is a meme doesn't make it true, either. It was true around the time of Rust 1.0, but that was like 10 years ago.
It isn't even the worst part about the Rust community.
I have no idea how you can accuse me of this when I made a big disclaimer in my original comment:
>Before anyone gets triggered and starts typing up a reply: "SUBSET" is the word I used.
So? There is a subset of X lang (both bigger and smaller than Rust), complaining why not rewrite in X.
But only Rust devs ever get the flak. Because it's a played out meme, or something. Or it triggers the Rust dev, whatever.
Hey, that's a productive attitude when attempting to fix CI!
(or figuring out why smaller compiler output performs worse)
Rust is a really fantastic language but having worked on a mixed C++/Rust codebase I can see why they had so many issues. Rust just wasn't really designed with C++ interop in mind so it's kind of painful to use them together. Impressive that they made it work.
Especially because the fix is so easy, it could just be fixed by the compiler on the fly.
If you have a function

    fn a(arg: impl Into<C>) {
        // expensive/extensive operations here
    }

and call it two times:

    a("test");
    a(10u16);

the compiler will generate two monomorphized functions:

    fn a_str(arg: &str) {
        // expensive/extensive operations
    }

    fn a_u16(arg: u16) {
        // expensive/extensive operations
    }

This can be fixed by proxying the duplicated body through a single non-generic function, so the generated copies shrink to thin conversion shims:

    fn expensive_ops(arg: C) {
        // ...
    }

    fn a_str(arg: &str) {
        let _arg: C = arg.into();
        expensive_ops(_arg);
    }
I know you know this, but Rust does provide essentially complete memory and lifetime safety if you stay within the bounds of safe. Standard C/C++ tooling has no way to even reliably detect memory safety violations, let alone fix them. It's trivial to write buffer overflows that escape ASAN, and missing a single violation invalidates the semantic meaning of the entire program (particularly in C++), which means virtually all nontrivial programs in C/C++ have UB somewhere (a point we disagree on).
Safe rust doesn't guarantee all the other possible definitions of safety, but neither does any other mainstream language. I don't think it serves any useful purpose to complain that the rust folks have oversold their safety arguments by "only" eliminating the biggest cause of safety issues. Stroustrup harps on this a lot and it comes across as very disingenuous given the state of C++.
In many scenarios you have to use "unsafe"
I don't agree, because unsafe is a part of the language. It's widely accepted practice in C and C++ to use only carefully chosen subsets of each language, and this is enforced with linters and coding guidelines. You can straightforwardly ban unsafe the same way, or review its uses more carefully, etc. The question is how difficult it is to avoid writing such overflows.
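For what it's worth, "banning unsafe" in Rust doesn't even need an external linter; the compiler can enforce it as a crate-level lint. A minimal sketch (the helper function is made up for illustration):

```rust
// Crate-level lint: any `unsafe` block anywhere in this crate becomes
// a hard compile error, enforcing the "banned subset" policy with the
// compiler itself instead of external tooling.
#![forbid(unsafe_code)]

// Ordinary safe code is unaffected.
fn checked_get(data: &[i32], i: usize) -> Option<i32> {
    data.get(i).copied()
}

fn main() {
    assert_eq!(checked_get(&[10, 20, 30], 1), Some(20));
    assert_eq!(checked_get(&[10, 20, 30], 9), None);
    // Uncommenting the next line would fail to compile under forbid:
    // let _ = unsafe { *[1i32].as_ptr() };
}
```

Unlike `#![deny(unsafe_code)]`, `forbid` cannot be overridden further down in the crate with an `#[allow]`, which is what makes it suitable as a hard policy.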
It's nigh-impossible as far as I can tell. I already use formal methods, sanitizers, testing, static analysis, careful design, valgrind, intensive reviews, MISRA, etc. I can still quickly find new issues by firing up the fuzzer or looking at another team's code. Other large projects like Chrome and Linux have thousands of competent eyes on them and still deal with these issues too. What is everyone missing? e.g. signed overflow is not an issue at all because you just tell your compiler to turn them into traps.
Leaving aside the unnecessarily hyperbolic point (enabling traps for my systems might literally kill someone), traps usually aren't a well-supported operational mode. GCC's ftrapv is broken, for example, ubsan isn't recommended for production, and GCC doesn't implement ubsan-minimal. MSVC doesn't support overflow traps at all, nor do most certified compilers. "Just use clang" obviously isn't what you're intending here, so I'm unsure how to interpret this.

In regards to memory safety being the biggest issue, I'm referencing the "70% of high severity bugs" numbers that have been put out by Microsoft and the Chrome teams and repeated by CISA in their memory safety roadmaps.
It's great that you haven't experienced large numbers of memory safety issues, but I can only speak to the lived experience of heartbleed and others. I see memory safety issues daily. I don't see supply chain attacks frequently and given how much publicity accompanied the discovery of the XZ attack, I suspect that's true for others.
Microsoft and Google have a ton of legacy code, they need to have high performance because they’re pushing everything to the web in order to spy better on people, they always churn their software and they are a very juicy target. As far as I’m concerned, they should rewrite everything in Rust and stop telling other people what to do.
But of course, they also need to sell Rust to the public, otherwise they would run out of developers or would have to maintain everything themselves. Hence the cheerleading.
This blog post is much closer to the reality of using Rust in production. In fact I’d add a couple of pitfalls myself:
* original cheerleader gets bored of the Rust rewrite/moves on and the project dies.
* original cheerleader moves on and the project lives under maintenance with non-Rust programmers who do not enjoy working on it and delay and reject changes and/or feature requests.
Or perhaps you're missing the super-text, that all those things were insufficient to make C safe.
I am fully able to appreciate that memory safety is important and Rust stepped up the game in mainstream programming. I think this is cool. But the exclusive and exaggerated focus on this does more harm than good. Memory safety is certainly much more important for advertisement companies such as Google to secure their mobile spying platforms than it is for me. The religious drive to push Rust everywhere to achieve a relatively modest[1] practical improvement in memory safety clearly shows that some part of the community rather naively adopted the priorities of certain tech companies at the cost of other - maybe more relevant - things.
1. I am fully able to understand that the 100% guarantees Rust can provide when sticking to safe Rust are conceptually a fundamental step forward compared to what C provides out of the box. But this should not be misrepresented as a huge practical step forward over what can already be achieved in memory safety in C/C++ if one cares about it.
Let's not pretend that anything is better on the traditional C/C++ side, where the approach is usually one or more of:
1. Vendoring dependencies in-tree. This can result in security problems from missing out on bugfixes upstream.
2. Reinventing functionality that would otherwise be served by a dependency. This can result in security problems from much less battle-tested, buggy in-house implementation. In closed-source code, this is effectively security by obscurity.
I've seen both of these cause issues in large C++ projects.
For reference, the Rust/Cargo ecosystem contains a lot of tools and infrastructure to address supply-chain security, but it will always be a difficult problem to solve.
Regardless, be it cargo, vcpkg/conan, NuGet, Maven, npm, ...: if it isn't validated by legal and IT for upload into internal repos, it doesn't get used.
The "memory safety" of Rust is oversold, since "safety" is not formally proven for the Rust language. While anecdotally memory-related bugs seem less likely, Rust without unsafe is not absolutely safe.
> If you do an experiment and say "C++" anywhere on the Internet, in a minute someone will chime in and educate you about the existence of Rust.
> I know examples when engineers rewrite code from Rust in Rust if they like to rewrite everything in Rust.
> our engineers become too nauseous from Rust poisoning
> So now they [Rust devs] can write something other than new versions of old terminal applications.
> someone shows PRQL, everyone else thinks "What a wonderful idea, and, also, Rust" and gives this project a star on GitHub. This is, by the way, how most of Rust projects get their stars on GitHub. It doesn't look like someone wants to use this language, but what we want is to ride the hype.
> we started to understand that it would be hard to get rid of Rust, and we could tolerate it.
It's a very shitty attitude and not even accurate. You see this attitude from old C/C++ devs quite a lot; it's just very weird that he has that attitude and then simultaneously seems quite keen to use Rust. Very weird!
Anyway those are just the non-technical things. On the technical side:
> Fully offline builds
They solved it by vendoring but this is the obvious solution and also applies to C++.
> Segfault in Rust
They tried to do a null-terminated read of a string that wasn't null-terminated. Nothing to do with Rust. That would be an error in C++ too. In fact this is a strong argument for Rust.
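For what it's worth, safe Rust turns exactly this mistake into a recoverable error rather than a crash; a segfault implies the read went through unsafe/FFI code. A sketch using the stdlib's checked C-string parser (not their actual code):

```rust
use std::ffi::CStr;

fn main() {
    // A properly null-terminated buffer parses fine.
    assert!(CStr::from_bytes_with_nul(b"hello\0").is_ok());

    // A missing terminator is a recoverable Err in safe Rust; only an
    // unchecked C-style read of such a buffer can run off the end of
    // the allocation and segfault.
    assert!(CStr::from_bytes_with_nul(b"hello").is_err());
}
```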
> Panic
C/C++ code aborts. Or more commonly it crashes in a very difficult to debug way. I'll take panics any day.
> Sanitizers require nightly
Ok fair enough but this seems relatively minor.
> Rust's OpenSSL links with the system library by default and you have to set an environment variable to statically link it.
They set the environment variable. Frankly this is a million times easier than doing the same thing in C++.
I'll stop there, but overall it seems like a lot of "this is a problem we had with Rust" where it should really be more like "this is something we had to do when using C++ with Rust".
Weird vibe anyway.
I think your vibe is more weird. If people have issues with Rust, it is a "shitty attitude". While, of course, C/C++ just objectively suck, right?
Competent C++ developers are the first to admit that C++ objectively sucks. It's a bad language, but that doesn't mean there aren't good reasons to use it. Claiming that C++ is great is a weird hill to die on.
In the linked situation, they were using the library of a binary. This gets into the tension between "make it easy for `cargo install`" (and have a `cli` feature be default) and "make it easy for `cargo add`" (and make `cli` opt-in).
This is not a great experience and we should improve it. There was an RFC to auto-enable features when a build-target is built (allowing `cli` to be opt-in but `cargo install` to auto-opt-in) rather than skip it, but the dev experience needed work. The maintainer can split the package, which helps with semver for the two sides, but needs to break one side to do so; and if it's the bin, people need to discover the suffix (`-bin`, `-cli`, etc).
Current workarounds:
- `cargo add skim` will show the `cli` feature is enabled and you can re-run with `--no-default-features`
- if `cli` wasn't a default, `cargo install skim` would suggest adding `--features cli`
This is precisely what Tokio does: by default, a panic in async code will only bring down the task that panicked instead of the whole application. In the context of a server, where you'll spawn a task for each request, a panic has no way to bring down the whole application (*), only your current scope.
(*): there could be other issues, like mutex poisoning, which is why nobody uses the stdlib's mutexes. But the general point still stands.
What does everyone use instead?
I don't remember where I read it, but it has been admitted that having synchronization primitives with poisoning in the stdlib was a mistake, and that "simpler" ones without it would have been better.
For context: a mutex is poisoned should a panic occur while the mutex is held. The guarded data is then assumed to be broken or in an unknown state, hence poisoned.
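The mechanism above can be demonstrated in a few lines with the stdlib's `Mutex`: panicking while the guard is held poisons the lock, later `lock()` calls return `Err`, and `PoisonError::into_inner` is the explicit opt-in to reach the data anyway.

```rust
use std::panic::{self, AssertUnwindSafe};
use std::sync::Mutex;

fn main() {
    let lock = Mutex::new(0u32);

    // Panic while the guard is held: unwinding drops the guard and
    // marks the mutex as poisoned.
    let result = panic::catch_unwind(AssertUnwindSafe(|| {
        let _guard = lock.lock().unwrap();
        panic!("died while holding the lock");
    }));
    assert!(result.is_err());

    // Later lock() calls now return Err(PoisonError), signalling the
    // guarded data may be in an inconsistent state...
    assert!(lock.lock().is_err());

    // ...though the data itself is still reachable if you opt in.
    let value = *lock.lock().unwrap_or_else(|poisoned| poisoned.into_inner());
    assert_eq!(value, 0);
}
```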
On a platform like Elixir, for example, you can deal with process crashes because everything runs on top of a VM, which is to all intents and purposes your OS, and provides process supervision APIs.
For servers that must not suddenly die, it's wise to use panic=unwind and catch_unwind at task/request boundaries (https://doc.rust-lang.org/stable/std/panic/fn.catch_unwind.h...)
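The catch_unwind-at-request-boundary pattern looks roughly like this (handler name and responses are made up for illustration; requires the default panic=unwind, and note the default panic hook still prints to stderr):

```rust
use std::panic::{self, AssertUnwindSafe};

// Hypothetical request handler that panics on bad input.
fn handle(request: &str) -> String {
    if request == "bad" {
        panic!("malformed request");
    }
    format!("ok: {request}")
}

fn main() {
    let mut served = 0;
    for req in ["good", "bad", "also good"] {
        // catch_unwind at the request boundary: one panicking request
        // becomes an error response instead of killing the server.
        match panic::catch_unwind(AssertUnwindSafe(|| handle(req))) {
            Ok(resp) => {
                served += 1;
                println!("{resp}");
            }
            Err(_) => println!("500 internal error"),
        }
    }
    // Two of the three requests still succeed.
    assert_eq!(served, 2);
}
```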
In very early pre-1.0 prototypes Rust was meant to have isolated tasks that are killed on panic. As Rust became more low-level, it turned into terminating a whole OS thread on panic, and since Rust 1.9.0, it's basically just a try/catch with usage guidelines.
IMHO that is the sensible thing to do for pretty much any green thread or highly concurrent application. e.g. Golang does the same: panicking will only bring down the goroutine and not the whole process.
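The thread-level isolation that Tokio tasks build on can be shown with std threads alone (no dependencies; the panic message still goes to stderr via the default hook):

```rust
use std::thread;

fn main() {
    // A panicking "task": the panic unwinds only this thread, and
    // join() surfaces it as an Err to the supervisor.
    let bad = thread::spawn(|| -> u32 {
        panic!("this worker dies alone");
    });
    assert!(bad.join().is_err());

    // The rest of the process is unaffected and keeps doing work.
    let good = thread::spawn(|| 2 + 2);
    assert_eq!(good.join().unwrap(), 4);
}
```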
Or you could formulate this as needless obsession with not using `dyn`.
And sure generics are more powerful, dyn has limitations, etc. etc.
It's one of those "Misconceptions Programmers Believe about Monomorphisation vs. Virtual Calls" things, as in:
TL;DR: dyn isn't as bad as some people make it out to be; whether for perf or convenience, it can be the better choice. Any absolute recommendation to always use this or that is wrong.
- wrong: monomorphisation is always faster; right: monomorphisation pollutes the instruction cache far more, so in some situations switching some parts (not all parts) to virtual calls and similar approaches can lead to major performance improvements. Good examples here are the various experiments on how to implement something like serde but faster and with less binary size.
- wrong: monomorphisation was picked in Rust because it's better for Rust; right: it was picked because it is reasonably good and was viable to implement with the available resources. (For low-level languages it's still better than only using vtables, but technically transparent hybrid solutions are even more desirable.)
- wrong: virtual calls are always slow in microbenchmarks; right: while they are more work to perform, modern CPUs have gotten very good at optimizing them; under the right conditions they might be literally as fast as normal function calls (though most of the time they are slightly slower, until monomorphisation trashes the icache too much).
- wrong: monomorphisation is always better for the optimizer; right: monomorphisation gives the optimizer more choices, but not always relevant or useful ones, and they always add more work it has to do, so compile times get slower, and if you are unlucky it will miss more useful optimizations due to noise.
- wrong: in Rust, generics are always more convenient to use; right: adding a generic (e.g. to accommodate a return-position impl Trait) in the wrong place can force you to write generic parameters all through the code base. But `dyn` has many more limitations/constraints, so for both convenience and performance it's a trade-off, one which more often favors monomorphisation, but not as much as many seem to believe.
- wrong: always using dyn works; right: dyn doesn't work for all code, and even if it did, using it everywhere can put too much burden on the branch predictor and co., making vcalls potentially as slow as some people think they are (it's kinda similar to how too much monomorphisation is bad for the icache and its predictors, if we gloss over a ton of technical details).
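For readers less familiar with the trade-off above, the two shapes look like this (a sketch with made-up names): the generic version is monomorphised into one copy of the function body per concrete type it is called with, while the dyn version compiles exactly once and dispatches through a vtable at runtime.

```rust
use std::fmt::Display;

// Generic: the compiler emits a separate copy of this body for every
// concrete T it is instantiated with (monomorphisation).
fn describe_generic<T: Display>(value: T) -> String {
    format!("value = {value}")
}

// dyn: exactly one copy of this body exists in the binary; the call
// to Display::fmt goes through the trait object's vtable.
fn describe_dyn(value: &dyn Display) -> String {
    format!("value = {value}")
}

fn main() {
    // Two instantiations of describe_generic (for i32 and &str)...
    assert_eq!(describe_generic(42), "value = 42");
    assert_eq!(describe_generic("hi"), "value = hi");

    // ...versus a single describe_dyn shared by both types.
    assert_eq!(describe_dyn(&42), "value = 42");
    assert_eq!(describe_dyn(&"hi"), "value = hi");
}
```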
So all in all understand what your tools entail, instead of just blindly using them.
And yes that's not easy.
It's one of the main differences between junior and senior skill levels.
As a junior you follow rules and guidelines (or imitate others) for when to use which tool. As a senior you deeply understand why the rules, guidelines, and actions of other people are the way they are, and in turn know when to diverge from them.
The links for "Better C++" point to a PR removing C++ templates to improve build times, to unwinding the stack in a "funny" way, and to a PR comment saying something shouldn't be public, followed by merging it anyway.