It is true that some decisions people make aren't rational, and it may even be true that most decisions most people make aren't entirely rational. But the claim that the whole software market, which is under selective pressure, manages to make irrationally wrong decisions in a consistently biased way is quite extraordinary and highly unlikely. What is more likely is that the decisions are largely rational; they just don't correspond to your preferences. It's like the VHS vs. Betamax story. Fans of the latter thought that the preference for the former was irrational because of the inferior picture quality, but VHS was superior in another respect - recording time - that mattered more to more people.
I was programming military applications in Ada in the nineties (also not memory-safe, BTW) and I can tell you we had very good reasons to switch to C++ at the time, even from a software correctness perspective (I'm not saying C++ still retains those particular advantages today).
If you think so many people who compete with each other make a decision you think is obviously irrational, it's likely that you're missing some information.
Cybersecurity itself is an example of this. It may seem rational to want guarantees of security for the entire supply chain. But that simply isn't possible in reality.
A professional effort is the judicious application of resources to the highest priorities. That includes care in design and testing. Applications built with C and C++ are running everywhere around the world, every minute of every day.
The root of the problem is measurement. Speed is one of the few dimensions of software that is trivially quantifiable, so it becomes the yardstick for everything. This is textbook McNamara Fallacy[1]: what is easy to measure becomes what is measured, and what is not easily measured is erased from the calculus. See developer velocity, cognitive overhead, maintainability, and joy. It's the same fallacy that McNamara made in Vietnam and Rumsfeld made in the War on Terror, so at least they're in good company.
This singular focus distorts decisions around language choice, especially among the inexperienced, who haven't yet learned to recognize trade-offs or to value the intangibles of the software process. Like you said, humans are irrational, but this is one particularly spectacular dimension of that irrationality.
I find it hard to reconcile this with the actual observed trend of all software getting slower and more memory-intensive over time.
Application performance is a very important factor. To ignore it is foolish.
0. https://stackoverflow.com/questions/28426191/how-to-specify-...
1. https://hackage.haskell.org/package/range-0.3.0.2/docs/Data-...
Ada does not have curly braces.
Stephen Bourne, author of the Bourne shell, used macros to make C (a memory-unsafe language) look like Algol 68 (a memory-safe language).
I think it's also important not to centre Rust alone. In the larger picture, Rust has a combo of A) good timing, and B) the best evangelism. It stands on decades of memory safe language & runtime development, as well as the efforts of their many advocates.
If you look at what unsafe languages are used for, it mostly falls into two camps (ignoring embedded). You have legacy code e.g. browsers, UNIX utilities, etc which are too expensive to rewrite except on an opportunistic basis even though they could be in principle. You have new high-performance data infrastructure e.g. database kernels, performance-engineered algorithms, etc where there are still significant performance and architectural advantages to using languages like C++ that are not negotiable, again for economic reasons.
Most of the "resistance" is economic reality impinging on wishful thinking. We still don't have a practical off-ramp for a lot of memory-unsafe code. To the extent a lot of evangelism targets these cases it isn't helpful. It is like telling people living in the American suburbs that they should sell their cars and take the bus instead.
There are critical systems today that are essentially Prince Rupert’s drops. Mightily impressive, but with catastrophic weaknesses in the details.
I'm wondering what the cost would be of rewriting Chrome, at 20 to 30 million lines of code, in Rust?
I suspect that despite the memory unsafety, the cost of maintaining it in its current form is vastly lower than this.
Plus, any rewrite will certainly introduce new bugs, some of them temporarily serious. Did you see the post years back about a Rust program that exhibited the Heartbleed bug?
These new bugs need to be taken into account when estimating the cost of rewrite.
https://compat-table.github.io/compat-table/es6/
Chrome is currently unable to support a feature that was added to JavaScript in ECMAScript 6 (2015).
The reason given was something about proper tail calls being beyond the technical capabilities of the teams involved.
If the code in or surrounding Chrome and its underlying V8 engine are currently so unmaintainable that the teams cannot incorporate a JavaScript feature from 10 years ago, then the cost of merely maintaining the C++ codebase is too high.
The all-or-nothing, now-or-never framing makes the change feel more intimidating than it would be in practice. Mozilla's strategy is to incrementally use Rust more and more in their C++ codebase. I don't know what Chrome's plan is, but the fact that Mozilla is able to make progress is an indication that it isn't impossibly expensive to do better. Mozilla is a non-profit, while Google's Q1 2025 revenue was $77.3 billion.
> Did you see the post years back about a Rust program that exhibited the Heartbleed bug?
Do you remember the actual Heartbleed bug?
> 20 to 30 million lines
In my own experience, seasoned engineers often remind me that every line of code is a liability. Tens of millions of lines of C++ that work closely with the internet sounds like quite the surface area.
Vividly. I spent a full week on remediation, even though the risk we had was traced to a single Linux box exposed to the internet that had tens of KB of traffic over the last year.
Being proactive, we reissued all certificates for all of our internally deployed ssl points.
> In my own experience, seasoned engineers often remind me that every line of code is a liability. Tens of millions of lines of C++ that work closely with the internet sounds like quite the surface area.
No question. I don't question the wisdom of rewriting all of it in Rust. Having spent 60 years in the software business, I have a feeling for the size of the effort. And for what it is worth, I don't have any doubt about the competency of the teams involved.
We're really talking about resistance to memory safety in the last redoubts of unsafety: browsers and operating systems.
And control systems: C++ (along with PLCs, of course) dominates in my experience developing maritime software, and there doesn't appear to be much inclination toward change.
And the VMs for the two languages that you mentioned above (edit: though to be fair to your comment, I suppose those were initially written 20+ years ago).
And probably lots of robotics, defense, and other industries
Granted, those aren’t consumer problems, but I would push back on the “last redoubts”.
We should absolutely move toward memory safe languages, but I also think there are still things to be tried and learned
.. and other performance-critical areas like financial applications (HFT), high-performance computing (incl. AI/ML), embedded, IoT, gaming/engines, databases, compilers, etc. Browsers and OSes are highly visible, but there is a gigantic amount of new C++ code written every day in spite of the availability of memory-safe languages.
There are plenty of people, though, who argue that everything must be memory safe (and therefore rewritten in Rust :) I personally don't agree with that sentiment and it seems like you don't agree either.
Was a fascinating detective story to illustrate it.
Unlike Python or Java, it's both compiled and fast.
I wrote performance-engineered Java for years. Even getting it to within 2x of performance-engineered C++ took heroic efforts and ugly Java code.
Ok, just glanced at my corp workstation, and some Java build-analysis server is using 25GB RES, 50GB VIRT when I have no builds going. What the hell is it doing?
Java is also fairly greedy with memory by default. It likes to grow the heap and then hold onto that memory unless 70% of the heap is free after a collection. The ratios used to grow and shrink the heap can be tuned with `-XX:MinHeapFreeRatio` and `-XX:MaxHeapFreeRatio`.
Allocating a heap of the size it was configured to use, probably.
1. Pool and reuse objects that would otherwise be garbage collected. Use `new` sparingly.
2. Avoid Java idioms that create garbage. Over a `List<String>`, `for (String s : strings) {...}` allocates an Iterator; substitute `for (int i = 0, n = strings.size(); i < n; i++) { String s = strings.get(i); ... }` (over a plain array, the enhanced for loop already compiles down to an indexed loop and allocates nothing).
That said, it might be useful. The demo case is contrived, though. Passing Rust async semantics into C code is inherently iffy. I'd like to see something like OpenJPEG (a JPEG 2000 encoder written in C) safely encapsulated in this way.
I'm just wondering in the explanation of listing 2 you say:
> a discriminant value indicating the enum’s active variant (4 bytes)
As far as I can find, there's no guarantee for that, the only thing I can find is that it might be interpreted as an `isize` value but the compiler is permitted to use smaller values: https://doc.rust-lang.org/reference/items/enumerations.html#...
Is there any reason to say it should be 4 bytes?
It doesn't change any of the conclusions, I'm just curious
But then again, modeling a C enum as a Rust enum is bad design. You want to use `const`s in Rust and match against those.
But it is a bad example in general, because the author passes a pointer to a string slice across FFI without first converting it to a CString, so it isn't NUL-terminated.
That makes sense, they just don't use repr(C) for the PrintResult so I didn't consider that.
> But then again, modeling a C enum to a Rust enum is bad design. You want to use const in Rust and match against those.
That makes sense, but if there were a way to safely generate code that converts to an enum, as proposed in the article, that would be good, since the enum is more idiomatic.
> But it is a bad example in general, because the author passes on a pointer of a string slice to FFI without first converting it to a CString, so it isn't null terminated.
The signature for async_print in C is `async_res_t async_print(const uint8_t *, size_t)`, and they are passing the pointer and length of a `&[u8]` created from a byte-string literal, so I think it's correct.
Just the syntax is miserable punctuation soup to start with.
They are in particular careful to never state that bindgen emits the wrong code. Maybe they could have said that bindgen in fact does handle this case correctly. But Omniglot seems to be doing a lot more than bindgen, and bindgen itself offers:
--constified-enum <REGEX> Mark any enum whose name matches REGEX as a series of constants
--constified-enum-module <REGEX> Mark any enum whose name matches REGEX as a module of constants
IMO, saying bindgen avoids the issue presented in the article is not accurate.
edit: formatting
You can force it to generate Rust enums, but it doesn't by default.
The referenced footnote, [9], leads to: https://www.usenix.org/conference/osdi25/presentation/schuer...
Also, I think there are great things about Rust other than _just_ memory safety.
In a language with the `unsafe` construct and effectively no automated tooling to audit the uses of it. You have no guarantee of any significance. You've just slightly changed where the security boundary _might_ lie.
> There is a great amount of software already written in other languages.
Yea. And development of those languages is ongoing. C++ has improved the memory-safety picture quite a bit over the past decade and shows no signs of slowing down. There is no "one size fits all" solution here.
Finally, if memory safety were truly "table stakes" then we would have been using the dozens of memory safe languages that already existed. It should be blindingly obvious that /performance/ is table stakes.
> In a language with the `unsafe` construct and effectively no automated tooling to audit the uses of it.
You can forbid using unsafe code with the lints built into rustc: https://doc.rust-lang.org/stable/nightly-rustc/rustc_lint/bu...

Cargo allows you to apply rustc lints to the entire project, albeit not dependencies (currently). If you want dependencies you need something like cargo-geiger instead. If you find unsafe that way, you can report it to the Rust Safety Dance people, who work with the community to eliminate unsafe in crates.
All of this is worlds ahead of the situation in C++.
However, let me apply the same nitpicking attitude to your argument about the ease with which unsafe can be kept out of a complex codebase. unsafe is pretty baked into the language, because there are constructs the Rust compiler either can't ever prove safe (e.g. a doubly-linked list), can't prove safe today (e.g. various accessors like split), or that are required for basic operations (e.g. allocating memory). Pretending you can forbid unsafe code wholesale in your dependency chain is not practical, and that's ignoring soundness bugs in the compiler itself. None of which detracts from the inherent advantage of safe-by-default.
It's not easy in Rust, but it's possible.
C++ has artificially limited how much it can improve the memory-safety picture because of its dedication to backwards compatibility. That is a totally valid choice on their part, but it does mean that C++ is largely out of the running for the kind of table-stakes memory safety the article talks about.
There are dozens of memory safe languages that already exist: Java, Go, Python, C#, Rust, ... And a whole host of other ones I'm not going to bother listing here.
None of them has only a single implementation. It only took a few minutes to find all of the following:
* https://en.wikipedia.org/wiki/Free_Java_implementations
* Go has gofrontend and GopherJS aside from the reference implementation
* Python has a whole slew of alternate implementations listed on the main Python web site: https://www.python.org/download/alternatives/
* C# has Mono, which actually implements the entire .NET framework
* Rust has Rust-GCC and gccrs
Before Microsoft opened-up C#, Mono was a completely independent alternative implementation.
Python has CPython (reference open source implementation), but also PyPy, MicroPython and several others.
Has Oracle dedicated those to the public domain in the meantime? Or at least licensed them extremely permissively?
More importantly, is there a public body that owns the spec?
To use your own terminology, this is clearly and objectively false. The US Supreme Court made no such finding.
What the court concluded was that even if Oracle held a copyright on the API, Google's use of it fell under fair use, so the question of whether the API was protected by copyright was moot.
But who cares if there's a public body who owns the specification? The Supreme Court ruled Google's use of the copyrighted APIs fell within fair use. That gives, within the US (other countries will have other legal circumstances) a basis for anyone to copy pretty much any language so long as they steer clear of the actual copyrighted source code (don't copy MS's C# source code, for instance) and trademark violations.
You claim to be a lawyer. I doubt your reading comprehension is really this bad, but just in case, I'll spell it out for you. You asked:
> More importantly, is there a public body that owns the spec?
And I answered:
> For C# there is the ECMA specification for it https://ecma-international.org/publications-and-standards/st...
Anyone can implement a compiler or interpreter for C# if they want, and there is a link to the standard for it. Is this clear enough for you?
Also, from an earlier comment you made a false claim and a strange reference.
You claimed that "most of" Java, Rust, C#, Python, and Go have only a single implementation. This is false. There are multiple implementations of each.
Second, you make a bizarre reference to "fad[ing] away like Pascal." Why do you think Pascal faded? I'll give a hint: It had nothing to do with being proprietary. At best that reference is a non sequitur, at worst it demonstrates more confusion on your part.
Something being proprietary means that it is owned. It means "relating to an owner or ownership"; "of, relating to, or characteristic of an owner or title holder"; "used, made, or marketed by one having the exclusive legal right"; "privately owned and managed and run as a profit-making organization."
It is literally illegal for me to start marketing The TorstenVL Rust Compiler. Because the language is proprietary.
Trademarks are annoying but I can hardly imagine they're what anyone is worried about when picking a language in this context, they're not what's going to cause a language to disappear.
What you can call something and whether you can legally make the thing or have a permissive license to an existing implementation are two completely unrelated things.
For example, you also can't make a C compiler and name it the "microsoft C compiler" due to your lack of trademark right. Does that mean C is also proprietary?
See also: The most famous open source project is trademarked https://www.linuxfoundation.org/legal/trademark-usage
If you still aren't convinced, you are definitely using a different definition of the word proprietary than everyone else.
> Proprietary software is software that grants its creator, publisher, or other rightsholder or rightsholder partner a legal monopoly by modern copyright and intellectual property law to exclude the recipient from freely sharing the software or modifying it, and—in some cases, as is the case with some patent-encumbered and EULA-bound software—from making use of the software on their own, thereby restricting their freedoms.[1]
> Proprietary software is a subset of non-free software, a term defined in contrast to free and open-source software; non-commercial licenses such as CC BY-NC are not deemed proprietary, but are non-free. Proprietary software may either be closed-source software or source-available software.[1][2]
The Python development team, via the LICENSE file in the GitHub repository, tells me
> All Python releases are Open Source (see https://opensource.org for the Open Source Definition). Historically, most, but not all, Python releases have also been GPL-compatible; the table below summarizes the various releases.
This license is also described on Wikipedia at https://en.wikipedia.org/wiki/Python_License .
Similarly, the reference Rust implementation (https://github.com/rust-lang/rust) is licensed under Apache 2.0 and MIT licenses.
In what sense is FOSS software a kind of non-FOSS software?
Exclusive legal right = not permissively licensed. It's really not a matter of jargon.
EDIT: nvm parent is confused and thinks trademarks are the same as nonfree.
Industry is seeing quantifiable improvements, eg: https://thehackernews.com/2024/09/googles-shift-to-rust-prog...
Nah, there's a famous WG21 (the C++ committee) paper named "ABI: Now or Never" which lays out just some of the ever-growing performance cost of choices the committee has made to preserve ABI. It explains that if this cost is to be considered a price paid for something, the committee needs to pick "Never"; if they instead want to stop paying the price, they need to pick "Now"; and if, as the author suspects, they don't actually care, they should pick neither, and C++ should be considered obsolete.
The committee, of course, picked neither, and lots of people who were there have since defended this, claiming it was a false dilemma - they were actually cleverly picking "Later", which that author didn't offer. Each time they've repeated this, more time has passed, yet they're still no closer to this "Later" ...
I think a big part of it is just inertia.