At the same time, let's not forget that this is a highly competent team with tons of experience. It's not guaranteed that other developers would have the same success.
> In the sense that there are a variety of requirements that need to be checked
Does "requirement" in this context refer to the same thing as a particular ISO/EN/... standard? Or do you mean that there are a multitude of standards, each of which make various demands and some of those might not yet be fulfilled?
My wording was much more ambiguous than I intended. What I meant to convey was that I don't know what hurdles there are beyond conforming to the relevant certifications. For example, in the automotive context, Ferrocene is ISO 26262 certified, but is that sufficient to be used in a safety-critical automotive context, or are there additional steps that need to be taken before a supplier could use Ferrocene to create a qualified binary?
It means a bunch of things: there are a multitude of standards, so just ISO 26262 isn't enough for some work, yes. But also, safety critical standards are different from, say, the C standard. With a programming language standard, you implement it, and then you're done. Choosing to use a specific C compiler is something an organization does of their own volition, and maybe they don't care a ton about standardization: being close enough to the standard is good enough, or extensions are fine. For example, the Linux project chose to use gcc-specific extensions for C, and has never been able to build with just standard C. Clang wasn't possible until it implemented those gcc extensions. This is all fine and normal in our world.
But safety critical standards are more like a standardized process for managing risk. So there's more wiggle room, in some sense. It's less "here is the grammar for a language" and more "here is the way that you quantify various risks in the development process." What this means is: say the government has a requirement that a car follow ISO 26262. How do you demonstrate that your car does this? Well, there are auditing organizations. The government says "hey, we trust TÜV SÜD to certify that your organization is following ISO 26262." And so, if you want to sell a car, you get in touch with TÜV SÜD or an equivalent organization, and get accredited. To put it in C terms, imagine if there was a body that you had to explain your C compiler's implementation-defined behavior to, and they'd go "yeah that makes sense" or "no, that's not a legitimate implementation." (By the way, I am choosing TÜV SÜD because that is the organization that certified Ferrocene.)
Okay, so, I want to sell a car. I need to write some software. I have to convince TÜV SÜD that I am compliant with ISO 26262. How do I do that? Well, I have to show them how I manage various risks. One of those risks is how my software is produced. One way to address that is to outsource part of my risk management by purchasing a license for a compiler that also implements ISO 26262. If I was willing to go to the work of certifying my own compiler, I could use whatever I want. But I'm in the car business, not the compiler business, so it makes more sense to purchase a compiler like that. But that's fundamentally what it is: outsourcing one aspect of demonstrating I have a handle on risk management.

Just because you have a certified compiler doesn't mean that any code produced by it is totally fine. It exists as one component of the larger project of demonstrating compliance. For example, all of the code I write may be bad. So while I don't have to demonstrate anything about the compiler other than that it is compliant, I'm gonna need to demonstrate that my code follows those guidelines. Ferrocene has not yet, in my understanding, qualified the Rust core or standard libraries, only the compiler, and so if I want to use those, that counts as similar to my own code. But this is what I'm getting at: there's just a lot more work to be done as part of the overall effort than "I purchased a compiler and now I'm good to go."
I hope that helps.
I want to take a step back: why does the automotive industry care about certain qualifications? Because legislation mandates that they follow them so that cars are "safe". In Germany the industry is required to follow whatever the "state of the art" is. This is not necessarily ISO 26262, but it might be. It might also be one of the many DIN norms, or even a combination thereof.
ISO 26262 concerns itself with mitigating risks and hazards introduced by safety-critical systems and poses a list of technical and non-technical requirements that need to be fulfilled. These concern both the final binaries and, to some degree, the development process. As you pointed out, the manufacturer ultimately needs to prove to some body that their binaries adhere to the standard. Use of a qualified compiler does not appear to be strictly necessary to achieve that. However, proving properties of a binary that is the result of a compilation process is prohibitively difficult. We'd rather prove properties of our source code.
However, proving properties of source code is only sufficient to show properties of the binary if the compilation process does not change the behavior of the program. This is where having a qualified compiler seems to come in. If my compiler is qualified, I may assume that it is sufficiently free of faults. Personally, I'd rather have a formally verified compiler, but that's obviously a much larger undertaking. (For C, CompCert [0] exists.)
Now, as you point out, none of this helps if my own code is bad. I still need to certify my own code, and Ferrocene can be a part of that. However, to circle back to my prior question of additional boxes that need to be checked: yes, any Rust code written (and any parts of core, alloc, and std that are used) needs to be certified, but Ferrocene's rustc is ready to be used in software aiming for ISO 26262 compliance today. No additional boxes pertaining to rustc need checking; although, qualified core and alloc would certainly be helpful.
I think these sorts of activities must come from outside, because the core Rust team currently has no experience in these areas.
In my job I get to speak to lots of people about Rust. Some are just starting out, some have barely ever heard of it, and then some people are running Rust silently in production at a very large company in a very serious product.
Yeah I've definitely heard of people "running (iron oxide) silently in production". Super ambiguous
Is that because the safety critical code requires the compiler/libraries/etc. to have some certification Rust currently lacks?
If not I don’t understand why it’s phrased that way.
A big part of the job of safety critical development is knowing the difference between box checking best practices/regulations and building actually safe systems so you can do both.
The complete solution depends on the application and the integrity level. It's not one size fits all, but rather about producing documentation showing you've considered various failure modes and developed mitigations for them (or otherwise accepted the risk). Sometimes that's binary analysis of the compiled output to ensure it meets some formal model, sometimes that's a formally verified compiler like CompCert, and so on.
An additional wrinkle is that the business model for high integrity compilers can also be a huge obstacle here. Some charge seats by how many people have modified the code that's running through the compiler. These aren't cheap licenses either, so companies have a large incentive not to use methodologies that require many eyes making all bugs shallow. There are also issues running these compilers in CI. They might require online license verification on every file, for example, or not allow ephemeral licensing at all.
But also I don’t work directly in these industries and so maybe my impression of this aspect of their processes is incorrect.
IMNSHO the standards were set so low so that C++ could clamber over the low bar, and it's a happy consequence in some sense that Rust has no trouble clearing it, but the bar should be raised considerably instead. Software crucial to the safe operation of an airliner ought to be proven correct, not just slapped together in any of the general purpose languages, Rust included, and then subjected to a bit more process than a web app gets.
Why would the resulting safety improvements be "marginal" ?
It is also ass-covering by demonstrating you followed "industry standard procedure". If you do something different, even if it is quantifiably better, it might make for a stressful deposition explaining why the worse but standard approach wasn't used instead.
Is that being worked on? Rust seems like a much better choice than C or C++ to me.
Although Ferrocene is working on that as well :) https://ferrocene.dev/en/
https://www.vector.com/int/en/news/news/safety-applications-...
So there's lots of focus on having a good alternative to C/C++
Do you know how they do that? Is it something special about rust, or some process improvement they're doing?
For example, rustc has a very large test suite that is run on every single commit. There is also a language reference that describes the language in some detail. One of the things Ferrocene brings to the table is the paperwork and auditing that the test suite corresponds to the specification. With other vendors developing their own toolchain, they would have to do all three parts of that work (well, in the case of C or C++, two-ish, not three, since they have a specification, but there are always extensions and platform-specific behavior to document) instead of just one. This isn’t the only thing they do, but it’s one example.
It’s not so much something special about Rust in an abstract sense, but in the practical sense that the Rust Project takes robust software engineering seriously, and being downstream of that is useful.
The article doesn't explain anything as to why Rust was chosen and why it was (supposedly) a win, as anything mentioned as a plus is superficial enough to be covered by dozens of other languages.
The qualified Ferrocene toolchain has "2 years of patch releases for select versions", so they have 2-year LTS releases, but that's a paid support plan.
Overall, the Rust community hasn't felt much need for official LTS releases.
Can you really call it stable if it is updated every 6 weeks?
This is what the Rust project means by stable. You can update and your code will continue building. (There's a bunch of documented caveats though.) Rust has been stable in this sense since 1.0, almost ten years.
Of course, you might have different semantics for "stable". Some seem to mean "rarely updating" or "each update is small" by that. In the latter sense, too, Rust has been becoming stabler over the last few years.
In the "rarely updating" sense, Rust is not going to change course. Frequent, time-based releases have demonstrably made the progress smoother, and in a sense, "stabler", as in, more predictable and bug-free.
What makes Rust special over other programming languages and operating systems and software systems that have LTS releases? For example, .NET and Ubuntu have LTS releases.
LTS releases are for things which end up in your runtime environment. Compilers typically don't have LTS releases because there isn't much room for critical bugs which aren't discovered for a long time. Rustc (as with most AOT compilers) does not attempt to be safe to use on untrusted source code, so a bug when it's given a malicious file isn't a security vulnerability. It's theoretically possible for rustc to have a codegen bug which causes security problems in the code which it compiles, but in practice such things don't really happen and there's nothing unsafe about using a ten or twenty-year-old build of a compiler.
LTS releases of the Rust standard library could potentially need to become a thing. It could have bugs which need to be backported to old versions, and I assume it just hasn't really come up yet.
There's Python 3.10 code out there that won't run under 3.13, especially so if it relies on components written in C that use Python's C API. If you didn't have LTS releases for Python, you'd have a choice between constantly having to port your code to run under the latest Python version or using an older, insecure one.
Rust doesn't have this problem, old Rust code should compile just fine under newer versions of the compiler and stdlib.
Let's take an existing edition first: in the Rust 2021 edition (what you get today out of the box when you just start writing Rust), the array types impl IntoIterator. Which makes sense; why shouldn't I be able to iterate over an array with a for loop?
But Rust 1.0 could not have provided this; in Rust 1.0 you can't make an array into an Iterator by value.
Now, if this was some obscure, rarely used feature, maybe you'd just say "who cares?", but this is IntoIterator, which is what makes for loops work, so it's high profile. So what actually happens is that a modern Rust compiler (one which even has a 2021 edition) knows that in earlier editions it should pretend that arrays did not impl IntoIterator when resolving method calls. You can loop over them just fine, but an `array.into_iter()` call still resolves to the slice impl, so code which used to mean one thing (because arrays didn't implement this) still means what it used to.
So that's an example of seamlessly making Rust 2021 edition have better semantics and yet all the old software still works.
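A minimal sketch of what changed: the method-call resolution is edition-dependent, but both behaviors can be requested explicitly in any edition of a recent Rust, since the array impl landed in 1.53.

```rust
fn main() {
    let arr = [1, 2, 3];

    // What `arr.into_iter()` resolved to in editions 2015/2018:
    // auto-ref kicks in and you get the slice impl, yielding `&i32`.
    let by_ref: Vec<&i32> = (&arr).into_iter().collect();

    // What `arr.into_iter()` resolves to in edition 2021:
    // the array impl (Rust 1.53+), yielding `i32` by value.
    let by_val: Vec<i32> = IntoIterator::into_iter(arr).collect();

    assert_eq!(by_ref, vec![&1, &2, &3]);
    assert_eq!(by_val, vec![1, 2, 3]);
}
```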
In 2024 edition the semantics of certain RPITs (Return Position Impl Trait, an existential type) with respect to lifetimes are expected to change. In most cases either what you wrote already is technically wrong but will now be correct, or, what you wrote was wrong but you got away with it and now you'll get told you got it wrong if you move editions.
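For a simplified, hypothetical flavor of the kind of code affected: in edition 2021, a return-position `impl Trait` does not capture the lifetime of a borrowed argument unless you say so (e.g. with `+ '_`), while in edition 2024 all in-scope lifetimes are captured by default. This sketch writes the capture explicitly, so it compiles either way:

```rust
// Hypothetical example: `+ '_` makes the lifetime capture explicit.
// In edition 2021, omitting it is an error here (the closure borrows
// `data`); in edition 2024, that capture would happen by default.
fn nonzero_indices(data: &[u8]) -> impl Iterator<Item = usize> + '_ {
    (0..data.len()).filter(move |&i| data[i] != 0)
}

fn main() {
    let bytes = [1u8, 0, 3];
    let found: Vec<usize> = nonzero_indices(&bytes).collect();
    assert_eq!(found, vec![0, 2]);
}
```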
Editions are not a panacea, but they're vastly better than the previous status quo; look at how miserable the situation is in Java, in C++, in Python. Vastly different approaches, worse results on all dimensions.
Additionally I have my doubts how long this will scale when Rust has like 40 years of history behind it.
We've been doing this for 10 years already, so if the cost is linear, it shouldn't be a problematic burden over another 30 years. It helps that editions don't need to be big. Just checked the codebase: there are exactly 70 gates for "at least edition X" (2018: 21, 2021: 19, 2024: 30) and 16 for "is edition 2015" (2 in the parser, most of the rest in name resolution).
As someone else already pointed out, this is incorrect. Installing the .NET SDK is how you install C# and F#, and both the C# and F# language versions are tied to .NET versions. Since .NET has LTS releases, so do C# and F#.
My original comment already addressed why I mentioned Ubuntu. I didn't claim it was a programming language, and it doesn't matter that it isn't. In fact, that was the point of mentioning it.
> Compilers typically don't have LTS releases because there isn't much room for critical bugs which aren't discovered for a long time.
Having an LTS release doesn't mean that it doesn't get any bug or security fixes. It normally just means that it doesn't get new features.
Not only do they depend on CLR changes, they also depend on the BCL that is shipped alongside.
ABI stability for one.
For one, providing approved and certified toolchains for safety-critical systems.
And speaks to the standards of quality that the project holds itself to.
As with C++, I'm not sure this makes coherent sense, because of the relationship between the language and some elements of the supporting libraries - with respect to `core` specifically, the Rust programming language itself requires parts of core.
Suppose you write a for loop. In Rust that's just sugar, and it's de-sugared into a loop that uses IntoIterator::into_iter, Iterator::next, Option::Some and Option::None which are all from the core library.
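As a rough sketch (the real desugaring in the compiler is a bit more involved, but it goes through exactly these core-library items):

```rust
fn main() {
    let arr = [10, 20, 30];

    // The sugared form:
    let mut sum_for = 0;
    for x in arr {
        sum_for += x;
    }

    // Approximately what the compiler desugars the for loop into:
    let mut sum_desugared = 0;
    let mut iter = IntoIterator::into_iter(arr);
    loop {
        match Iterator::next(&mut iter) {
            Some(x) => sum_desugared += x,
            None => break,
        }
    }

    assert_eq!(sum_for, 60);
    assert_eq!(sum_for, sum_desugared);
}
```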
The issue isn't more tests upstream; it's more the chain of responsibility for guaranteeing that results are connected to the specification, all of the paperwork that's required, and ensuring it is accurate.
Just a heads up, Rust does have further tooling. I wish they were more widely used
Here are three of them
https://github.com/creusot-rs/creusot
It seems like it already has support in the relevant safety critical standards, at least in the automotive space.
https://standardsworks.sae.org/standards-committees/safer-ru...
Instead, compare it to a nicer/stricter "C" equivalent like Zig. Now, Rust doesn't shine as much.
In 99% of cases outside embedded, a GC'ed language would be better. A long time ago someone started the meme that GC is slow, or that your users will notice the pauses, etc., but those fears are massively overblown.
But on embedded where resources are constrained and you can't run e.g. a JVM then Rust makes sense to me, since you can eliminate a whole class of errors from the get go.
And once you have that, you might as well use it to free memory too. The idea of "Rust but GC" is fundamentally nonsense, because for GC to make sense you'd first need to rip out so much of Rust's selling point to begin with.
It mostly seems to come from a perspective of "But surely all this GC research must be good for something, right? Anything?", rather than a concrete idea of where the GC would actually help.
Not only is there SPARK, of course, if one wants to do formal verification; Ada also has a proven track record in things like military applications. Of course, passenger cars don't need quite the same level of care as military stuff (although a lot of care is still needed, since cars weigh hundreds if not thousands of kilograms and can absolutely kill people), but I could still see Ada being useful even in the automotive industry.
PTC real-time JVMs are famously used in military deployments, and you surely don't want pauses in a battleship targeting computer system (Aegis), or missile tracking system (Thales).
I'm saying that not all embedded devices have the horsepower to run a JVM. Nobody's running Java on the automotive equivalent of an 8-bit AVR, for example.
>you surely don't want pauses in a battleship targeting computer system (Aegis), or missile tracking system (thales).
That'd be the 1% of times when it does matter that I alluded to previously.
In general, embedded systems suffer from a severe lack of tool-developer attention. People standardize on the very few things that reliably work, like C, C++, and printf debugging, because they don't have the bandwidth for anything more. Anything outside the beaten track has a high chance of running into showstopping bugs halfway through a project, and embedded teams are already struggling to find developer time in the typical situation of 1-10 people maintaining 1M+ LOC codebases.
Rust is the first real alternative to C and C++ in decades because it's actually trying to address the ecosystem issues.
I fully agree that it's a miracle any of the existing stuff works at all. I honestly have no idea how C and C++ developers make it work. Despite being the oldest and most used languages, the tooling is atrocious.