119 points by ladyanita22 4 days ago | 9 comments
  • daghamm4 days ago
    The fact that they could go from zero to proof of concept in a year and then into production just 2-3 years later is impressive. Given these results I feel the industry will slowly move to Rust once a certified toolchain for safety critical systems exists.

    At the same time, let's not forget that this is a highly competent team with tons of experience. It's not guaranteed that other developers can have the same success.

    • Cu3PO42 4 days ago
      Ferrocene [0] exists today! I'm mainly interested in the space from the point of view of a theoretical computer scientist, so I'm not sure if there are additional boxes that need to be checked legally, but it's looking pretty good to me.

      [0] https://ferrocene.dev/en/

      • steveklabnik4 days ago
        In the sense that there are a variety of requirements that need to be checked, there are. Each industry is different. Ferrocene has mostly been driven by automotive so far because that’s where the customers are, but they’ll get to all of them eventually.
        • Cu3PO42 2 days ago
          I really appreciate your reply!

          > In the sense that there are a variety of requirements that need to be checked

          Does "requirement" in this context refer to the same thing as a particular ISO/EN/... standard? Or do you mean that there are a multitude of standards, each of which makes various demands, and some of those might not yet be fulfilled?

          My wording was much more ambiguous than I intended. What I meant to convey was that I don't know what hurdles there are beyond conforming to the relevant certifications. I.e., in the automotive context, Ferrocene is ISO 26262 certified, but is that sufficient to be used in a safety-critical automotive context, or are there additional steps that need to be taken before a supplier could use Ferrocene to create a qualified binary?

          • steveklabnik2 days ago
            No worries! And once again, just to be clear, I don't work directly in these industries, so this is my current understanding of all of this, but some details may be slightly off. But the big picture should be correct.

            It means a bunch of things: there are a multitude of standards, so just ISO 26262 isn't enough for some work, yes. But also, safety critical standards are different from, say, the C standard. With a programming language standard, you implement it, and then you're done. Choosing to use a specific C compiler is something an organization does of their own volition, and maybe they don't care a ton about standardization, but being close enough to the standard is good enough, or extensions are fine. For example, the Linux project chose to use gcc-specific extensions for C, and hasn't ever been able to work with just standard C. Clang wasn't possible until they implemented those gcc extensions. This is all fine and normal in our world.

            But safety critical standards are more like a standardized process for managing risk. So there's more wiggle room, in some sense. It's less "here is the grammar for a language" and more "here is the way that you quantify various risks in the development process." What this means is that the government has a requirement that a car follows ISO 26262. How do you demonstrate that your car does this? Well, there are auditing organizations. The government says "hey, we trust TÜV SÜD to certify that your organization is following ISO 26262." And so, if you want to sell a car, you get in touch with TÜV SÜD or an equivalent organization, and get accredited. To put it in C terms, imagine if there was a body that you had to explain your C compiler's implementation-defined behavior to, and they'd go "yeah that makes sense" or "no, that's not a legitimate implementation." (By the way, I am choosing TÜV SÜD because that is the organization that certified Ferrocene.)

            Okay, so, I want to sell a car. I need to write some software. I have to convince TÜV SÜD that I am compliant with ISO 26262. How do I do that? Well, I have to show them how I manage various risks. One of those risks is how my software is produced. One way to do that is to outsource part of my risk management by purchasing a license for a compiler that also implements ISO 26262. If I was willing to go to the work of certifying my own compiler, I could use whatever I want. But I'm in the car business, not the compiler business, so it makes more sense to purchase a compiler like that. But that's fundamentally what it is: outsourcing one aspect of demonstrating I have a handle on risk management.

            Just because you have a certified compiler doesn't mean that any code produced by it is totally fine. It exists as one component of the larger project of demonstrating compliance. For example, all of the code I write may be bad. So while I don't have to demonstrate anything about the compiler other than that it is compliant, I'm gonna need to demonstrate that my code follows those guidelines. Ferrocene has not yet, in my understanding, qualified the Rust core or standard libraries, only the compiler, and so if I want to use those, that counts as similar to my own code.

            But this is what I'm getting at: there's just a lot more work to be done as part of the overall effort than "I purchased a compiler and now I'm good to go."

            I hope that helps.

            • Cu3PO42 10 hours ago
              That was a very in-depth reply and it is very appreciated. One point I did not expect is that core and alloc are not qualified yet. In any case, you motivated me to do some research of my own to fill in the gaps in my own understanding. What follows is my attempt to summarize all of this in the hope that you and anyone else reading it may also find it helpful.

              I want to take a step back: why does the automotive industry care about certain qualifications? Because legislation mandates that they follow them so that cars are "safe". In Germany the industry is required to follow whatever the "state of the art" is. This is not necessarily ISO 26262, but it might be. It might also be one of the many DIN norms, or even a combination thereof.

              ISO 26262 concerns itself with mitigating risks and hazards introduced by safety-critical systems and poses a list of technical and non-technical requirements that need to be fulfilled. These concern both the final binaries and, to some degree, the development process. As you pointed out, the manufacturer needs to ultimately prove to some body that their binaries adhere to the standard. Use of a qualified compiler does not appear to be strictly necessary to achieve that. However, proving properties of a binary that is the result of a compilation process is prohibitively difficult. We'd rather prove properties of our source code.

              However, proving properties of source code is only sufficient to show properties of the binary if the compilation process does not change the behavior of the program. This is where having a qualified compiler seems to come in. If my compiler is qualified, I may assume that it is sufficiently free of faults. Personally, I'd rather have a formally verified compiler, but that's obviously a much larger undertaking. (For C, CompCert [0] exists.)

              Now, as you point out, none of this helps if my own code is bad. I still need to certify my own code, and Ferrocene can be a part of that. However, to circle back to my prior question of additional boxes that need to be checked: yes, any Rust code written (and any parts of core, alloc, and std that are used) needs to be certified, but Ferrocene's rustc is ready to be used in software aiming for ISO 26262 compliance today. No additional boxes pertaining to rustc need checking, although qualified core and alloc would certainly be helpful.

              [0] https://www.absint.com/compcert/

      • daghamm4 days ago
        That looks very interesting.

        I think these sorts of activities must come from outside, because the core Rust team currently has no experience in these areas.

    • themoonisachees4 days ago
      Given this is already rolling off the production line, the toolchain they use must be ISO 26262 certified. More than the actual engineering (though there still is some), the hard part is getting that certification at all, or you can't put it in a car.
  • dostick4 days ago
    After reading part of the article I realised it's about Rust the programming language, not a rust-colored car called Rust, as the author obviously intended to confuse people with that image and the ambiguous title.
    • philwelch4 days ago
      Or, indeed, that Volvo’s manufacturing quality had declined to the point that the cars themselves were shipping with their bodies literally already rusting
      • kevinventullo4 days ago
        This is absolutely how I initially read it. I’m surprised the title was approved by their comms team!
    • pahbloo4 days ago
      Rust (the language) is rolling off the Volvo assembly (not the language) line
    • wright08 4 days ago
      Huh interesting observation. I wonder if anything in the first two sentences of the article can shed any light.

      In my job I get to speak to lots of people about Rust. Some are just starting out, some have barely ever heard of it, and then some people are running Rust silently in production at a very large company in a very serious product.

      Yeah I've definitely heard of people "running (iron oxide) silently in production". Super ambiguous

    • aschla4 days ago
      It's almost as if it was an intentional artistic choice in phrasing... /s
  • MBCook4 days ago
    The article mentions a few times that Rust is a good choice because the code is NOT safety critical.

    Is that because the safety critical code requires the compiler/libraries/etc. to have some certification Rust currently lacks?

    If not I don’t understand why it’s phrased that way.

    • adrianN4 days ago
      Yes, that's the reason. Certification requirements usually force you to use some ancient niche toolchain.
      • AlotOfReading4 days ago
        And those ancient, niche toolchains are horribly buggy as a rule. For example, a certain Santa Barbara-based vendor ships a high-integrity compiler that you can crash with entirely normal, standards-compliant C/C++.
        • SilasX4 days ago
          That ... feels like it defeats the purpose of designating a special, certified library for safety-critical code.
          • AlotOfReading4 days ago
            Would it make you feel better if I told you that these kinds of offerings usually also don't offer modern validation tools like sanitizers? They expect people to just wing it with whatever the proprietary IDE happens to give them.

            A big part of the job of safety critical development is knowing the difference between box checking best practices/regulations and building actually safe systems so you can do both.

            • yjftsjthsd-h4 days ago
              I would assume the solution is to run your code through multiple compilers/toolchains - in dev, CI can run the certified compiler to make sure your code stays compatible with it, but also run it through modern clang/gcc with every linter and static analysis tool you can think of. Then for the "official" prod builds you use the certified compiler. Automated testing should probably use both, and even compare the behavior of binaries from each chain to look out for bugs that only exist in one. That way you get most of the benefits of both worlds (not all, since you can only write code that all compilers can handle).
              • AlotOfReading4 days ago
                That's a common partial solution, but it's not complete. For example, it essentially requires you to be able to observe all safety-relevant behaviors of the code in both compilers. This is a much more comprehensive degree of validation than almost any system actually achieves. You also run into issues where the behaviors you're observing (e.g. low bits in floating point results) depend on intimate details of codegen that aren't identical between compilers.

                The complete solution depends on the application and the integrity level. It's not one size fits all, but rather about producing documentation showing you've considered various failure modes and developed mitigations for them (or otherwise accept the risk). Sometimes that's binary analysis of the compiled output to ensure it meets some formal model, sometimes that's a formally proven, decent compiler like CompCert, and so on.

                An additional wrinkle is that the business model for high integrity compilers can also be a huge obstacle here. Some charge seats by how many people have modified the code that's running through the compiler. These aren't cheap licenses either, so companies have a large incentive not to use methodologies that require many eyes making all bugs shallow. There are also issues running these compilers in CI. They might require online license verification on every file, for example, or not allow ephemeral licensing at all.

              • steveklabnik4 days ago
                It isn’t. The idea is to quantify the risk. A buggy toolchain is okay if the bugs are known and you can demonstrate how you’re mitigating them. All software has bugs, you cannot rely on the idea of bug free software to ensure safety, you must have a process that is robust in the face of problems.
                • yjftsjthsd-h4 days ago
                  Sure, but if you can do something to reduce the number of bugs it seems like you should still do that?
                  • steveklabnik4 days ago
                    I don’t disagree that in general, reducing the number of bugs is a good goal, but there’s always a limit to how far you go. It’s not like every line is formally proven, for example. Just because you can use a specific technique doesn’t mean that you must.

                    But also I don’t work directly in these industries and so maybe my impression of this aspect of their processes is incorrect.

                    • tialaramex3 days ago
                      I actually think far more of these safety of life standards should require formal proof.

                      IMNSHO the standards were set low so that C++ could clamber over the bar, and it's a happy consequence in some sense that Rust has no trouble clearing it, but the bar should be raised considerably instead. Software crucial to the safe operation of an airliner ought to be proven correct, not just slapped together in any of the general purpose languages, including Rust, and then subjected to a bit more process than for a web app.

                      • adrianN3 days ago
                        That would explode the costs for marginal safety improvements. I think the same effort spent on better requirement engineering would yield more payoff.
                        • tialaramex3 days ago
                          Why do you believe this would "explode the costs" ?

                          Why would the resulting safety improvements be "marginal" ?

                          • adrianN2 days ago
                            Because proving correctness for complex software is difficult and very few people have relevant experience. So it is labor intensive and you need to pay high wages. I believe the safety improvements are marginal because of my experience in safety critical development. Almost all the bugs we did not find by testing turned out to be problems with the requirements that led to interoperability issues. Proving the correct implementation of wrong requirements would not have helped.
          • estebank4 days ago
            It's not so much about ensuring the best toolchain is used, but rather about setting a lower bound on quality. By being slow-moving, it avoids the potential for temporary regressions.

            It is also ass-covering by demonstrating you followed "industry standard procedure". If you do something different, even if it is quantifiably better, it might make for a stressful deposition explaining why the worse but standard approach wasn't used instead.

          • darthrupert4 days ago
            If the compiler crashes, no safety-breaking code was generated.
      • Eplankton3 days ago
        And apparently Segger Studio says NO to the "old grandpa style toolchain" by introducing this several weeks ago: https://www.segger.com/news/pr-240927-ozone-support-rust/
      • MBCook4 days ago
        Thanks. I figured that was the most likely.

        Is that being worked on? Rust seems like a much better choice than C or C++ to me.

      • steveklabnik4 days ago
        Ferrocene is breaking several norms in this area, and “ancient toolchains” is one of them. They’re able to certify new ones remarkably quickly.
        • yjftsjthsd-h4 days ago
          > They’re able to certify new ones remarkably quickly.

          Do you know how they do that? Is it something special about rust, or some process improvement they're doing?

          • steveklabnik4 days ago
            I have some knowledge but as an outsider.

            For example, rustc has a very large test suite that is run on every single commit. There is also a language reference that describes the language in some detail. One of the things Ferrocene brings to the table is the paperwork and auditing that the test suite corresponds to the specification. With other vendors developing their own toolchain, they would have to do all three parts of that work (well, in the case of C or C++, two-ish, not three, since they have a specification, but there are always extensions and platform-specific behavior to document) instead of just one. This isn't the only thing they do, but it's one example.

            It’s not so much something special about Rust in an abstract sense, but in the practical sense that the Rust Project takes robust software engineering seriously, and being downstream of that is useful.

    • bmitc4 days ago
      The article also does absolutely nothing to motivate the choice of Rust. It's not like it's hard to find a better language than C or C++, so why Rust? Ada seems like it wasn't even considered, as the only mention of it was at the person's first job, some 15 years earlier. For example, if running on Android was a requirement, what made Rust a better choice than say Java or Scala?

      The article doesn't explain anything as to why Rust was chosen and why it was (supposedly) a win, as anything mentioned as a plus is superficial enough to be covered by dozens of other languages.

      • Larrikin4 days ago
        In Android development, Java is basically deprecated at this point and Scala never really worked well.
        • bmitc4 days ago
          So is it Kotlin?
          • Larrikin4 days ago
            Everything is Kotlin unless you have a specific need to use the NDK.
  • exabrial4 days ago
    Sorry for the dumb question, but does Rust have an LTS release? It seems like there are still a lot of nightly builds.
    • awestroke4 days ago
      There's a nightly build every night. There is a stable release every now and then. There are no official LTS releases. The question has been raised from time to time, but I see no real need for an LTS release. Just pick any version and keep using that specific version for as long as you want to. You could pick the same version as some LTS Linux distro version.
      • GolDDranks4 days ago
        The stable release is every 6 weeks, i.e. once in a month and a half. Only the newest version at the time is supported.

        The qualified Ferrocene toolchain has "2 years of patch releases for select versions", so they have 2-year LTS releases, but that's a paid support plan.

        Overall, the Rust community hasn't felt much need for official LTS releases.

        • daghamm4 days ago
          Serious question:

          Can you really call it stable if it is updated every 6 weeks?

          • GolDDranks4 days ago
            Equally serious answer: yes, if it doesn't break backwards compatibility between the updates.

            This is what the Rust project means by stable. You can update and your code will continue building. (There's a bunch of documented caveats though.) Rust has been stable in this sense since 1.0, almost ten years.

            Of course, you might have different semantics for "stable". Some seem to mean "rarely updating" or "each update is small" by that. In the latter sense, too, Rust has become stabler over the last few years.

            In the "rarely updating" sense, Rust is not going to change course. Frequent, time-based releases have demonstrably made the progress smoother, and in a sense, "stabler", as in, more predictable and bug-free.

      • bmitc4 days ago
        > but I see no real need for an LTS release

        What makes Rust special over other programming languages and operating systems and software systems that have LTS releases? For example, .NET and Ubuntu have LTS releases.

        • plorkyeran4 days ago
          .NET and Ubuntu are notably not programming languages. C# does not have LTS releases.

          LTS releases are for things which end up in your runtime environment. Compilers typically don't have LTS releases because there isn't much room for critical bugs which aren't discovered for a long time. Rustc (as with most AOT compilers) does not attempt to be safe to use on untrusted source code, so a bug when it's given a malicious file isn't a security vulnerability. It's theoretically possible for rustc to have a codegen bug which causes security problems in the code which it compiles, but in practice such things don't really happen and there's nothing unsafe about using a ten or twenty-year-old build of a compiler.

          LTS releases of the rust standard library could potentially need to become a thing. That could have bugs which need to be backported to old versions, and I assume it just hasn't really come up yet.

          • miki123211 4 days ago
            That, and then there's the fact that due to editions, modern Rust compilers can still compile older Rust code, which isn't always the case for other languages.

            There's Python 3.10 code out there that won't run under 3.13, especially so if it relies on components written in C that use Python's C API. If you didn't have LTS releases for Python, you'd have a choice between constantly having to port your code to run under the latest Python version or using an older, insecure one.

            Rust doesn't have this problem; old Rust code should compile just fine under newer versions of the compiler and stdlib.

            • pjmlp4 days ago
              The editions sales pitch never mentions the fact that it only applies to small grammar changes; your code will break if there are library or semantic changes across editions.
              • tialaramex4 days ago
                Yes and no. Yes, obviously this just works for "small" grammar changes such as introducing new keywords without fear (which is why Rust's async is called async, not "co_async") and doesn't need awkward keyword reuse like "requires requires" and "enum class". But it enables far more.

                Let's take an existing edition first: in the Rust 2021 Edition (what you get today out of the box when you just start writing Rust), the array types impl IntoIterator. Which makes sense: why shouldn't I iterate over this array with a for loop?

                But Rust 1.0 could not possibly have provided this; how would it work? It didn't: in Rust 1.0 you can't make an array into an Iterator.

                Now, if this was some obscure, rarely used feature, maybe you'd just say "who cares", but this is IntoIterator, which is used to make for loops work, so it's high profile. So in fact what happens is that a modern Rust compiler (in which there even is a 2021 edition) knows that in earlier editions it should pretend that arrays did not impl IntoIterator. You can loop over them just fine, but mysteriously they don't impl IntoIterator, so code which used to mean one thing (because they didn't implement this) still means what it used to.

                So that's an example of seamlessly making Rust 2021 edition have better semantics and yet all the old software still works.
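
                Concretely, it looks roughly like this (my own simplified sketch, not the compiler's exact rules):

                  fn main() {
                      let arr = [1, 2, 3];

                      // Edition 2021: arrays impl IntoIterator by value, so `x` is an i32 here.
                      // Editions 2015/2018: to keep old code meaning what it always meant,
                      // `arr.into_iter()` still auto-refs through the slice impl, so `x` is a &i32.
                      for x in arr.into_iter() {
                          println!("{x}");
                      }

                      // Iterating the array by value directly is what the new impl enables;
                      // as far as I know a modern compiler accepts this form on any edition.
                      for x in arr {
                          println!("{x}");
                      }
                  }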

                In the 2024 edition the semantics of certain RPITs (Return Position Impl Trait, an existential type) with respect to lifetimes are expected to change. In most cases, either what you wrote was technically wrong but will now be correct, or what you wrote was wrong but you got away with it, and now you'll get told you got it wrong if you move editions.
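
                To give a concrete flavour of the kind of change I mean, here's my own sketch based on my reading of the 2024 lifetime capture rules (details may be slightly off):

                  // Edition 2021: the hidden type borrows from `v`, but that lifetime isn't
                  // named in the bounds, so you have to spell out the `+ '_` or it won't compile.
                  fn bytes_2021(v: &[u8]) -> impl Iterator<Item = u8> + '_ {
                      v.iter().copied()
                  }

                  // Edition 2024: in-scope lifetimes are captured by default, so the same body
                  // is accepted without the annotation (and `use<..>` exists to opt back out).
                  fn bytes_2024(v: &[u8]) -> impl Iterator<Item = u8> {
                      v.iter().copied()
                  }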

                Editions aren't a panacea, but they're vastly better than the previous status quo; look at how miserable the situation is in Java, in C++, in Python. Vastly different approaches, worse results on all dimensions.

                • pjmlp3 days ago
                  I still don't see it being much different from using language switches, especially when language semantics, ABI across versions, and standard library are part of the whole upgrade story.

                  Additionally, I have my doubts about how well this will scale when Rust has, like, 40 years of history behind it.

                  • estebank3 days ago
                    > I have my doubts how long this will scale when Rust has like 40 years of history behind it.

                    We've been doing this for 10 years already, so if the cost is linear, it shouldn't be a problematic burden over another 30 years. It helps that editions don't need to be big. I just checked the codebase and there are exactly 70 gates for "at least edition X" (21 for 2018, 19 for 2021, and 30 for 2024) and 16 for "is edition 2015" (2 in the parser, most of the rest in name resolution).

          • bmitc3 days ago
            > .NET and Ubuntu are notably not programming languages. C# does not have LTS releases.

            As someone else already pointed out, this is incorrect. Installing the .NET SDK is how you install C# and F#, and both the C# and F# language versions are tied to .NET versions. Since .NET has LTS releases, so do C# and F#.

            My original comment already addressed why I mentioned Ubuntu. I didn't claim it was a programming language, and it doesn't matter that it isn't. In fact, that was the point of mentioning it.

            > Compilers typically don't have LTS releases because there isn't much room for critical bugs which aren't discovered for a long time.

            Having an LTS release doesn't mean that it doesn't get any bug or security fixes. It normally just means that it doesn't get new features.

          • pjmlp4 days ago
            Yes it does, C#, F#, VB, C++/CLI versions are tied to specific .NET versions.

            Not only do they depend on CLR changes, they also depend on the BCL that is shipped alongside.

        • AlotOfReading4 days ago
          C++ is the most direct comparison. Neither GCC nor Clang have LTS releases. MSVC does via Visual Studio, but I've never seen anyone list it as a benefit vs the other two. What advantage does LTS have for compiler toolchains if no one seems to want it?
          • pjmlp4 days ago
            They kind of do, that is why you get GCC 10 when GCC 15 is around the corner.

            ABI stability for one.

      • consteval4 days ago
        > but I see no real need for an LTS release

        For one, providing approved and certified toolchains for safety-critical systems.

  • slicktux4 days ago
    So, will Rust ever have standards for safety critical systems like C/C++ does? For example, MISRA for car programs? Or is the migration or certification too expensive and time consuming?
    • BD103 4 days ago
      You may be interested in Ferrocene[0], a version of the Rust toolchain that is vetted for critical systems like automobiles. It's offered by Ferrous Systems, the same people who help maintain Rust Analyzer (the de-facto LSP for Rust).

      [0]: https://ferrocene.dev/en/

      • AlotOfReading4 days ago
        Note that what Ferrocene is currently offering is a toolchain. Things like core and std are not part of the current certification package. It's an incredibly exciting offering, but it's not quite ready to ship today. The fact that the certified toolchain is just the normal, publicly available one is great too.
        • estebank4 days ago
          > The fact that the certified toolchain is just the normal, publicly available one is great too.

          And speaks to the standards of quality that the project holds itself to.

          • steveklabnik4 days ago
            Yes, Ferrocene is able to be qualified more easily in part by how good upstream development practices are. A lot of qualified compilers need to be written from scratch because the existing compilers do not do testing and other things that are required for qualification, but the upstream Rust project has a development process far closer to a safety qualified compiler than not. It’s something worth celebrating about rust as a project.
        • tialaramex4 days ago
          > Things like core and std are not part of the current certification package

          As with C++ I'm not sure this makes coherent sense because of the relationship between the language and some elements of the supporting libraries - with respect to `core` specifically, the Rust programming language requires some of core.

          Suppose you write a for loop. In Rust that's just sugar, and it's de-sugared into a loop that uses IntoIterator::into_iter, Iterator::next, Option::Some and Option::None which are all from the core library.
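
          Roughly (this is my simplified sketch, not the compiler's exact expansion), the desugaring looks something like:

            fn main() {
                let xs = vec![1, 2, 3];

                // `for x in xs { println!("{x}"); }` desugars to roughly this:
                let mut iter = IntoIterator::into_iter(xs);
                loop {
                    match Iterator::next(&mut iter) {
                        Some(x) => println!("{x}"),
                        None => break,
                    }
                }
            }

          Every one of those names lives in `core`, which is why I don't think you can cleanly separate the language from it.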

          • AlotOfReading3 days ago
            Hence the warning. They're not separable, but that's what the current state of the offering is. If you ship anything depending on core or std, the burden is on you to ensure those parts of your code are appropriately qualified until they can get the situation sorted.
      • bmitc4 days ago
        I was recently looking into things along these lines, and my understanding of Ferrocene is that it's just regular Rust with extra tests added. I'd love to know if that's accurate or not. If it is accurate, I've wondered why Rust doesn't just include those tests in the core build.
        • steveklabnik4 days ago
          In my understanding, no additional tests, but some additional platform support.

          The issue isn't more tests upstream, it’s more the chain of responsibility for guaranteeing that results are connected to the specification and all of the paperwork that’s required, and ensuring it is accurate.

      • slicktux4 days ago
        Thank you for the reference! Will definitely look into this!
    • howenterprisey4 days ago
      I looked into MISRA specifically for my previous job and Rust effectively complies with most of it out of the box, and what's left is either inapplicable or not difficult to catch with further tooling.
    • monocasa4 days ago
      The Ferrocene variant of the toolchain is qualified for ISO 26262 at ASIL D, and this blog post is about Rust in the ECU of a production vehicle from a large manufacturer.

      It seems like it already has support in the relevant safety critical standards, at least in the automotive space.

      • RealityVoid4 days ago
        Different kinds of standards. MISRA is a coding guidelines standard; ISO 26262 is a functional safety standard that concerns the whole system, including processes and the tools used (this is where the certifications from Ferrocene and HighTec step in). There is overlap, but they mostly do different things.
    • RealityVoid4 days ago
      SAE has a coding standard in the works for Rust in safety critical systems.

      https://standardsworks.sae.org/standards-committees/safer-ru...

  • _giorgio_4 days ago
    You don't want to see any rust in a brand new car.
  • truetraveller4 days ago
    When I see a Rust post, I don't buy into the hype. It's usually compared with raw C, even C++. This is not a good comparison IMHO, since C/C++ has too many footguns/confusions.

    Instead, compare it to a nicer/stricter "C" equivalent like Zig. Now, Rust doesn't shine as much.

    • stackghost4 days ago
      I actually think embedded is one of the few places where rust makes sense.

      In 99% of cases outside embedded, a GC'ed language would be better. A long time ago someone started the meme that GC is slow, or that your users will notice the pauses, etc., but those fears are massively overblown.

      But on embedded, where resources are constrained and you can't run e.g. a JVM, Rust makes sense to me, since you can eliminate a whole class of errors from the get-go.

      • Nullabillity4 days ago
        Ultimately, a lot of Rust's safety features (which I'd argue are desirable everywhere, regardless of performance) require the sort of precise tracking people complain about.

        And once you have that, you might as well use it to free memory too. The idea of "Rust but GC" is fundamentally nonsense, because for GC to make sense you'd first need to rip out so much of Rust's selling point to begin with.

        It mostly seems to come from a perspective of "But surely all this GC research must be good for something, right? Anything?", rather than a concrete idea of where the GC would actually help.

      • sham1 4 days ago
        I would imagine that the kinds of embedded systems found in cars would probably benefit from Ada instead.

        Not only is there of course SPARK if one wants to do formal verification; Ada also has a proven track record in things like military applications. Of course, passenger cars don't have quite the same level of care needed as military stuff (although a lot of care is still needed, since cars are hundreds if not thousands of kilograms and can absolutely kill people), but I could still see Ada being useful even in the automotive industry.

        • kstrauser4 days ago
          I'm not sure if that's true. I've never been around things like airplane flight systems. I have been trained on other military hardware, with instructions on how to reset it and under which circumstances. I wouldn't be utterly shocked if an F-16 pilot learns that when such and such happens, you flip this switch back and forth 3 times and then turn this other knob to reboot the computer. You would never, ever get that to be an acceptable procedure for a car's anti-lock brakes.
        • stackghost4 days ago
          Yeah I agree that Ada is probably technically superior but when has that ever swayed anyone.
          • carlmr4 days ago
            I'd say Ada is a great choice, but I wouldn't call it technically superior. Some Ada features are nice (e.g. delta types for fixed point calculations), some Rust features are nice (e.g. very expressive ML inspired type system with sum types, borrow checker helps with concurrency and memory lifetimes, ...).
      • pjmlp4 days ago
        Yes you can, that is the whole business of PTC, Aicas and microEJ.

        PTC real-time JVMs are famously used in military deployments, and you surely don't want pauses in a battleship targeting computer system (Aegis) or a missile tracking system (Thales).

        • stackghost3 days ago
          >Yes you can, that is the whole business of PTC, Aicas and microEJ.

          I'm saying that not all embedded devices have the horsepower to run a JVM. Nobody's running Java on the automotive equivalent of an 8-bit AVR, for example.

          >you surely don't want pauses in a battleship targeting computer system (Aegis), or missile tracking system (thales).

          That'd be the 1% of times when it does matter that I alluded to previously.

          • pjmlp3 days ago
            An 8-bit AVR can't even run proper ISO C.
      • bmitc4 days ago
        I even think that on embedded, the worry about not being able to use a GC language is overblown. C#, F#, OCaml, Java (and I suppose all JVM languages), Erlang, Elixir, etc. can run on embedded devices, including some microcontrollers. In my opinion, the software industry has buried its head in the sand, thinking that only "hard-core" languages can be run on embedded systems. Even LabVIEW could be a choice if the limited SoM hardware selection is okay. It's just as performant (if not more so for multicore systems) and infinitely safer than C/C++.
        • AlotOfReading4 days ago
          There are embedded Java systems. I hope you never have to work with any.

          In general, embedded systems suffer from severe lack of tool developer attention. People standardize on the very few things that reliably work like C, C++, and printf debugging because they don't have the bandwidth for anything more. Anything outside the beaten track has a high chance of running into showstopping bugs halfway through a project and embedded teams are already struggling to find developer time in the typical situation of 1-10 people maintaining 1M+ LOC codebases.

          Rust is the first real alternative to C and C++ in decades because it's actually trying to address the ecosystem issues.

          • Eplankton3 days ago
            Not even "printf" is included in any standard, I'm afraid. Arm's Keil MDK toolchain has a typical implementation of a bare-metal C/C++ environment called microlib, but unfortunately it doesn't support an RTOS because of the missing re-entrancy, so you have to provide or use a third-party alternative.
          • bmitc3 days ago
            I think the thing is, if you don't use C or C++, then you don't need a million lines of code.
            • AlotOfReading3 days ago
              I'm not stopping you from trying to put your theories to the test and I'm not saying the current reality is good. However, I think you'd be surprised how complex some of these systems are. An automotive system like the article is describing is a distributed realtime system of anywhere from dozens to hundreds of networked processors built without traditional operating system support. It's frankly a miracle they work at all.
              • bmitc3 days ago
                While I don't work in the automotive field, I've worked on adjacent-esque distributed systems with some of the shared protocols (e.g., CANopen). Part of my lament here is that almost no effort has been put into anything other than running C and C++ on embedded systems. While hard real-time systems are a thing, other pieces have often still been implemented in C and C++, which is a shame. And it's also a shame that more effort hasn't been put into realtime garbage collectors, especially in this age of multicore embedded CPUs.

                I fully agree that it's a miracle any of the existing stuff works at all. I honestly have no idea how C and C++ developers make it work. Despite being the oldest and most used languages, the tooling is atrocious.

    • timeon4 days ago
      > Now, Rust doesn't shine as much.

      Except for memory safety.

    • itishappy4 days ago
      Zig looks quite interesting, and I'd love to read about how it's currently being used in production too! Anybody got that article?
      • tmikaeld4 days ago
        The most famous project is Bun, the Javascript runtime
    • pjmlp4 days ago
      Zig is a Modula-2 type system with revamped syntax for C folks.