194 points by vortex_ape 5 days ago | 20 comments
  • lordnacho5 days ago
    For me, there's a headline draw, which is the borrow checker. Really great.

    But apart from that, Rust is basically a bag of sensible choices. Big and small stuff:

    - Match needs to be exhaustive. When you add something to the enum you were matching, it chokes. This is good.

    - Move by default. If you came from c++, I think this makes a lot of sense. If you have a new language, don't bring the baggage.

    - Easy way to use libraries. For now it hasn't splintered into several ways to build yet, I think most people still use cargo. But cargo also seems to work nicely, and it means you don't spend a couple of days learning cmake.

    - Better error handling. There's a few large firms that don't use exceptions in c++. New language with no legacy? Use the Ok/Error/Some/None thing.

    - Immutable by default. It's better to have everything locked down and have to explicitly allow mutation than just have everything mutable. You pay every time you forget to write mut, but that's pretty minor.

    - Testing is part of the code, doesn't seem tacked on like it does in c++.
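
    A compact sketch of the error-handling and mutability points, with made-up names (Result/Option instead of exceptions and null, and `mut` as the explicit opt-in):

      fn parse_port(s: &str) -> Result<u16, std::num::ParseIntError> {
          s.parse::<u16>() // Ok(port) or Err(parse error), no exceptions
      }

      fn first_even(xs: &[u16]) -> Option<u16> {
          xs.iter().copied().find(|&x| x % 2 == 0) // Some(x) or None, no null
      }

      fn main() -> Result<(), std::num::ParseIntError> {
          let port = parse_port("8080")?;  // `?` propagates the Err upward
          let ports = vec![port];          // bindings are immutable by default...
          let mut all = ports.clone();     // ...mutation has to be opted into with `mut`
          all.push(parse_port("9090")?);
          println!("{:?} {:?}", first_even(&all), all);
          Ok(())
      }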

    • n144q5 days ago
      > Match needs to be exhaustive.

      When I see people mention C++ with MISRA rules, I just think -- why do we need all these extra rules, often checked by a separate static analysis tool and enforced manually (that comes down to audit/compliance requirement), when they make perfect sense and could be done by the compiler? Missing switch cases happens often when an enum value is modified to include one extra entry and people don't update all code that uses it. Making it mandatory at compiler level is an obvious choice.
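
      For anyone who hasn't seen it, this is what that failure mode looks like in Rust (hypothetical enum; E0004 is the compiler's non-exhaustive-patterns error):

        enum State { Idle, Running }   // later grows a `Paused` variant

        fn describe(s: State) -> &'static str {
            match s {
                State::Idle => "idle",
                State::Running => "running",
                // Once `Paused` is added to the enum, this match stops compiling:
                // error[E0004]: non-exhaustive patterns: `State::Paused` not covered
            }
        }

        fn main() {
            for s in [State::Idle, State::Running] {
                println!("{}", describe(s));
            }
        }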

      • jpc05 days ago

          -Wswitch
            Warn whenever a switch statement has an index of enumerated type and lacks a case for one or more of the named codes of that enumeration. (The presence of a default label prevents this warning.) case labels that do not correspond to enumerators also provoke warnings when this option is used, unless the enumeration is marked with the flag_enum attribute. This warning is enabled by -Wall.
        
        <https://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html#inde...>

        The compiler can do that... And it's included in -Wall. It's not on by default but is effectively on in any codebase where anyone cares...

        Please don't argue "but I don't need to add a flag in Rust". It's not Rust; the standards committee has reasons it considers valid for the current default, and honestly you're welcome to implement your own compiler that turns it on by default, just like the Rust compiler, which has no standard because "the compiler is the standard".

        • tialaramex5 days ago
          MISRA won't be OK with that.

          MISRA requires that you explicitly write the default reject. So -Wswitch doesn't get it done, even though I agree that if C had standardized this requirement (which it did not), that would get you what you need.

          C also lacks Rust's non_exhaustive trait. If the person making a published Goose type says it's non-exhaustive then in their code nothing changes, all their code needs to account for all the values of type Goose as before - but everybody else using that type must accept that the author said it's non-exhaustive, so they cannot account for all values of this type except by writing a default handler.

          So e.g if I publish an AmericanPublicHoliday type when Rust 1.0 ships in 2015, and I mark it non-exhaustive since by definition new holidays may be added, you can't write code to just handle each of the holidays separately, you must have a default handler. When I add Juneteenth to the type, your code is fine, that's a holiday you must handle with your default handler, which you were obliged to write.

          On the other hand IPAddr, the IP address, is an ordinary exhaustive type, if you handle both IPv6Addr and IPv4Addr you've got a complete handling of IPAddr.
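
          In code the holiday example looks roughly like this (a sketch; the attribute only has teeth across crate boundaries, so imagine the enum living in a published library):

            #[non_exhaustive]               // author: "more variants may appear later"
            pub enum AmericanPublicHoliday {
                NewYear,
                IndependenceDay,
                // Juneteenth can be added later without breaking downstream crates
            }

            // In a downstream crate, a wildcard arm is required even if every
            // current variant is listed, so new holidays land in the default handler:
            pub fn is_summer(h: &AmericanPublicHoliday) -> bool {
                match h {
                    AmericanPublicHoliday::IndependenceDay => true,
                    _ => false,
                }
            }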

          • lnkl5 days ago
            "MISRA requires that you explicitly write the default reject."

            You can always use -Wswitch-enum then.

          • tialaramex5 days ago
            Ugh too late to catch myself, non_exhaustive is an attribute, not a trait.
        • IshKebab5 days ago
          > Please don't argue about "but I don't need to add a flag in Rust"

          Why not? It's a big issue. You say it's "on in any codebase where anyone cares", and I agree with that but in my experience most C++ developers don't care.

          I regularly have to work with other people's C++ where they don't have -Wall -Werror. It's never an issue in Rust.

          Also I don't buy that they couldn't fix this because it would be a breaking change. That's just an excuse for not bothering. They've made backwards-incompatible changes in the past, e.g. removing dynamic exception specifications, changing `auto`, changing the behaviour around operator==. They could key the new behaviour to the standard version, just like Rust uses Editions.

          Of course they won't, because the C++ standards committee is still very much "we don't need seatbelts, just drive well like me".

          • atq21195 days ago
            > I regularly have to work with other people's C++ where they don't have -Wall -Werror.

            To be fair, -Werror is kind of terrible. The set of warnings is very sensitive to the compiler version, so as soon as people work on the project with more than one compiler or even more than one version of the same compiler, it just becomes really impractical.

            An acceptable compromise can be that -Werror is enabled in CI, but it really shouldn't be the default at least in open-source projects.

            • j1elo5 days ago
              A common trope that is probably ignored by or even unknown to uninformed C/C++ programmers is that -Werror should be used for debug builds (as you use during development) and never for release builds (as otherwise it will most probably break compilation in future releases of the compiler)
              • motorest5 days ago
                > A common trope that is probably ignored by or even unknown to uninformed C/C++ programmers is that -Werror (...)

                Not even that. -Wall -Werror should be limited to local builds, and should never touch any build config that is invoked by any pipeline.

                • IshKebab5 days ago
                  No you definitely want to enforce this in CI.
                  • motorest5 days ago
                    > No you definitely want to enforce this in CI.

                    No, not really. It makes absolutely no sense to block builds for irrelevant things such as passing unused arguments to a function.

                    • IshKebab4 days ago
                      > irrelevant things such as passing unused arguments to a function.

                      That's not irrelevant. I have seen many bugs detected by that warning.

                    • jpc05 days ago
                      -Werror= lets you decide which warnings become errors. No reason to enable it globally
            • IshKebab5 days ago
              Yes that is the standard practice for open source projects (where it happens at all), but again that's another way in which C++ warnings are not even close to Rust errors.
          • motorest5 days ago
            > I regularly have to work with other people's C++ where they don't have -Wall -Werror.

            I think you inadvertently showed the problem with this sort of thing: it's simply bad practice and a notorious source of problems. With -Wall -Werror you can turn any optional nit remark into a blocked pipeline requiring urgent maintenance. I know because I had to work long hours on a C++ project that suddenly failed to build because a moron upstream passed -Wall -Werror as transitive build flags. We're talking about production pipelines being blocked due to things like function arguments being declared but not used.

            Sometimes I wonder if these discussions on the virtues of blindly leaning on the compiler are based on solid ground or are instead opinionated junior devs passing off their Skinner box as some kind of operational excellence.

          • stefan_5 days ago
            -Wall -Werror is a nice idea that university professors will tell you about, and it collides at first contact with the real world, where you are including 3rd-party headers that then spit out 50 pages of incomprehensible GCC "overflow analysis" warnings.
            • IshKebab5 days ago
              You can use `-isystem` for that. It isn't particularly well supported by C++ build systems, but also your assertion that third party headers don't compile with `-Wall -Werror` doesn't match my experience. Usually they're fine.

              > GCC "overflow analysis" warnings

              I think I've seen this with `fmt`, and it was a GCC compiler bug. Not much you can do about that.

        • SubjectToChange5 days ago
          >...honestly your welcome to implement your own compiler that turns it on by default just like the rust compiler which has no standard because "the compiler is the standard".

          The C and C++ standards are quite minimal and whether or not an implementation is "compliant" or not is often a matter of opinion. And unlike other language standards (e.g. Java or Ada) there isn't even a basic conformance test suite for implementations to test against. Hence why Clang had to be explicitly designed for GCC compatibility, particularly for C++.

          Merely having a "language standard" guarantees very little. For instance, automated theorem proving languages like Coq (Rocq now, I suppose)/Isabelle/Lean have no official language standard, but they are far more defined and rigorous than C or C++ ever could be. A formal standard is a useful broker for proprietary implementations, but it has dubious value for a language centered around an open source implementation.

        • n144q5 days ago
          > It's not on by default but is effectively on in any codebase where anyone cares...

          Then why is this a MISRA rule by itself? Shouldn't it just be "every codebase must compile with -Wall or equivalent"?

          • jpc03 days ago
            I wouldn't be surprised if you could justify in a review that compiling with -Wall (or probably something more explicit) catches this and therefore you can disregard the rule.

            Not all compilers have a -Wall equivalent. GCC, Clang and MSVC do, but RANDOM_EMBEDDED_CHIP's custom compiler might not, and that is a valid target for MISRA compliance.

            I doubt every single thing that needs MISRA gets compiled with an industry standard compiler; I wouldn't be surprised if GCC is the exception for most companies targeting MISRA compliance.

        • felipellrocha5 days ago
          but I don't need to add a flag in Rust
      • tialaramex5 days ago
        MISRA's rules are a real mix in three interesting senses

        Firstly, in terms of what the rules require. Some MISRA rules are machine checkable. Your compiler might implement them or, more likely, a MISRA auditing tool you bought does so. Some MISRA rules need human insight in practice. Is this OK, how about that? A good code review process should be able to catch these, if the reviewers are well trained. But a final group are very vague, almost aspirational, like the documentation requirements, at their best these come down to a good engineering lead, at their worst they're completely futile.

        Secondly, in terms of impact: studies have shown that some MISRA rules seem to have a real benefit; codebases which follow those rules have lower defect rates. Some are neutral, and some are net negative: code which followed those MISRA rules had more defects.

        Thirdly in terms of what they do to the resulting software. Some MISRA rules are reasonable choices in C, you might see a good programmer do this without MISRA prompting just because they thought it was a good idea. Some MISRA rules prohibit absolute insanity. Stuff like initializing a variable in one switch clause, then using it in a different clause! Syntactically legal, and obviously a bad idea, nobody actually does that so why write a whole rule to prohibit it? But then a few MISRA rules require something no reasonable C programmer would ever write, and for a good reason, but it also just doesn't really matter. Mostly this is weird style nits, like if your high school English essay was marked by a NYT copy editor and got a D minus because you called it NASCAR not Nascar. You're weird NYT, you're allowed to be weird but that's not my fault and I shouldn't get penalized.

      • stefan_5 days ago
        Because MISRA is also insane and has long bled into a middle manager's dream of a style guide? It would make for a terrible language (that ironically isn't much more "secure", "safe", or "reliable")
    • tsimionescu5 days ago
      > Better error handling. There's a few large firms that don't use exceptions in c++. New language with no legacy? Use the Ok/Error/Some/None thing.

      I think this is still very much a debatable point. There are disadvantages to exceptions, mostly around code size and performance. But they are still the only error handling mechanism that anyone has found that defaults to adding enough context to errors to actually be useful (except of course in C++, because C++ doesn't like having useful constructs).

      Rust error handling tends towards not adding any kind of context whatsoever to errors - if you use the default error mechanisms and no extra libraries. That is, if you have a call stack three functions deep that uses `?` for error handling, at the top level you'll only get an error value, you'll have no idea where the value originated from, or any other information about the execution path. This can be disastrous for actually debugging hard to reproduce errors.
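
      Concretely, the failure mode being described is something like this (a made-up three-layer example using only std):

        use std::{fs, io};

        fn read_config() -> Result<String, io::Error> {
            fs::read_to_string("config.toml")   // this layer knows the file name...
        }

        fn load() -> Result<String, io::Error> {
            Ok(read_config()?)                   // ...but `?` forwards only the error value
        }

        fn main() -> Result<(), io::Error> {
            let _cfg = load()?;
            // If the file is missing, all you see is something like:
            //   Error: Os { code: 2, kind: NotFound, message: "No such file or directory" }
            // -- no file name, no call path, unless you add that context yourself.
            Ok(())
        }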

      • mr_00ff005 days ago
        I feel like your last point is the exact issue with exceptions, not rust’s errors. Exceptions are like having “?” on every single line.
        • tsimionescu5 days ago
          When an exception happens, you get a stack trace somewhere in your logs (unless you do something really weird). That doesn't always include all the information you'd like (for example, if the error happened in a loop, you don't get info about the loop variable).

          In contrast, unless you manually add context to the error (or use a library that does something like this for you, overriding the default ? behavior), you won't get any information about where an error occurred at all.

          Sure, with exceptions, you don't know statically where an exception might happen. But at runtime, you do get the exact information. So, if the error is hard to reproduce, you still have information about where exactly it occurred in those rare occasions where it happened.

          • tialaramex5 days ago
            > When an exception happens, you get a stack trace somewhere in your logs

            OK, so, if I write the canonical modern C++ Hello World, execute it against an environment where the "standard output" doesn't exist, where does this stack trace get recorded? Maybe it depends on the compiler and standard library implementation somehow?

            My impression is that in reality C++ just ignores the problem and carries on, so actually there was no stack trace, no logging, it just didn't work and too bad. Unsurprisingly people tasked with making things work prefer a language which doesn't do that.

            • HumanOstrich5 days ago
              How does any other language deal with POSIX standard I/O streams or the lack thereof? Definitely not a C++ or exceptions problem. Which language lets you compile a "Hello, World!" program and then execute it against a non-POSIX-compatible environment and get the correct output... somewhere?

              If you're executing against a POSIX-compatible environment, then stdin, stdout, and stderr are expected to exist and be configured properly if you want them to work[1].

              If you're executing against some other environment, like webassembly or an embedded system, then you'll already (hopefully) be using some logging and error handling approach that sends output to the correct place. Doesn't matter if you're using C, C++, .NET, Rust, Zig, etc.

              For example, webassembly is an environment without stdio streams. It's your responsibility to make sure there is a proper way to record output, even if it's just a compatibility layer that goes to console.log.

              [1]: https://pubs.opengroup.org/onlinepubs/9799919799/functions/s...

              • tialaramex5 days ago
                The other languages do not (as my parent claimed) write stacktraces to a log somehow. I suspect that in reality they've omitted to explain that they're expected to write all the C++ code to make that stacktrace and write it to a log, but once you add those steps you're back to parity with Rust; the Rust programmers can write a stacktrace to a log too.

                In the specific case of "Hello, World" it's more embarrassing. The Rust Hello World does indeed experience and report errors if there are any, the canonical C just ignores them, as does the C++.

                • HumanOstrich5 days ago
                  > The Rust Hello World does indeed experience and report errors if there are any, the canonical C just ignores them, as does the C++.

                  Can you give an example for each of those?

                  • tialaramex5 days ago
                    • HumanOstrich5 days ago
                      Thank you for providing a reference. After reading the blog post on that page, I'm even less convinced that your point is useful.

                      I don't think it's a bug if, like in the C example, you don't handle the return value of the function you are calling. The strace shows that the function returned an error, but the code doesn't check it. Not a language flaw.

                      In fact, in most of the languages that "don't have the bug", the runtime is automagically capturing the issue and aborting the program. Like an exception. Rust just "doesn't have the bug" because the compiler forces you to handle the error. All the .NET languages do the same thing at runtime and force you to handle the I/O error... with an exception handler.

                      Unfortunately, your talking points just seem like more Rust fanaticism trying to discredit any other language. This happens in every single discussion about any language other than Rust, especially C/C++. I'm not going to engage any further.

            • tsimionescu5 days ago
              I'll go into details about your particular question, but I first want to explain why it's missing the point. The difference in terms of logging between exceptions and Rust error handling (or Haskell, or Go, or C) is unrelated to how you print out the log information. It's related to the fact that the exception object itself collects and carries the stack trace information, which the runtime populates if and when an exception happens, whereas in Rust it's up to the programmer (or some library) to manually collect this information and either print it or add it to a custom error object, at every call site. The fact that uncaught exceptions get printed to stdout is the tiniest little bonus, and irrelevant for most programs: you shouldn't have uncaught exceptions in the first place. The important thing is that whenever you catch an exception, you know for sure that you'll have some useful diagnostic information about where exactly it occurred, regardless of who wrote the code between here and there.

              Now on to your specific question.

              First of all, I explicitly called out C++ exceptions as not having this useful property. C++ exceptions don't collect a stack trace, and the C++ runtime simply exits with an error code if an exception is thrown without a handler.

              Now, moving to any other language with exceptions. What happens by default if executing in an environment without stdout will depend on details of the runtime of that language for that environment.

              But let's assume that the runtime is not written to handle this gracefully. Here's the entirety of the code you need to add to your exception-based program to handle a lack of stdout and still get stack traces, in pseudo-code:

                int main() {
                   try {
                     return oldMain();
                   } catch (Exception e) {
                     with(File f = openFile("my-log.log")) {
                       f.write("Unhandled exception:");
                       e.printStackTrace(f);
                     }
                   }
                }
              
              Where oldMain() is the main() you'd write for the same program if you did have stdout.
              • whytevuhuni5 days ago
                You seem to be arguing more for stack traces than for exceptions?

                Rust can store backtraces in value objects as well [0]. It's opt-in (capturing a stack trace at the error value's creation may be expensive if that error is eventually handled), but with the anyhow crate you get a decent compromise: a stack trace is captured at the boundary of your program and libraries during the conversion, and then shown only if the error bubbles up to main().

                And you get the bonus of storing both the stack trace, and relevant context where needed, e.g. to show values of parameters. Here's how that playground example above fails:

                  Error: Second try
                  
                  Caused by:
                      0: Parsing 'forty-two' as number
                      1: invalid digit found in string
                  
                  Stack backtrace:
                     0: anyhow::error::<impl core::convert::From<E> for anyhow::Error>::from
                               at ./.cargo/registry/src/index.crates.io-6f17d22bba15001f/anyhow-1.0.94/src/backtrace.rs:27:14
                     1: <core::result::Result<T,F> as core::ops::try_trait::FromResidual<core::result::Result<core::convert::Infallible,E>>>::from_residual
                               at ./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/result.rs:2009:27
                     2: playground::parse_number
                               at ./src/main.rs:25:8
                     3: playground::parse_and_increment
                               at ./src/main.rs:18:18
                     4: playground::main
                               at ./src/main.rs:7:19
                     ...
                
                [0] https://play.rust-lang.org/?version=stable&mode=debug&editio...
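
                For reference, the linked playground is roughly along these lines (my reconstruction from the output above, so treat the exact code as approximate; it assumes anyhow's Context trait):

                  use anyhow::{Context, Result};

                  fn parse_number(s: &str) -> Result<i64> {
                      // the ParseIntError -> anyhow::Error conversion is where the
                      // backtrace is captured (when RUST_BACKTRACE=1)
                      let n = s
                          .parse::<i64>()
                          .with_context(|| format!("Parsing '{}' as number", s))?;
                      Ok(n)
                  }

                  fn parse_and_increment(s: &str) -> Result<i64> {
                      Ok(parse_number(s)? + 1)
                  }

                  fn main() -> Result<()> {
                      let _ok = parse_and_increment("41").context("First try")?;
                      let _bad = parse_and_increment("forty-two").context("Second try")?;
                      Ok(())
                  }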
                • tsimionescu5 days ago
                  Cool, I didn't know about RUST_BACKTRACE=1. That addresses the core of my comment, yes. I will note that some runtimes (like Java or C#, I believe) don't compute the stack trace unless and until it is requested, which means that exceptions that are caught and handled without being logged shouldn't incur the performance cost - thus removing most of the reason you may want to have a way to disable this behavior.

                  I did know about anyhow, that was exactly the library I was mentioning. But that requires manually adding context at all places where the error is passed.

                  • neonsunset5 days ago
                    Exceptions thrown in both Java and .NET eagerly compute the stack trace and the text associated with it. They are also very expensive compared to the happy path. Historically, manually thrown exceptions in OpenJDK have been cheaper than .NET's (although .NET 9 makes them twice as cheap), while NPEs in OpenJDK are much more expensive than regular Java exceptions or .NET's NREs

                    In Java, you can disable stack traces altogether, which massively reduces the cost (which is what e.g. Crafting Interpreters suggests - it's a good course but the author is both wrong and actively misleading about the cost model of the implementations covered in parts 1 and 2 because of this), but few codebases do this.

    • nox1015 days ago
      > Easy way to use libraries

      This is both a blessing and a curse. Seeing the rust docs require 561 crates makes it clear that rust/cargo is headed down the same path as node/npm

           Downloaded 561 crates (50.7 MB) in 5.21s (largest was `libsqlite3-sys` at 5.1 MB)
      • dralley5 days ago
        By "rust docs" you seem to mean "docs.rs, the website that hosts documentation for all crates in the Rust ecosystem", which is a little bit different than the impression you give.

        It's a whole web service with crates.io webhooks to build and update documentation every time a crate gets updated; it tracks state in a database, stores data on S3, etc. Obviously if you just want to build some docs for one crate yourself you don't need any of that. The "rustdoc" command has a much smaller list of dependencies.

      • pornel5 days ago
        Cargo is 10 years old, and it's been working great. It has already proven that it's on a different path than npm.

        * Rust has a strong type system, with good encapsulation and immutability by default, so the library interfaces are much less fragile than in JS. There's tooling for documenting APIs and checking SemVer compat.

        * Rust takes stability more seriously than Node.js. Node makes SemVer-major releases regularly, and for a long time had awful churn from unstable C++ API.

        * Cargo/crates-io has a good design, and a robust implementation. It had a chance to learn from npm's mistakes, and avoid them before they happened (e.g. it had a policy preventing left-pad from day one).

        And the number of deps looks high, but it isn't what it seems. Rust projects tend to split themselves into many small packages, even when they are all part of the same project written by the same people.

        Cargo makes all transitive dependencies very visible. In C you depend on pre-built dynamic libraries, so you just don't see what they depend on, and what their dependencies depend on.

        For example, Rust's reqwest shows up as 150 transitive dependencies, but it has fewer supported protocols, fewer features, and less code overall than the single dependency that is libcurl.

      • zdragnar5 days ago
        Almost all of the things that were wrong with NPM were self-inflicted: no namespacing of packages by default, allowing packages to be deleted/removed without approval, loose install version ranges, a poor lock file implementation, and so on.

        There's an argument to be made that there are too many packages from too many authors to trust everything. I don't find the argument to be too convincing, because we can play what-if games all day long, and if you don't want to use them, you get to write your own.

        • skydhash5 days ago
          The issue is micro-packages. Instead of a few layers between the os and your code, you find yourself with a wide dependency tree, with so many projects that it’s impossible to audit.
          • dwattttt5 days ago
            An alternative of "now everyone who uses a linked list has their own mostly-the-same, but-just-different-enough" list.c and list.h files that need separate auditing (if you care) isn't better.
            • skydhash5 days ago
              If list.c is part of the project, it’s easier because you don’t have to hunt down every dependency’s repository. It’s much easier to audit and trust 5 projects/orgs, than 50 different entities.
              • iknowstuffa day ago
                When you work in Rust, in any IDE you can click through any type and see its implementation, even if it's within a dependency. No difference in auditing, except you also get the guarantee of `cargo vet`.
              • dwattttt5 days ago
                50 different dependencies covers a _lot_ more behaviour than a list.c. The point would be to audit a list package, and have audited it for all users, rather than all users needing to audit their own.
      • spullara5 days ago
        this is good actually
    • jvanderbot5 days ago
      Add to this: trait system vs deep OOP.

      Really nice macro system.

      First class serde.

      First class sync/send

      Derives!
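
      A tiny example of the derive + serde point (a sketch; assumes serde with the `derive` feature plus serde_json in Cargo.toml):

        use serde::{Deserialize, Serialize};

        #[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
        struct Point {
            x: f64,
            y: f64,
        }

        fn main() -> Result<(), serde_json::Error> {
            let p = Point { x: 1.0, y: 2.0 };
            let json = serde_json::to_string(&p)?;          // derived Serialize
            let back: Point = serde_json::from_str(&json)?; // derived Deserialize
            assert_eq!(p, back);                            // derived PartialEq
            println!("{json}");
            Ok(())
        }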

      • Asraelite5 days ago
        > First class serde.

        What do you mean? `Serialize` and `Deserialize` are not part of std.

        • tialaramex5 days ago
          It's true, they're not part of the standard library. Nevertheless, it is conventional to provide implementations for things you reasonably expect your users might want to serialize and deserialize. Standard guidance includes telling you to name a feature flag (if you want one for this) serde and not something else so as to reduce extra work for your users.
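
          In a library crate the convention looks something like this (a sketch; it assumes a Cargo feature literally named `serde` that turns on an optional serde dependency):

            // Users who enable the crate's `serde` feature get the impls;
            // everyone else doesn't pay for the dependency at all.
            #[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]
            #[derive(Debug, Clone)]
            pub struct Goose {
                pub name: String,
            }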

          Because Rust's package ecosystem is more robust it's less anxious about the strict line between things everybody must have (in the standard library) and things most people want (maybe or maybe not in the standard library). In C++ there's a powerful urge to land everything you might need in the stdlib, so that it's available.

          For example the FreeBSD base system includes C++. They're not keen on adding to their base system, so for example they seem disinclined to take Rust, but when each C++ ISO standard bolts in whatever new random nonsense well that's part of C++ so it's in the base system for free. Weird data structure a game dev wants? An entire linear algebra system from Fortran? Comprehensive SI unit systems? It's not up to the FreeBSD gatekeepers, a WG21 vote gets all of those huge requirements into FreeBSD anyway.

          • jvanderbot5 days ago
            This was a conscious decision by Rust folks. Let the language and std libraries be small enough to target anything - and let well-established crates (most written/started by the rust folks) fill in functionality. The main language provided the baseline interfaces, in some cases (see async), but not the machinery (e.g., async runtimes).
          • loeg5 days ago
            FWIW C++ in FreeBSD is a little contentious. The overall system build time is dominated by Clang, with the rest of FreeBSD "a wart on the side." In base, the C++ compiler was pretty much only used for devd (something vaguely like Linux' udev), and devd is written in a pre-C++11 dialect -- no new features. Using more of it isn't exactly encouraged; it's not allowed in the kernel.

            There are two significant barriers to Rust in FreeBSD base -- first, cultural: it's just a bunch of greybeards opposed to anything and everything new; and second, technical: Rust just doesn't (or didn't) have compiler backends for the same subset of platforms FreeBSD does (or did). (This situation is improving as FreeBSD finally drops official support for obsolete SPARC, 32-bit ARM, MIPS, and 32-bit PowerPC platforms, but obviously cultural barriers remain.)

      • pjmlp5 days ago
        "Applying Traits to the Smalltalk Collection Classes", 2003

        https://rmod-files.lille.inria.fr/Team/Texts/Papers/Blac03a-...

        Traits, as a CS concept, are part of the OOP paradigm.

        • rmgk5 days ago
          Traits in Rust are more a variant of Haskell typeclasses than of Smalltalk traits.

          The whole FP vs OOP distinction makes little sense these days, as it has mostly been shown that each concept from one can neatly fit within the other and vice versa.

        • sshine5 days ago
          Traits, as a CS concept, are part of the FP paradigm.

          Reverse Uno!

        • kccqzy5 days ago
          The traits concept mentioned in your link looks very different from Rust traits. It describes something more akin to Java interfaces.
          • pjmlp5 days ago
            Java interfaces are based on Objective-C protocols.

            The only big difference is how implementation is mapped into the trait specification.

            • kccqzy5 days ago
              And that's the problem isn't it? Rust traits are based on GHC type classes, not at all from either Java or Objective-C or Smalltalk.
              • pjmlp5 days ago
                Thankfully this fellow Simon Peyton Jones has a talk about how they map onto the OOP paradigm.

                "Classes, Jim, But Not as We Know Them — Type Classes in Haskell: What, Why, and Whither"

                https://www.microsoft.com/en-us/research/publication/classes...

                "Adventure with Types in Haskell"

                https://www.youtube.com/watch?v=6COvD8oynmI

                https://www.youtube.com/watch?v=brE_dyedGm0

                In the first lecture he discusses how Haskell relates to OOP in regard to subtyping and generic polymorphism, and how, although different on the surface, they share those CS concepts in their own ways.

                • kccqzy5 days ago
                  No. Did you read the contents of the links you shared? The name of the slides in your first link is "Classes, Jim, but not as we know them". And let me quote from the slides in your first link:

                  From slide 40:

                  > So the links to intensional polymorphism are closer than the links to OOP.

                  From the first bullet of slide 43:

                  > No problem with multiple constraints

                  > f :: (Num a, Show a) => a -> ...

                  From the second bullet:

                  > Existing types can retroactively be made instances of new type classes (e.g. introduce new Wibble class, make existing types an instance of it):

                  > class Wibble a where

                  > wib :: a -> Bool

                  > instance Wibble Int where

                  > wib n = n+1

                  From slide 46:

                  > In Haskell you must anticipate the need to act on arguments of various type

                  > f :: Tree -> Int

                  > vs

                  > f’ :: Treelike a => a -> Int

                  > (in OO you can retroactively sub-class Tree)

                  From slide 50:

                  > In Java (ish):

                  > inc :: Numable -> Numable

                  > from any sub-type of Numable to any super-type of Numable

                  > In Haskell:

                  > inc :: Num a => a -> a

                  > Result has precisely same type as argument

                  I appreciate you sharing informative links even though they prove you wrong. I haven't seen this set of slides before but I find it a very good concise explanation of why Haskell classes are not traditional OOP classes or interfaces.
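
                  Mapping that back to the thread's topic: the slide's retroactive `Wibble` class and the `inc` signature translate fairly directly to Rust traits (a rough sketch, names taken from the slides; the body of `wib` is made up):

                    // A new trait implemented retroactively for an existing type (slide 43's Wibble):
                    trait Wibble {
                        fn wib(&self) -> bool;
                    }

                    impl Wibble for i32 {
                        fn wib(&self) -> bool {
                            *self % 2 == 0
                        }
                    }

                    // Slide 50's point: the result has precisely the same type as the argument.
                    fn inc<T: std::ops::Add<Output = T> + From<u8> + Copy>(x: T) -> T {
                        x + T::from(1u8)
                    }

                    fn main() {
                        assert!(42i32.wib());
                        assert_eq!(inc(41i32), 42i32);
                        assert_eq!(inc(41u64), 42u64);
                    }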

                  • pjmlp5 days ago
                    I didn't say they were exactly 100% the same thing, and from those videos, starting at 1:01:00, I recommend the section "Two approaches to polymorphism", including the overlapping set of features.
                    • kccqzy5 days ago
                      We are commenting on an article titled "Great things about Rust that aren't just performance" and it's clear to me that one of the great things being mentioned is how Rust approaches polymorphism that's different from the typical way in Java or Objective-C. So it is more important to highlight the differences rather than the similarities.

                      Think about it: if the Rust trait system were highly similar to Java interfaces, why would people rave about it?

        • jvanderbot5 days ago
          There are shades of OOP, and while you're technically correct I think the meaning of my post is clear.
    • Animats5 days ago
      > Rust is basically a bag of sensible choices.

      Mostly yes. In C/C++, the defaults are usually in the less safe direction for historical reasons.

      • tialaramex5 days ago
        It's not about less safe, the C++ defaults are usually just wrong. It's so well known that Phil Nash had to make clear whether he was giving the same talk about how all the defaults are wrong at CppCon or a different talk, otherwise who knows.

        For some cases you can make an argument that the right default would have been safer. For mutability, for avoiding deductions, these are both sometimes footguns. But in other cases the right default isn't so much safer as just plain better, the single argument constructors should default to explicit for example, all the functions which qualify as constexpr might as well be constexpr by default, there's no benefit remaining for the contrary.

        My favourite wrong default is the memory ordering. The default memory ordering in C++ is Sequentially Consistent. This default doesn't seem obviously wrong, what would have been better? Surely we don't want Relaxed? And we can't always mean Release, or Acquire, and in some cases the combination Acquire-Release means nothing, so that's bad too. Thus, how can Sequentially Consistent be the wrong default? Easy - having a default was wrong. All the options were a mistake, the moment the committee voted they'd already fucked up.
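
        For contrast, Rust's atomics went the no-default route: every operation takes the ordering as an explicit argument, so there is nothing to get wrong by omission. E.g.:

          use std::sync::atomic::{AtomicUsize, Ordering};

          static COUNTER: AtomicUsize = AtomicUsize::new(0);

          fn main() {
              // No default ordering exists; the call site must name one.
              COUNTER.fetch_add(1, Ordering::Relaxed);
              println!("{}", COUNTER.load(Ordering::SeqCst));
          }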

    • mananaysiempre5 days ago
      > Match needs to be exhaustive. When you add something to the enum you were matching, it chokes. This is good.

      There’s a reason why ML and Haskell compilers generally have that as a warning by default and not an error: when you need a pipeline of small transformations of very similar languages, the easiest way to go is usually declare one tree type that’s the union of all of them, then ignore the impossible cases at each stage. This takes the problem entirely out of the type system, true, but an ergonomic alternative for that hasn’t been invented, as far as I know. Well, aside from the micropass framework in Scheme, I guess, but that requires exactly the kind of rich macros that Rust goes out of its way to make ugly. (There have been other attempts in the Haskell world, like SYB, but I haven’t seen one that wouldn’t be awkward.)

    • jimbob455 days ago
      > Move by default. If you came from c++, I think this makes a lot of sense. If you have a new language, don't bring the baggage.

      Came from C++ and this is my least favorite part of the language ergonomics.

      • tialaramex5 days ago
        It actually doesn't come from C++ and what C++ has is worse, the history is interesting.

        The move assignment semantic you see in Rust was also retrospectively termed "destructive" move because after the assignment A = B not only is the value from B now in A - that value is gone from B, B was in some sense "destroyed". If we write code which does A = B and then print(B) it won't compile! B is gone now.

        Programmers actually really like that, it feels natural (with appropriate compiler support of course) and it doesn't have unexpected horrors to be uncovered.

        In C++ they couldn't make that work (without destroying compatibility with existing C++ 98 code) so they invented their own C++ 11 "move" which is this more fundamental move plus making a new hollow object to go in B. This new hollow object allows the normal lifecycle of C++ 98 objects to happen as before - B goes out of scope, it gets destroyed.

        So in C++ A = B; print(B) works - but it's not defined to do anything useful, you get some ready to clean up object, if B was a string maybe it's the empty string, if B was a remote file server then... maybe it's an "empty" remote file server? That's awkward.

        It's worth understanding that the nicer Rust move isn't a novelty, or something people had no idea they wanted when C++ 11 was standardized, the "destructive" move already existed and was known to be a good idea - but C++ couldn't figure out a way to deliver it.
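
        In Rust terms (a tiny sketch), the "destructive" part is simply that the source stops existing at compile time:

          fn main() {
              let b = String::from("hello");
              let a = b;           // the value moves; `b` is statically gone
              // println!("{b}");  // error[E0382]: borrow of moved value: `b`
              println!("{a}");
          }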

        • repelsteeltje5 days ago
          I think the main motivation for adding move semantics to c++11 was performance, i.e. eliminating superfluous copies when passing a std::string temporary into a function.

          std::move and std::forward are neat, though somewhat cumbersome compared to Rust. C++ scope and lifetime rules, plus the fact that std::move doesn't actually move, are real footguns.

          There have been attempts to add destructive moves (Circle) but it's a long way from Rust's ergonomics.

          I concur with op that move-by-default semantics are where Rust shines.

      • eddd-ddde5 days ago
        Why? It makes you use smart pointers correctly from the start. Any big c++ codebase would do this anyway, except in Rust it isn't as error prone.
    • dataflow5 days ago
      > Move by default. If you came from c++, I think this makes a lot of sense.

      > Immutable by default.

      In C++, these two fight each other. You can't (for the most part) move from something that's immutable.

      How does Rust handle this? I assume it drops immutability upon the move, and that doesn't affect optimizations because the variable is unused thereafter?

      • NobodyNada5 days ago
        In Rust, when you move out of a variable, that variable is now effectively out-of-scope; trying to access it will result in a compile error.

        Mutability in Rust is an attribute of a location; not a value, so you can indeed move a value from an immutable location into a mutable one, thus "dropping immutability". (But you can only move out of a location that you have exclusive access to -- you can't move out of an & reference, for example -- so the effect is purely local.)
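
        A two-line illustration (sketch): moving out of an immutable binding into a `mut` one is how you "drop immutability", and the old name is unusable afterwards:

          fn main() {
              let v = vec![1, 2, 3]; // immutable binding
              let mut w = v;         // move into a mutable location; `v` can't be named any more
              w.push(4);
              assert_eq!(w, [1, 2, 3, 4]);
          }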

        • dataflow5 days ago
          Yeah that sounds about like what I expected. Thanks!
      • lightingthedark5 days ago
        Rust moves aren't quite the same as C++ moves, you can think of them more like a memcpy where the destructor (if there is one) doesn't get run on the original location. This means you can move an immutable object, the object itself doesn't have to do anything to be moved.
      • remram5 days ago
        You can't refer to the old location, so there is no observable mutation. For example, you can't move out of something while a reference to it exists.
      • baq5 days ago
        Not 100% sure but sounds like you want Pin<>?
    • OJFord5 days ago
      > Testing is part of the code, doesn't seem tacked on like it does in c++.

      Or most languages! Many could easily imitate it too. I'd love a pytest mode or similar framework for python that looked for doctests and had a 'ModTest' class or something.
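
      For anyone who hasn't seen the Rust side of this, both unit tests and doc tests live next to the code and run under plain `cargo test` (library sketch with made-up names):

        /// Adds two numbers.
        ///
        /// ```
        /// assert_eq!(mycrate::add(2, 2), 4); // compiled and run by `cargo test` as a doctest
        /// ```
        pub fn add(a: i32, b: i32) -> i32 {
            a + b
        }

        #[cfg(test)]
        mod tests {
            use super::*;

            #[test]
            fn adds() {
                assert_eq!(add(2, 2), 4);
            }
        }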

    • belter5 days ago
      > There's a few large firms that don't use exceptions in c++

      Google: https://google.github.io/styleguide/cppguide.html#Exceptions

      • dataflow5 days ago
        Just make sure you read the whole darn thing:

        > Given that Google's existing code is not exception-tolerant, the costs of using exceptions are somewhat greater than the costs in a new project.

        > ...Things would probably be different if we had to do it all over again from scratch.

        It's quite ironic to cite the Google C++ Style Guide as somehow supporting the case against exceptions. It's saying the opposite: we would probably use exceptions, but it's too late now, and we can't.

        Somehow people miss this...

        • jandrewrogers5 days ago
          I can't remember the last time I worked on a C++ code base at any company that used exceptions. This is for good reason; making some types of systems-y code exception-safe can be tricky to get right and comes with a performance cost. For many companies the juice is not worth the squeeze.
          • dataflow5 days ago
            > This is for good reason; making some types of systems-y code exception-safe can be tricky to get right and comes with a performance cost. For many companies the juice is not worth the squeeze.

            Those types of systems-y code can avoid exceptions if they want. Nobody said exceptions are a panacea. The alternative error models have their own performance and other problems, and those can manifest differently to other types of codebases.

        • nox1015 days ago
          Exceptions in C++ are a footgun. Even the top C++ gurus/leaders know this and are trying to find some new solution:

          https://www.youtube.com/watch?v=os7cqJ5qlzo

          • dataflow5 days ago
            Thanks for the 1-hour video. Could you link to the timestamp of the strongest argument(s) you see in the video that are relevant in the current discussion (i.e. the existing error models we're talking about in Rust and C++, rather than a hypothetical future one)?

            Just from a quick glance: I see he's talking about things like stack overflows and std::bad_alloc. In a discussion like this, those two are probably the worst examples of exceptions. They're the most severe exceptions, and the one the fewest people care to actually catch, and the ones that error codes are possibly the worst at handling anyway. (Do you really want an error returned from push_back?) The most common stuff is I/O errors, permission errors, format errors, etc. which aren't well represented by resource exhaustion at all, much less memory exhaustion.

            P.S. W.r.t. "the top C++ gurus/leaders" - Herb is certainly talented, but I should note that the folks who wrote Google's style guide are... not amateurs. They have been involved in the language development and standardization process too. And they're just as well aware of the benefits and footguns as anyone.

            • dwattttt5 days ago
              The general problem cited with exceptions is that they're un-obvious control flow. The impact it has is clearer in Rust, because of the higher bar it sets for safety/correctness.

              As a specific example, and this is something that's been a problem in the std lib before. When you code something that needs to maintain an invariant, e.g. a length field for an unsafe operation, that invariant has to be upheld on every path out of your function.

              In the absence of exceptions, you just need to make sure your length is correct on returns from your function.

              With exceptions, exits from your function are now any function call that could raise an exception; this is way harder to deal with in the general case. You can add one exception handler to your function, but it needs to deal with fixing up your invariant wherever the exception occurred (e.g. if the fix-up operation that needs to happen is different based on where in your function the exception occurred).

              To avoid that you can wrap every call that can cause an exception so you can do the specific cleanup that needs to happen at that point in the function... But at that point what's the benefit of exceptions?

              • dataflow5 days ago
                > With exceptions, exits from your function are now any function call that could raise an exception; this is way harder to deal with in the general case. You can add one exception handler to your function [...] To avoid that you can wrap every call [...]

                That's the wrong way to handle this though. The correct way (in most cases) is with RAII. See scope guards (std::experimental::scope_exit, absl::Cleanup, etc.) if you need helpers. Those are not "way harder" to deal with, and whether the control flow out of the function is obvious or not is completely irrelevant to them -- in fact, that's kind of their point.

                In fact, they're better than both exception handling and error codes in at least one respect: they actually put the cleanup code next to the setup code, making it harder for them to go out of sync.

                • dwattttt5 days ago
                  None of those are easier than not needing to do it at all though; if your functions exits are only where you specify, you can cleanup only once on those paths.
                  • dataflow5 days ago
                    > None of those are easier than not needing to do it at all though; if your functions exits are only where you specify, you can cleanup only once on those paths.

                    Huh? I don't get it. This:

                      stack.push_back(k);
                      absl::Cleanup _ = [&] { assert(stack.back() == k); stack.pop_back(); };
                      if (foo()) {
                        printf("foo()\n");
                        return 1;
                      }
                      if (bar()) {
                        printf("bar()\n");
                        return 2;
                      }
                      baz();
                      return 3;
                    
                    is both easier, more readable, and more robust than:

                      stack.push_back(k);
                      if (foo()) {
                        printf("foo()\n");
                        assert(stack.back() == k);
                        stack.pop_back();
                        return 1;
                      }
                      if (bar()) {
                        printf("bar()\n");
                        assert(stack.back() == k);
                        stack.pop_back();
                        return 2;
                      }
                      baz();
                      assert(stack.back() == k);
                      stack.pop_back();
                      return 3;
                    
                    as well as:

                      stack.push_back(k);
                      auto pop_stack = [&] { assert(stack.back() == k); stack.pop_back(); };
                      if (foo()) {
                        printf("foo()\n");
                        pop_stack();
                        return 1;
                      }
                      if (bar()) {
                        printf("bar()\n");
                        pop_stack();
                        return 2;
                      }
                      baz();
                      pop_stack();
                      return 3;
                    
                    and unlike the others, it avoids repeating the same code three times.

                    (Ironically, I missed the manual cleanups before the final returns in the last two examples right as I posted this comment. Edited to fix now, but that itself should say something about which approach is actually more bug-prone...)

                    • dwattttt5 days ago
                      I can't parse this super well on mobile, but what invariant is this maintaining? I was imagining a function that manipulated a collection, and e.g. needed to decrement a length field to ensure the observable elements remained valid, then increment it, then do something else.

                      The gnarliest scenario I recall was a ring-buffer implementation that relied on a field always being within the valid length, and a single code path not performing a mod operation, which was only observably a problem after a specific sequence of reserving, popping, and pushing.

                      EDIT: oh, I think I see; is your code validating the invariant, or maintaining it?

                      • dataflow5 days ago
                        > I can't parse this super well on mobile, but what invariant is this maintaining.

                        The stack length (and contents, too). It pushes, but ensures a pop occurs upon returning. So the stack looks the same before and after.

                        > I was imagining a function that manipulated a collection, and e.g. needed to decrement a length field to ensure the observable elements remained valid, then increment it, then do something else.

                        That is exactly what the code is doing.

                        > EDIT: oh, I think I see; is your code validating the invariant, or maintaining it?

                        Both. First it manipulates the stack (pushing onto it), then it does some stuff. Then before returning, it validates that the last element is still the one pushed, then pops that element, returning the stack to its original length & state.

                        > The gnarliest scenario I recall was a ring-buffer implementation that [...]

                        That sounds like the kind of thing scope guards would be good at.

                        • dwattttt5 days ago
                          Then I think the counter-example is where function calls that can't fail are interspersed. Those are the cases where with exceptions (outside checked exceptions) you have to assume they could fail, and in a language without exceptions you can rely on them not to fail, and skip adding any code to maintain the invariant between them.

                          E.g. in the case you provided, if pop & push couldn't fail, that would just be two calls in sequence.

                          • dataflow5 days ago
                            I still don't follow, I'm sorry.

                            > E.g. in the case you provided, if pop & push couldn't fail, that would just be two calls in sequence.

                            I have no idea what you mean here. Everything in the comment would be exactly the same even if stack.push_back() was guaranteed to succeed (maybe due to a prior stack.reserve()). And those calls aren't occurring in sequence, one is occurring upon entrance and the other upon exit. Perhaps you're confused what absl::Cleanup does? Or I'm not sure what you mean.

                            I think you're going to have to give a code example if/when you have the chance, to illustrate what you mean.

                            But also, even if you find "a counterexample" where something else is better than exceptions just means you finally found found a case where there's a different tool for a (different) job. Just like how me finding a counterexample where exceptions are better doesn't mean exceptions are always better. You simply can't extrapolate from that to exceptions being bad in general, is kind of my whole point.

                            • dwattttt5 days ago
                              Apologies, I believe I meant if the foo/bar/baz calls couldn't fail. If there's no exceptions, you don't need the cleanup block, but in the presence of exceptions you have to assume they (and all calls) can fail.

                              The problem re. there being a counter-example to exceptions (as implemented in C++) is that they're not opt-in or out where it makes sense. At least as I understand it, there's no way for foo/bar/baz to guarantee to you that they can't throw an exception, so you can rely on it (e.g. in a way that if this changes, you get a compiler error such that something you were relying on has changed). noexcept just results in the process being terminated on exception right?

                              • dataflow5 days ago
                                > I meant if the foo/bar/baz calls couldn't fail. If there's no exceptions, you don't need the cleanup block

                                First, I think you're making an incorrect assumption -- the assumption that "if (foo())" means "if foo() failed". That's not what it means at all. They could just as well be infallible functions doing things like:

                                  if (tasks.empty()) {
                                    printf("Nothing to do\n");
                                    return 1;
                                  }
                                
                                or

                                  if (items.size() == 1) {
                                    return items[0];
                                  }
                                
                                Second, even ignoring that, you'd still need the cleanup block! The fact that it is next to the setup statement (i.e. locality of information) is extremely important from a code health and maintainability standpoint. Having setup & cleanup be a dozen lines away from each other makes it far too easy to miss or forget one of them when the other is modified. Putting them next to each other prevents them from going out of sync and diverging over time.

                                Finally, your foo() and bar() calls can actually begin to fail in the future when more functionality is added to them. Heck, they might call a user callback that performs arbitrary tasks. Then your code will break too. Whereas with the cleanup block, it'll continue to be correct.

                                What you're doing is simplifying code by making very strong and brittle -- not to mention unguaranteed in almost all cases -- assumptions on how it looks during initial authorship, and assuming the code will remain static throughout the lifetime of your own code. In that context, putting them together seems "unnecessary", yeah. But point-in-time programming is not software engineering. The situation is radically different when you factor in what can go wrong during updates and maintenance.

                                • dwattttt5 days ago
                                  > Moreover, your foo() and bar() calls can actually begin to fail in the future when more functionality is added to them. Heck, they might call a user callback that performs arbitrary tasks. Then your code will break too. Whereas with the cleanup block, it'll continue to be correct.

In a language without exceptions, I'm also assuming that a function conveys whether it can fail via its prototype; in Rust, changing a function from "returns nothing" to "returns a Result" will result in a warning that you're not handling it.

                                  > What you're doing is simplifying code by making very strong assumptions on how it looks during initial authorship, and assuming the code will remain static throughout the lifetime of your own code.

But this is where the burden of exceptions is most pronounced; if you code as if everything can fail, there's no "additional" burden, you're paying it all the time. The case you're missing is on the simpler side, where it's possible for something to not fail, and if that changes, your compiler tells you.

                                  It can even become quite a great boon, because infallibility is transitive; if every operation you do can't fail, you can't fail.
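A minimal sketch of the warning being described (the `setup` function and its error type are made up for illustration): once the signature gains a Result, every caller that silently ignores the value trips the `unused_must_use` lint.

    // Hypothetical function: it used to return (), now it returns a Result.
    fn setup() -> Result<(), String> {
        Err("disk full".to_string())
    }

    fn main() {
        setup(); // warning: unused `Result` that must be used

        // Handling (or explicitly discarding) the value silences the warning.
        if let Err(e) = setup() {
            eprintln!("setup failed: {e}");
        }
    }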

                                  • dataflow5 days ago
                                    No. I've mentioned this multiple times but I feel like you're still missing what I'm saying about maintainability. (You didn't even reply to it at all.)

                                    To be very clear, I was explaining why, even if you somehow have a guarantee here that absolutely nothing ever fails, this code:

                                      stack.push_back(k);
  absl::Cleanup _ = [&] { assert(stack.back() == k); stack.pop_back(); };
                                      foo();
                                      bar();
                                      baz();
                                      return 3;
                                    
                                    is still better than this code w.r.t. maintainability and robustness:

                                      stack.push_back(k);
                                      foo();
                                      bar();
                                      baz();
                                      assert(stack.back() == k);
                                      stack.pop_back();
                                      return 3;
                                    
                                    The reason, as I explained above, is the following:

                                    >> The fact that it is next to the setup statement (i.e. locality of information) is extremely important from a code health and maintainability standpoint. Having setup & cleanup be a dozen lines away from each other makes it far too easy to miss or forget one of them when the other is modified. Putting them next to each other prevents them from going out of sync and diverging over time.

                                    Fallibility is absolutely irrelevant to this point. It's about not splitting the source of truth into two separate spots in the code. This technique kills multiple birds at once, and handling errors better in the aforementioned cases is merely one of its benefits, but you should be doing it regardless.

                                    Do you see what I mean?

                                    • dwattttt5 days ago
                                      I do, but I'm still expecting things to be more complicated than that example.

                                      For instance, this is the the scenario I expect to be harder to manage with exceptions & cleanup:

  this->len += 1;
  foo();
  this->len += 1;
  bar();
  this->len += 1;
  baz();
  return ...;
                                      
                                      
                                      Without infallibility, you need a separate cleanup scope for each call you make. With this, the change to the private variable is still next to the operation that changes it, you just don't need to manage another control flow at the same time.

                                      EDIT: sorry, had the len's in the wrong spot before

                                      • dataflow5 days ago
                                        > I do, but I'm still expecting things to be more complicated than that example.

                                        They're not. I've done this all the time, in the vast majority of cases it's perfectly fine. It sounds like you might not have tried this in practice -- I would recommend giving it a shot before judging it, it's quite an improvement in quality of life once you're used to it.

                                        But in any large codebase you're going to find occasional situations complicated enough to obviate whatever generic solution anyone made for you. In the worst case you'll legitimately need gotos or inline assembly. That's life, nobody says everything has a canned solution. You can't make sweeping arguments about entire coding patterns just because you can come up with the edge cases.

                                        > Without infallibility, you need a separate cleanup scope for each call you make.

                                        So your goal here is to restore the length, and you're assuming everything is infallible (as inadvisable as that often is)? The solution is still pretty darn simple:

                                          absl::Cleanup _ = [&, old_len = len] { len = old_len; };
                                          foo();
  this->len += 1;
  bar();
  this->len += 1;
  baz();
  this->len += 1;
                                          return ...;
                                        
                                        No need for a separate cleanup for every increment.
                                        • dwattttt5 days ago
                                          We may have to agree to disagree. I'm trying to convey a function that would need a different cleanup to occur after each call if they were to fail, e.g. reducing the len by one (though that is the same here too).
                                          • dataflow5 days ago
                                            > We may have to agree to disagree. I'm trying to convey a function that would need a different cleanup to occur after each call if they were to fail, e.g. reducing the len by one (though that is the same here too).

                                            Your parenthetical is kind of my point though. It's rare to need mid-function cleanups that somehow contradict the earlier ones (because logically this often doesn't make sense), and when that is legitimately necessary, those are also fairly trivial to handle in most cases.

                                            I'm happy to just agree to disagree and avoid providing more examples for this so we can lay the discussion to rest, so I'll leave with this: try all of these techniques -- not necessarily at work, but at least on other projects -- for a while and try to get familiar with their limitations (as well as how you'd have to work around them, if/when you encounter them) before you judge which ones are better or worse. Everything I can see mentioned here, I've tried in C++ for a while. This includes the static enforcement of error handling that you mentioned Rust has. (You can get it in C++ too, see [1].) Every technique has its limitations, and I know of some for this, but overall it's pretty decent and kills a lot of birds with one stone, making it worth the occasional cost in those rare scenarios. I can even think of other (stronger!) counterarguments I find more compelling against exceptions than the ones I see cited here, but even then I don't think they warrant avoiding exceptions entirely.

                                            If there's one thing I've learned, it's that (a) sweeping generalizations are wrong regardless of the direction they're pointed at, as they often are (this statement itself being an exception), and (b) there's always room for improvement nevertheless, and I look forward to better techniques coming along that are superior to all the ones we've discussed.

                                            [1] https://godbolt.org/z/c9KM6dj95

            • SubjectToChange5 days ago
              >Just from a quick glance: I see he's talking about things like stack overflows and std::bad_alloc.

There are specific scenarios that are a major issue, yes. But as the title of the video implies, the problem with exceptions runs far deeper. Imagine being a C++ library author who wants to support as many users as possible: you simply couldn't use exceptions even if you wanted to, and even if most of your users are using exceptions. The end result is that projects that use exceptions have to deal with two different methods of error handling, i.e. they get the worst of both worlds (the binary footprint of exceptions, the overhead of constantly checking error codes, and the mental overhead of dealing with it all).

C++ exceptions are a genuinely useful language feature. But I wish the language and standard library weren't designed around exceptions. C++ has managed to displace C almost everywhere except embedded and/or kernel programming, and exceptions are a big reason why it hasn't displaced C there.

              • dataflow5 days ago
                > Imagine being a C++ library author who wants to support as many users as possible, you simply couldn't use exceptions even if you wanted to

                I'm pretty sure that (much) less than 50% of the C++ code out there is "a C++ library that wants to support as many users as possible" -- I imagine most code is application code, not even C++ library code in the first place. It's perfectly fine to throw e.g. a "network connection was closed" or "failed to write to disk" exception and then catch it somewhere up the stack.

                > The end result is that projects that use exceptions have to deal with two different methods of error handling. i.e. they get the worst of both worlds

                No, that's not true. You might get a bit of marginal overhead to think about, but it's not the worst of both whatsoever. If you want to use exceptions and your library doesn't use them, all you gotta do is wrap the foo() call in CheckForErrors(foo()), and then handle it (if you want to handle it at all) at the top level of your call chain. It's not the worst of both worlds at all -- in fact it's literally less work than simply writing

                  std::expected<Result, std::error_code> e = foo();
                
                and on top of that you get to avoid the constant checking of error codes and modifying every intermediate caller, leaving their code much simpler and more readable.

                And of course if you don't want to use exceptions but your library does use them, then of course you can do the reverse:

  std::expected<Result, std::error_code> e = CallAndCatchError([&] { return foo(); });
                
                Nobody is claiming every error should be an exception. I'm just saying you're exaggerating and extrapolating the arguments too far. A sane project would have a mix of different error models, and that would very much still be the case if none of the problems you mentioned existed at all, because they're different tools solving different problems.
            • tialaramex5 days ago
              > Do you really want an error returned from push_back?

              For most people, no, you definitely want it to just work or explode, which is indeed what happens in normal Rust, and, not coincidentally, the actual effect when this exception happens in your typical C++ application after it is done with all the unwinding and discovers there is no handler (or that the handler was never tested and doesn't actually somehow cope).

              But, sometimes that is what you wanted, and Linus has been very clear it's what he wants in the kernel he created.

              For such purposes Rust has Vec::try_reserve() and Vec::push_within_capacity() which let us express the idea that we'd like more room and to know if that wasn't possible, and also if there was no room left for the thing we pushed we want back the thing we were trying to push - which otherwise we don't have any more.

              There is no analogous C++ API, std::vector just throws an exception and good luck to you AFAIK.
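A minimal sketch of the fallible-allocation side in Rust (the `append_record` helper is invented for illustration). Vec::try_reserve is stable; push_within_capacity is still unstable at the time of writing, so the sketch sticks to try_reserve:

    use std::collections::TryReserveError;

    // Ask for the extra capacity up front and report failure to the caller;
    // after a successful try_reserve the push cannot reallocate, so it cannot fail.
    fn append_record(log: &mut Vec<u64>, value: u64) -> Result<(), TryReserveError> {
        log.try_reserve(1)?;
        log.push(value);
        Ok(())
    }

    fn main() {
        let mut log = Vec::new();
        append_record(&mut log, 42).expect("allocation failed");
        println!("{log:?}");
    }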

              • dataflow5 days ago
                > For such purposes Rust has Vec::try_reserve() and Vec::push_within_capacity() [...] There is no analogous C++ API, std::vector just throws an exception and good luck to you AFAIK.

                https://godbolt.org/z/6xE6jr3zr ?

                • tialaramex5 days ago
                  I guess this is an attempt at Vec::push_within_capacity ? Your function takes a reference and then tries to copy the referenced object into the growable array. But of course nobody said this object can be copied - after all we want it back if it won't fit so perhaps it's unique or expensive to make.
                  • dataflow5 days ago
                    > I guess this is an attempt at Vec::push_within_capacity?

                    Sure, yes. It's trivial to change to try_reserve if that's what you want. (There are other solutions for that as well, but they're more complicated and better for other situations.)

                    > Your function takes a reference and then tries to copy the referenced object into the growable array. But of course nobody said this object can be copied - after all we want it back if it won't fit so perhaps it's unique or expensive to make

Just extend it to allow moves then? It's pretty trivial. (Are you familiar with move semantics in C++?)

                    • tialaramex5 days ago
                      But how? I did attempt this before I replied, but of course after not long I had inexplicable segfaults and we're not in a thread about those problems with C++

                      I can't see how to make that work, but I also can't say for sure it's impossible all I can tell you is that I was genuinely trying and all I got for my trouble was a segfault that I don't understand and couldn't fix.

                      Edited to add: In case it helps the signature we want is:

                          pub fn push_within_capacity(&mut self, value: T) -> Result<(), T>
                      
                      If you're not really a Rust person, this takes a value T, not a reference, not a magic ultra-hyper-reference, nor a pointer, it's taking the value T, the value is gone now, which just isn't a thing in C++, then it's returning either Ok(()) which signifies that this worked, or Err(T) thus giving back the T because we couldn't push it.
                      • dataflow5 days ago
                        I'm sorry I don't think I understand the problem you're trying to illustrate. I'm not sure why you're emphasizing value vs. reference, but even if that's what you want, this works just fine: https://godbolt.org/z/P8EGPYWW5
                        • tialaramex5 days ago
                          Well the good news is that now I realise the biggest problem in my previous attempt was that I forgot C++ types which can't be copy constructed also by default can't be moved, so I'd actually made it impossible to use my example type. I still don't know why I had segfaults, but I don't care now.

                          I agree that your new code does roughly what you'd do in C++ if you wanted this, but you get to the same place as before -- if for example you try commenting out your allocation failure boolean, the code just blows up now.

                          There are lots of APIs like this which make sense in Rust but not in C++ because if you write them in Rust the programmer is going to handle edge cases properly, but in C++ the programmer just ignores the edge cases so why bother.

                          • dataflow5 days ago
                            > I agree that your new code does roughly what you'd do in C++ if you wanted this, but you get to the same place as before -- if for example you try commenting out your allocation failure boolean, the code just blows up now. There are lots of APIs like this which make sense in Rust but not in C++ because if you write them in Rust the programmer is going to handle edge cases properly, but in C++ the programmer just ignores the edge cases so why bother.

                            Er... doesn't this blow up in Rust? https://godbolt.org/z/eaaq43voT

                              pub fn main() {
                                let mut vec = Vec::new();
                                return vec.push_within_capacity(1).unwrap();
                              }
                            • tialaramex5 days ago
                              Almost, it panics because we didn't handle the error case. Of course this won't pass review because we explicitly just said "I won't handle this" and the reviewer can see that - whereas the C++ programmer wordlessly allowed this. Subtle, isn't it.

                              "But I can write correct C++" is trivially true because it's a Turing Complete language, and at the same time entirely useless unless you're playing "Um, actually".

                              • dataflow5 days ago
                                > Almost, it panics because we didn't handle the error case. Of course this won't pass review because we explicitly just said "I won't handle this" and the reviewer can see that - whereas the C++ programmer wordlessly allowed this. Subtle, isn't it. "But I can write correct C++" is trivially true because it's a Turing Complete language, and at the same time entirely useless unless you're playing "Um, actually".

I'm sorry, what? How in the world did you go from "exceptions are worse than error codes" to "that's why Linus doesn't like C++, he wants to write push_within_capacity() in C++ without exceptions and it's impossible" to "oh but your version doesn't move" to "oh I guess moving is possible too... but if you modified it to be buggy then it would crash" to "oh I see Rust would crash too... but it's OK because Rust programmers wouldn't actually let .unwrap() through code review"?? Aren't there .unwrap() calls in the standard library itself, never mind other libraries? So next we have "Oh I guess .unwrap() actually does get through code review... but it's OK because Rust programmers wouldn't write such bugs, unlike C++ programmers"?

                                • tialaramex4 days ago
                                  I don't remember telling you "Exceptions are worse than error codes" as these both seem like bad ideas from people with either a PDP/11 or no imagination or both. Result isn't an error code. std::expected isn't an error code either.

                                  Among the things Linus doesn't like about C++ are its quiet allocations and its hidden control flow, both of which are implicated here - I think those are both bad ideas too, but in this case I'm just the messenger, I didn't write an OS kernel (at least, not a real one people use) so I don't need a way to handle not being able to push items onto a growable array.

                                  The problem isn't that "if you modified it to be buggy then it would crash" as you've described, the problem is that only your toy demo works, once we modify unrelated things like no longer setting that global to true the demo blows up spectacularly (Undefined Behaviour) whereas of course the Rust just reported an error.

                                  > Aren't there .unwrap() calls in the standard library itself

                                  Unsurprisingly an operating system kernel does not use std, only core and some of alloc. So we're actually talking only about core† and alloc, not the rest of std. There are indeed a few places where core calls unwrap(), cases where we know that'll do what we meant so if you wrote what you meant by hand Clippy (at least if we weren't in core) would say you should just write unwrap here instead.

                                  † As a C++ person you can think of core as equivalent to the C++ standard library "freestanding" mode. This is more true in the very modern era because reformists got a lot of crucial improvements into this mode whereas for years it had felt abandoned. So if you mostly work with say C++ 17, think "freestanding" but actually properly maintained.

                                  We can't write unwrap here because it's not what we meant, so that's why it shouldn't pass review.

          • spacechild15 days ago
            > exceptions in C++ are a foot gun

            How are they a foot gun? It's not like C++ is the only language with exceptions. So what is particularly dangerous about C++ exceptions?

            > trying to find some new solution

            C++23 already has std::expected (= result type).

    • prmph5 days ago
      So when are we going to get a proper application (not systems) programming language with all these nice things about Rust?
    • synergy205 days ago
Agree on all these, though I ended up using Golang for faster development.
  • ninkendo5 days ago
    > I can express a lot in Python, but I don't trust the code as much without robust tests.

    This is a major part of why I like languages like rust. I can do some pretty fearless refactoring that looks something like:

    - Oh hey, there’s a string in this struct but I really need an enum of 3 possible values. Lemme just make that enum and change the field.

    - Cargo tells me it broke call sites in these 5 places. This is now my todo list.

    - At each of the 5 places, figure out what the appropriate value is for the enum and send that instead of the string.

    - Oh, one of those places needs more context to know what to send, so I’ll add another parameter to the function params

    - That broke 3 other places. That’s now my to-do list.

    Repeat until it compiles, and 99.9% of the time you’re done.

    With non-statically-typed languages you’re on your own trying to find the todo list above. If you have 100% test coverage you may be okay but even then it may miss edge cases that the type checker gets right away. Oh and even then, it’s likely that your 100% test coverage is spent writing a ton of useless tests that a type checker would give you automatically.

    As nice as weakly/dynamically typed languages are to prototype greenfield code in, they lose very quickly once you have to maintain/refactor.
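A rough sketch of the refactor described above (all names invented): a String field becomes a three-variant enum, so every stale call site and every non-exhaustive match turns into a compile error that doubles as the todo list.

    // Step 1: the field used to be `status: String`.
    enum Status {
        Pending,
        Active,
        Closed,
    }

    struct Account {
        status: Status,
    }

    // Step 2: every call site that still builds the struct with a String
    // fails to compile until it picks one of the three variants.
    fn open_account() -> Account {
        Account { status: Status::Pending } // was: "pending".to_string()
    }

    fn main() {
        let acct = open_account();
        // Step 3: adding a fourth variant later breaks this match too,
        // because matches must be exhaustive.
        match acct.status {
            Status::Pending => println!("pending"),
            Status::Active => println!("active"),
            Status::Closed => println!("closed"),
        }
    }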

    • Cyph0n5 days ago
      And if say one of your enum variants expects a string reference (pointer), the borrow checker will guide you through ensuring that the reference you pass in is valid at all callsites.

      Importantly, no tests are required to guarantee that the refactor is safe - although no guarantees that it’s logically correct.

      On the other hand, doing this exercise in a different low-level language involves a lot more “thinking” instead of just following the compiler’s complaints :)

    • Aeolos5 days ago
      With rust, I treat compiler errors as ultra-fast unit tests. I share the same experience: once it compiles, 99.9% it works fine on first try. It's a wonderful development experience.
      • lblume5 days ago
        I completely agree, if it is really 100% Rust, or some great high-level bindings. Else it just becomes C++ with nicer syntax imo, and if my code isn't anything too fancy I could just write it in Python which likely has even more ergonomic bindings.

In my free time I code 90+% in Rust, but for some areas, like OR (SAT, MILP, CSP), ML, or CAS, Python seems to be the better choice because types don't matter too much and if your code works, it works.

    • NetOpWibby5 days ago
      I feel this, but with Typescript (coming from jQuery and then ES4/5). I love how it forces you to code well.

      You can change your tsconfig to ignore the strictness but I don’t.

      • lblume5 days ago
        By default, strictness is opt-in with TypeScript, and many JS APIs, especially older ones, don't even have types yet.

Having a type system from the start that cannot be disabled, and that forces you to always think about types instead of letting you sprinkle 'as any' wherever the code works but doesn't compile (which is a major annoyance), is a huge benefit in my opinion.

        • augusto-moura5 days ago
All JS APIs are typed (browser, Node.js, etc.). If you meant libraries, then yes, not all of them are typed. But the vast majority have community types in DefinitelyTyped. Also, it is trivial to type an unknown library yourself, or at least type only the relevant parts for your work.
        • zachrip5 days ago
          > and many JS APIs, especially older ones, don't even have types yet.

          This is pretty much not the case these days, the packages people use mostly have types.

          • lblume5 days ago
Exactly, if they are used enough that someone declared the types in a @types subrepo. Sometimes these are excellent. However, I sometimes work with code in fairly niche domains written in pure JS that can pretty much return anything depending on the input (not necessarily even input types), rendering even these bindings very hard to write and not ergonomic at all.

            And this sometimes holds for even fairly popular libraries, like d3.js which I sometimes use for visualization. The idiosyncratic API design for object manipulation, selecting DOM nodes by string id and doing stuff based on their associated data, just doesn't really work in a strongly-typed context without 50% of the code being unreadable casts. And d3 is still trying at least to be somewhat type-safe, unlike other libraries.

    • humanrebar5 days ago
      For what it's worth, I get the same experience you're describing with statically typed python wired up to mypy. Not that rust and python have the same feature set in other ways.
    • timeon5 days ago
      > - Oh hey, there’s a string in this struct but I really need an enum of 3 possible values. Lemme just make that enum and change the field. ...

      Heh I just did this today. Rust is really good language to prototype and refactor in.

    • jimbob455 days ago
      With Visual Studio and C#, you can do that with a built-in wizard without even having to compile.

      It scares me how good C# is these days. Every killer feature of Rust and Lisp is already in C# or started there. Visual Studio makes VSCode look like a 90s shareware tool. Even the governance, by MS of all entities, is somehow less controversial than Rust’s.

      • tialaramex5 days ago
        It's really not true that "every killer feature of Rust and Lisp is already in C#" although it's certainly true that C# is a nicer language today than it was twenty years ago.

        Sum types are a must-have for me. I don't want to write software without sum types. In C# you can add third party libraries to mostly simulate sum types, or you can choose a style where you avoid some of the worst pitfalls from only having product types and a simple enumeration, but either is a poor shadow to Rust having them as a core language feature.

        Also VS is a sprawling beast, I spend almost as much time in the search function of Visual Studio finding where a solution I've seen lives as I do hand solving a similar problem in Vim. I spend the time because in Vim the editor won't get in my way when I solve it by hand, while VS absolutely might "helpfully" insert unrelated nonsense as I type if I don't use the "proper" tools buried in page 4 of tab 6 of a panel of the Option->Config->Preferences->Options->More Options->Other section or whatever.

        Visual Studio is what would happen if Microsoft asked 250 developers each for their best idea for a new VS feature and then did it, every year for the past several decades, without fail. No need for these features to work together or make sense as a coherent whole, they're new features so therefore the whole package is better, right? It's like a metaphor for bad engineering practice for every Windows programmer to see.

        • neonsunset5 days ago
          Use VSC instead. The higher-level alternative to Rust is more so F# than C# as it has comparably powerful type system with different tradeoffs (gradual typing and full type inference across function boundaries - it's less verbose to write than Rust because of this). Otherwise C# is not tied to VS at all because all the tools that it provides have alternatives either in Rider/JB suite and/or in a self-contained CLI way together with just using VS Code. .NET's CLI is very similar to Cargo in either case.
      • macagain5 days ago
Visual Studio ONLY works well on Windows; the Mac version is not the same, and certain features just do NOT work as well as the Windows version. And having a language tied down to an IDE which is tied down to a proprietary OS is a deal breaker for me! No matter how good it is these days with every killer feature of Rust and Lisp.

I would rather use a more raw, unrefined version of tech that is open source, so my code and DX are not at the whim of some corporate overlord! And given MS's track record I do

        • neonsunset5 days ago
          C# with Rider is a more comprehensive experience than Rust + RA + VSC. C# in VSC is about on the same level with Rust regardless of the platform.

          I always considered C#, F# and Rust as languages complementary to each other since each has their own distinct domain and use cases despite a good degree of overlap. Much less so than Java/Kotlin and Golang or any interpreted language (except Python and JS/TS in front-end) which are made obsolete by using the first three.

      • gfna5 days ago
C# has been my go-to language for everything except frontends for the past 15 years, but there are still some things I really miss from Rust. The top one is probably pattern matching. Sure, C# has something similar with switch expressions, but with them you must assign something, and they cannot contain code blocks. Related to this, something like enum variants is also missing, and therefore making something similar to Result or Option is not really feasible without it being quite hacky. Also, being able to create new types from existing ones with e.g. struct Years(i64); and passing them around typed is quite nice in Rust (F# has something similar; however, there it will then always also be assignable to i64, so it is not very helpful for catching incorrect usage).
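A small sketch of the two Rust features mentioned here (names invented for illustration): a newtype that won't silently accept a bare i64, and a match expression whose arms can be full code blocks.

    struct Years(i64);

    fn describe(age: Years) -> String {
        match age.0 {
            0 => "newborn".to_string(),
            n if n < 18 => {
                // Arms can be arbitrary blocks, not just single expressions.
                let remaining = 18 - n;
                format!("minor, adult in {remaining} years")
            }
            _ => "adult".to_string(),
        }
    }

    fn main() {
        println!("{}", describe(Years(12)));
        // describe(12); // does not compile: an i64 is not a Years
    }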
        • arwhatever5 days ago
          I’d be eager to hear folks poke holes in this opinion, but it seems like C# has made a wild mess of class/record initialization in recent versions, whereas in Rust fields are either simply required or are optional.
          • neonsunset5 days ago
            Properties in C# follow a similar pattern. Constructors can have their own arguments which can also be mandatory or optional. In Rust this logic simply lives in a function instead.

The only issue in C# is that structs (a) come with a default parameterless constructor (which can be overridden) and (b) can be default-initialized (default(T)) without the compiler complaining by default. These two aspects are not ideal and cannot be walked back as they were introduced in ancient times, but they are rarely an issue in practice (and you can further use an analyzer to disallow either).

            F# is more strict about it however and does not have such gaps.

            • arwhatever5 days ago
              There’s also (in C#) init, readonly, required, and quite a few more keywords and techniques, governing property mutability, private class field mutability, and so on.

And then none of those techniques work as well as manually typing out a required constructor, which hard-enforces that required data be provided upon object initialization.

              I understand required vs optional immediately a la Rust and F# (ignoring for a moment F#’s null awareness) but as a 17 year C# dev, I’ve had to create an initialization chart to keep straight all of the C# techniques.

        • neonsunset5 days ago
          > Also being able to create new types from existing ones with eg struct Years(i64); and pass it around typed is quite nice in Rust (F# has something similar, however there it will then always also be assignable to i64, so is not very helpful for catching incorrect usage.

          F# has units of measure which are quite a bit more powerful: https://learn.microsoft.com/en-us/dotnet/fsharp/language-ref...

    • dragonwriter5 days ago
      > This is a major part of why I like languages like rust. I can do some pretty fearless refactoring that looks something like:

      The process you describe (with “compile” replaced with “typecheck”) works fine for me in Python, with Pylance (and/or mypy) in VSCode.

      > With non-statically-typed languages you’re on your own trying to find the todo list above.

This would be more accurately "in workflows without typechecking"; it's not really about language features, except that, long ago, it was uncommon for languages where running the code didn't rely on a compilation step that made use of type information to have typechecking tools available for use in development environments, and lots of people seem to be stuck in viewpoints anchored in that past.

      • ninkendo5 days ago
        I don’t see a need to get so hung up on the nomenclature… it should be obvious that if you’re using a separate type checker in python, then of course you’ll also get the benefits I describe. The distinction I’m drawing is of course “has type checking” vs “does not hand type checking”, and it seems like you simply agree with me.

        The problem with python, ruby, JavaScript, and similar languages is that while yes, they have optional type checkers you can use… they were invented after the fact, and not everyone uses them, and it’s not mandatory to use them. The library you want to use may not have type information, etc. It’s a world of difference when the language has it mandatory from the start.

        And that’s not even getting into how (a) damned good rust’s type checker is (b) the borrow checker, which makes the whole check process at least twice as valuable as type checking alone.

      • mardifoufs5 days ago
        I think that's still a different process. I really love pylance, but the issue is that while it can make your code almost as good as it can be with a compiled (statically typed) language if you use strict typechecking, it still can't make up for the issues that come with any library you use. Some popular packages are well annotated but some aren't, meaning that it's just not as good as soon as you start using 3rd party packages.
  • the__alchemist5 days ago
    The pros/cons of rust add up better than other languages. The people who I hear (recently: Jon Blow) throw spears are usually correct, but what they're missing is that you could throw pointier ones at the alternatives. Some examples:

  - Best mutability ergonomics of any language. E.g. `&mut` in a function parameter means the function can mutate it; `&` means it can't (see the small sketch after this list). This might be my favorite part of rust, despite sounding obvious. Few languages have equivalents. (C++ and D are exceptions).
      - Easy building and dependency management
      - No header files
      - Best error messages of any language (This is addressed explicitly in the article)
      - Struct + Enums together are a fantastic baseline for refactorable, self-consistent code.
      - As fast as any
  - Great overall syntax tradeoffs. There are things I don't like (e.g. having to manually put Clone, Copy, and PartialEq on each simple enum, and having to manually write `Default` if I need a custom impl on one field), but overall it is better than the alternatives.
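A minimal sketch of the `&mut` point from the list above (the helper functions are hypothetical): the signature alone tells the caller whether the function can modify its argument.

    // Read-only borrow: the function cannot push, clear, or reassign.
    fn total(values: &[i64]) -> i64 {
        values.iter().sum()
    }

    // Mutable borrow: mutation is allowed, and it is visible in the signature.
    fn record(values: &mut Vec<i64>, v: i64) {
        values.push(v);
    }

    fn main() {
        let mut values = vec![1, 2, 3];
        record(&mut values, 4); // the caller has to spell out &mut as well
        println!("{}", total(&values));
    }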
    
Rust enthusiasts online are often unpleasant, and it's perhaps their fault people are put off by the language. They repeat things like "fearless concurrency", "if it compiles, it works", and "that code is unsafe/unsound" without critically thinking. Or they overstate rust as a memory-safety one-trick, while ignoring the overall language advantages.

    Tangent: Async rust is not my cup of tea for ergonomics and compatibility reasons. I have reason to believe that many people who like it think Async is synonymous with concurrent processes and nonblocking code.

    • huijzer5 days ago
      > Rust enthusiasts online are often unpleasant, and it's perhaps their fault people are put off by the language.

      I guess this is also a bit in the eyes of the beholder. It seems that any group that is enthusiastic about something new is “unpleasant” nowadays.

    • iknowstuff5 days ago
I like async rust because it's an engineering marvel. There are other ways to do it, but they are all worse, slower, and less versatile: https://embassy.dev/

      https://without.boats/blog/why-async-rust/

      • wavemode5 days ago
        > There are other ways to do it, but they are all worse, slower, and less versatile

        I would disagree with this, personally. Due to being tacked onto the language after the fact, the design of Rust's async made a number of concessions in order to fit into the language (for example, it had to work around the pre-existing restriction that all types are moveable by default).

        But you're correct that no current popular language has yet developed anything better.

      • the__alchemist5 days ago
        I do lots of embedded; not an Embassy fan for the classic Async reasons. Disagree on being worse, slower, and less versatile. That is wrong on slower; the other two traits are subjective. The embassy creator is a great programmer, and we see eye-to-eye on typestates and HAL unification, but not async.
    • Thorrez5 days ago
      >Few languages have equivalents. (C++ and D are exceptions).

      I would say Rust is still better than C++ here, because in Rust const is default. In C++, people often either forget to write const, or intentionally don't write const because writing it everywhere clutters up the code.

      • berkut5 days ago
        > or intentionally don't write const because writing it everywhere clutters up the code

        I don't often like being judgemental (at least publicly!), but I'd argue that's just people being very bad developers...

        You could argue having to add '&mut' at call sites everywhere (i.e. opposite to the way C++ does const in terms of call site vs target site) also clutters up the code in terms of how verbose it is, but it's still largely a good thing.

    • tonyedgecombe5 days ago
      >Rust enthusiasts online are often unpleasant

      The Ruby community seems the nicest, I wonder why that is.

  • 8s2ngy5 days ago
    Even though I am not as proficient with Rust as I am with the languages I use at work, it has quickly become my favorite language as I dive deeper into it. It has helped me connect many dots that were fuzzy in my mind, like static typing, enums, pattern matching, traits, compile-time safety checks, and so on. A fair argument can be made that these are not unique to Rust, but it presents them in a cohesive package that I have not seen in any other language. Add to that its well-integrated build system and the enthusiastic community of developers who are passionate about it, and its appeal is undeniable. The lessons I gain from Rust directly translate into a better way of reasoning about code written in other languages.
  • IX-1035 days ago
    One thing I don't like about Rust is implicit function return. Flow control should always be explicit. Given the other choices made in Rust to make flow control explicit (such as the absence of exceptions), I was surprised to find this choice.

    Fortunately the "return" keyword is allowed so I include it in my code to make it explicit. I just have to remember to look for it in any other code I'm reviewing.

    • joshfee5 days ago
      Returning control to the caller only really needs to be explicit if you're doing it in an arbitrary spot in the middle of the function because if you're at the end there's nothing to do _other than_ return control. For instance in other languages you don't need to explicitly say "go to the next iteration" at the end of a for loop, and if you want to do it before the end of the loop body you can `continue`. If "flow control should always be explicit", then should we be writing `continue` at the end of our loop blocks?

I think the other part of it is that it is just part of a cohesive language design where everything is an expression, including things like ifs, matches, etc. that would be control flow statements in other languages. It would be a little weird to say that functions are the only thing that has different semantics.

    • tonyedgecombe5 days ago
      I don't think match would work so well if blocks weren't expressions.
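A tiny illustration of both points: the match is an expression, and the function's value is its final expression, so no `return` is needed on the happy path.

    fn sign(x: i32) -> &'static str {
        // The whole match evaluates to a value...
        match x {
            0 => "zero",
            n if n > 0 => "positive",
            _ => "negative",
        }
        // ...and, being the last expression in the body, it is the return value.
    }

    fn main() {
        println!("{}", sign(-3));
    }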
  • mhsdef5 days ago
Nails it. I'm so tired of fighting incidental complexity: our tools.

    Speed is great but let me just focus on the business problems and write something durable.

  • adityajha365 days ago
Coming from a Python background, working with data-heavy analytics in finance, one of my biggest Rust-is-the-one moments has been because of Polars. It lets me do everything I could do with Pandas, but with added speed and safety. Lack of such packages (as far as I'm aware) in Java, C++, and other languages keeps an entire ecosystem of data-heavy workloads in Python. But most Python projects I've found are very hard to scale beyond the prototyping phases.
  • Devasta5 days ago
    For me, the amazing error messages are what makes it great. I am not a professional dev, but I regularly contribute to an open source project because of them.

I would never consider sending PRs in another language; how would I know if I am wasting the project's time by contributing bad code? With rust though, I have clippy and the compiler helping me along the way, like pair programming, so I can be fairly confident I'm sending something useful.

  • sesm5 days ago
    I'll swallow the bait and try asking some sceptical questions here.

    My understanding of Rust memory management is that move semantics and default lifetime-checked pointers are used for single threaded code, but for multi-threaded code Rust uses smart pointers like C++, roughly Arc = shared_ptr, Weak = weak_ptr, Box = unique_ptr.

    My question is: what extra static checks Arc has over shared_ptr? Same for Weak over weak_ptr, and Box over unique_ptr.

    • koito175 days ago
      Here's a static check Rust's Box<T> offers over C++'s std::unique_ptr<T>.

      The following program is obviously incorrect to someone familiar with smart pointers. The code compiles without error, and the program crashes as expected.

        % cat demo.cpp                                   
        #include <iostream>
        #include <memory>
      
        int main() {
          std::unique_ptr<std::string> foo = std::make_unique<std::string>("bar");
          std::unique_ptr<std::string> bar = std::move(foo);
      
          std::cout << *foo << *bar << std::endl;
        }
        
        % clang -std=c++2b -lstdc++ -Weverything demo.cpp
        warning: include location '/usr/local/include' is unsafe for cross-compilation [-Wpoison-system-directories]
        1 warning generated.
      
        % ./a.out 
        zsh: segmentation fault  ./a.out
      
      
      The equivalent Rust code fails to compile.

        % cat demo.rs
        fn main() {
          let foo = Box::new("bar");
          let bar = foo;
      
          println!("{foo} {bar}")
        }
      
        % rustc demo.rs
        error[E0382]: borrow of moved value: `foo`
         --> demo.rs:5:13
          |
        2 |   let foo = Box::new("bar");
          |       --- move occurs because `foo` has type `Box<&str>`, which does not implement the `Copy` trait
        3 |   let bar = foo;
          |             --- value moved here
        4 |
        5 |   println!("{foo} {bar}")
          |             ^^^^^ value borrowed here after move
      
        help: consider cloning the value if the performance cost is acceptable
          |
        3 |   let bar = foo.clone();
          |                ++++++++
      
      Not only does Rust emit an error, but it even suggests a fix for the error.
      • sesm5 days ago
        That's a very good example, thank you!
    • calo_star5 days ago
      > but for multi-threaded code Rust uses smart pointers like C++

That's not the whole story. There are also the Send and Sync marker traits, and the move-by-default semantics also make RAII constructs like Mutex<T> less error-prone to use.
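A minimal sketch of what that looks like in practice: sharing mutable state across threads compiles once it is wrapped in Arc<Mutex<...>>, and it is the Send/Sync bounds on thread::spawn that reject the unsynchronized alternatives.

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        let counter = Arc::new(Mutex::new(0_u32));

        let handles: Vec<_> = (0..4)
            .map(|_| {
                let counter = Arc::clone(&counter);
                thread::spawn(move || {
                    // The lock guard is an RAII value: it unlocks when dropped.
                    *counter.lock().unwrap() += 1;
                })
            })
            .collect();

        for handle in handles {
            handle.join().unwrap();
        }
        println!("{}", *counter.lock().unwrap());
        // Capturing a plain &mut u32 in the closures would not compile, and
        // neither would Rc<RefCell<u32>>, because Rc is not Send.
    }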

    • ninkendo5 days ago
      The big deal is that Rust will refuse to compile if you forget to use the appropriate smart pointer. You can’t just accidentally forget that you left a mutable reference in another thread: if you want a thread to use a reference, Rust will ensure you don’t accidentally let another thread access it simultaneously.
    • CJefferson5 days ago
      In rust, you still can’t get mutable access to any object in two threads at the same time in a non-thread safe way.

      In “very rough c++ish”, stuff in a shared ptr is immutable unless it is also protected by a mutex.

      • sesm5 days ago
        Ok, so do I understand correctly that Arc<MyType> would be read-only, and for write access I'll have to use Arc<Mutex<MyType>> or Arc<RwLock<MyType>>? So what about Mutex and RwLock, do they have any static checks associated with them? Do they introduce an extra layer of pointer indirection, or Rust resolves Arc<Mutex> and Arc<RwLock> to 2 different implementations with only 1 layer of indirection?
        • remram5 days ago
          The extra static checks are the mutable vs non-mutable references, and the Sync/Send traits. Mutex does not introduce indirection.
          • sesm5 days ago
            From Rust official docs:

            `let lock = Arc::new(Mutex::new(0_u32));`

            Doesn't this mean that Mutex introduces one more pointer?

            For example, in Java every Object has a built-in mutex, adding some memory overhead in order to remove one extra layer of pointer dereferencing. As far as I understand, Rust introduces an extra layer of pointer indirection with Mutex, which can hurt performance significantly with cache misses.

            • dannymi5 days ago
              I would suggest you read the source code of Mutex <https://doc.rust-lang.org/src/std/sync/mutex.rs.html#178-182> and then of UnsafeCell <https://doc.rust-lang.org/std/cell/struct.UnsafeCell.html>.

              So, the layout of Mutex<T> is the same as T and then some lock (well, obviously).

              >Rust introduces an extra layer of pointer indirection with Mutex, which can hurt performance significantly with cache misses.

              Why would there be an extra pointer dereference? There isn't.
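A quick way to check this for yourself (the exact numbers and internals vary by platform and std version, so treat them as illustrative only):

    use std::mem::size_of;
    use std::sync::Mutex;

    fn main() {
        // T is stored inline next to the lock state; in current std there is
        // no separate allocation and no extra pointer to chase.
        println!("u32:        {} bytes", size_of::<u32>());
        println!("Mutex<u32>: {} bytes", size_of::<Mutex<u32>>());
    }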

              • sesm5 days ago
                Thanks for the explanation!
            • kbolino5 days ago
              It is only Arc and not Mutex that allocates and thus has "extra" pointer indirection. A Mutex can live perfectly well on the stack, as long as it outlives the threads accessing it. Arc has to allocate, because it is meant to outlive the function that created it, and the only place to safely do that is on the heap.
            • oasisaimlessly5 days ago
              > Doesn't this mean that Mutex introduces one more pointer?

              No. That syntax is roughly equivalent to the following C++:

                  auto const lock = std::make_shared<std::pair<std::mutex, uint32_t>>(
                      std::piecewise_construct,
                      std::make_tuple(),
                      std::make_tuple(0));
              • sesm5 days ago
                Thanks, that's the best explanation!
  • isodev5 days ago
    Wholeheartedly agree. One can just be at ease that the compiler has it covered and one can focus on coding. Both the tooling and the syntax for these protections is kind of “out of the way” as well - I don’t have to learn extra keywords just to convince the compiler that what I’m doing is “safe”, it can do that by itself.
  • ausbah5 days ago
Without even reading the article: hands down, the expressiveness of algebraic data types and the pattern matching they enable is my absolute favorite part of rust.
  • dmezzetti5 days ago
Rust is a fine language and good for certain use cases. There is a segment of developers for whom the syntax and complexity are a bridge too far. The community tends to evangelize the greatness of Rust too much and to be unrealistic about this reality.
  • jurgenkesker5 days ago
    I enjoy Rust, but have settled on Kotlin for my language of joy. I use it in my day job for Android, but also recently started converting personal project backends and APIs to it (mainly from Ruby). I really like the ease and joy Kotlin gives me, and if I need a very high performance or very low level project/library, I'll write it in Rust. Rust is too slow for me (when coding), so a bit too low level I guess. In Kotlin I can express everything I want.
    • 8s2ngy5 days ago
      Two things concern me about Kotlin:

      1. The language and its future are heavily intertwined with JetBrains and their motivations. It's difficult to say whether this is a good thing or bad, but issues like the one discussed at https://discuss.kotlinlang.org/t/any-plan-for-supporting-lan... don't inspire confidence.

2. Java seems to be moving ahead at a rapid pace and is slowly absorbing many of the features that once distinguished Kotlin. This makes it difficult to justify introducing Kotlin at a company where Java is heavily used.

    • yodsanklai5 days ago
      > I enjoy Rust, but have settled on Kotlin for my language of joy.

      Isn't it comparing apples and oranges? Is there any good reason to use Rust if you can live with a GC?

      • joshfee5 days ago
        The whole premise of this article is non-performance reasons to love rust
      • hooli_gan5 days ago
        Yeah, like startup time and (cross-)compiling to a static binary. Although the compile time can get annoying.
    • akkad335 days ago
      What feature do you use Kotlin for that does not exist in modern Java (21+)?
      • lblume5 days ago
        Complete by-default null safety is a big point, extension functions are just nice, smart casts, proper data classes and operator overloading, and simple expressive functional stuff like range syntax and reified generic types for inline functions. In general the Kotlin language feels more usable, yet less bloated (wrt the actual code, not the features ofc).
        • akkad333 days ago
          Those are actually very good features. I guess null safety will come to Java with project Valhalla.
  • pryelluw5 days ago
I’ve been focusing more and more on rust for my personal projects and I agree with all of these. What I’m waiting for is the Django equivalent in rust. Dunno if it’s already here, but I’m hoping.

The one thing I do enjoy in rust is how you don’t need an excessive amount of tests to ensure it runs fairly correctly. I’ve spent more time writing tests in the last ten years using Python/JS than writing actual code. Such a waste of productivity.

    • ku1ik5 days ago
      I would argue web dev is just not Rust’s sweet spot, and I wouldn’t hold my breath for „Django for Rust”.
      • pryelluw5 days ago
        That’s what people said about Python before Django. They asked why not use PHP or Perl. But Django came along and established a strong set of patterns that make web dev much easier these days.
        • vkazanov5 days ago
          As somebody who wrote maybe half a million lines of python code, I still must say that it is Rails that became a model.

Django was (is?) the default tool for python, but I don't remember it being discussed as widely.

          • pryelluw5 days ago
Yes, Rails definitely on a bigger scale. Django within Python's ecosystem.

            IME fastapi has eroded some of Django’s hold but it’s still chugging along nicely. Hype has certainly died down because it is ancient by today’s standards. Still a very good tool and quite a lot of work available around it.

    • the__alchemist5 days ago
      Concur. I don't do web backends in rust for this reason. There are some good Flask analogs though.
      • pryelluw5 days ago
Mind sharing what libraries you use?
        • Aeolos5 days ago
          Not the same poster, but I am using Axum (webserver) + Maud (templating engine) + SeaORM (db adapter) + HTMX and I'm finding the end-result highly productive.

          Even though Rust is more verbose, and SeaORM has a few quirks, I am making faster progress in Rust than my existing mature Typescript + Node + apollo-graphql + ReactJS setup. Once I was over the initial setup & learning curve (about a week), I find myself able to spend more time on business logic and less time hunting runtime bugs and random test failures. There's something almost magical about being able to refactor code and getting it up and running in a matter of minutes/hours, compared to days for similar operations in Typescript.

          It's definitely still a young ecosystem that desperately needs a Django equivalent (loco.rs is worth keeping an eye out, but it's not there yet.) But I'm willing to tackle a bit of immaturity & contribute upstream to avoid the constant needless churn of the js world.

        • the__alchemist5 days ago
          For web? Django.
  • xvfLJfx95 days ago
For me it is the excellent tooling. Great package management, good compiler errors, automatic formatting and enforcing of guidelines, etc.
  • meltyness5 days ago
    Definitely will take the Pepsi challenge on 'go doc' with 'cargo doc'.

Go doc had a bunch of puzzling things going on / barely working; Rust doc is pretty much as described in the book and reference.

    Although in the C ecosystem, Doxygen is pretty nice, the docs there have to be 3d to account for the way C code bases can work.

  • slekker5 days ago
    The "billion dollar mistake" in Go is a non-issue: there's no security consequences (it panics safely) and it is the easiest kind of error to fix, it tells you exactly where the null pointer is.
    • hansvm5 days ago
      There's nothing I like more in the middle of the night than being paged about some skiddie making the webserver repeatedly panic ... safely.

      Surely you recognize the benefit in that sort of thing being pushed to a compiler error?

    • kolektiv5 days ago
I'd agree with that if I hadn't had commercial products written in Go crash in production. They're the kind of errors that slip through the net, and "it's easy to fix" is small comfort at 2am staring at a downstream 500 error.
    • continuational5 days ago
      Null pointer panics rarely happen where the null is introduced.

      Instead the null propagates somewhere else where you assume non-null, and you get the panic there.

      That's bad error reporting, and it only happens because Go lacks a proper nullable/option type.
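A minimal sketch of the contrast being drawn, with a made-up lookup function: the absent case has to be handled where the value is used, so it cannot quietly propagate and blow up somewhere downstream.

    fn find_user(id: u32) -> Option<String> {
        if id == 1 { Some("alice".to_string()) } else { None }
    }

    fn main() {
        // The compiler requires both arms; ignoring None is not possible.
        match find_user(42) {
            Some(name) => println!("hello, {name}"),
            None => println!("no such user"),
        }
    }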

    • kbolino5 days ago
      The problem with nil in Go is that there's no ergonomic way to deal with it. Neither the language nor the type system are amenable to anything better. The lack of the ternary operator or any other conditional value expressions means that handling nil properly always adds at least three lines of code for every dot in an expression. Even generics don't help much because "nil" is actually 5 different things (nil pointer, nil slice, nil map, nil channel, nil interface) none of which are interchangeable. And strings can't be nil, which probably seemed like a cool idea at first but now just means you see *string all over APIs even though string is a fat pointer under the hood.
    • VBprogrammer5 days ago
      Which is great if you are the exact person, team, and organisation which wrote the code and / or has access to the source code as well as the time and knowledge to know-how to fix it.
  • jtwaleson5 days ago
    I really enjoy Rust so far. I've been writing a relatively simple web app on Loco.rs with it, backed by a very fast multi-threaded git history parser. It's super fast and safe and catches most problems with compilation checks.

    My two main problems now are 1) that AI code assistants (Claude 3.5 Sonnet) + IDE support (VS Code / Cursor AI) are still much worse than with frontend frameworks like React/VueJS. The AI suggestions are mostly terrible. 2) Compilation is really really slow. There is no hot reload, and it often takes about 1 or 2 minutes for my new code to be live in my dev server. It's a real flow-state killer. It's a bit ironic as all the Python/Javascript frameworks are now super fast because they've been rebuilt on Rust.

  • tpoacher5 days ago
    Meanwhile, Bryan Lunduke blogged this today: https://lunduke.substack.com/p/massive-memory-leaks-in-syste...
    • demurgos5 days ago
      I haven't finished listening yet, but it should be made clear that Rust claimed to prevent double frees and use-after-free errors but never claimed to prevent memory leaks. Memory leaks were actually decided to be fully safe just before Rust 1.0, and there are multiple safe methods to leak memory. Search for "leakpocalypse" if you want more details. I also spent the day fixing a memory leak in an Angular (JS) project: you can create memory leaks in almost any language, including memory-safe ones (even garbage-collected ones).

      The tone of the podcast seems needlessly incendiary.

      EDIT: He actually quotes a part of the doc where they say that "memory leaks are hard but still possible". It's disappointing that Cosmic has leaks, but it's more about Cosmic than the language IMO.

      • lblume5 days ago
        You can't really cause memory leaks accidentally without the compiler and/or docs making you aware that your code may leak. The "easiest" way to do it accidentally would be an Rc loop, but that is harder to write than Rc::new_cyclic, which gives you the memory-correct Weak reference. When you stick to a fully owned tree memory model (or use bump allocators for fancier data structures) things usually turn out very easy, and I pretty much never had to worry about this.
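
        A minimal sketch of that pattern, assuming the usual back-link case (here the node just links back to itself to keep it short):

          use std::rc::{Rc, Weak};

          struct Node {
              // The back-link is held weakly, so no strong-reference cycle can form.
              back: Weak<Node>,
              value: i32,
          }

          fn main() {
              // new_cyclic hands us a Weak to the Rc being constructed, so the
              // back-reference never bumps the strong count.
              let node = Rc::new_cyclic(|me| Node { back: me.clone(), value: 42 });
              assert_eq!(Rc::strong_count(&node), 1);
              println!("{}", node.value);
          }
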
        • demurgos5 days ago
          > You can't really accidentally cause memory leaks without being made aware that your code may leak by the compiler and/or docs.

          It's true if you stick to the standard library, but I assume that the Cosmic framework adds a layer of complexity. I'm not familiar with their codebase, but I can easily see how an Rc cycle can appear behind a system with lots of shared references (e.g. callbacks, GUI components with backlinks, etc.). You can also get memory leaks through caches without a clear eviction policy, and cases of "memory amplification" where you hold an Rc to a large struct despite only needing a tiny amount of data from it.

          Basically, I agree that the standard library tries to steer you away from memory leaks. However, I also understand that it's not foolproof and can see how you can get in a situation with leaks when your approach is to take some tech debt to avoid short-term delays.

    • remram5 days ago
      Leaking memory is not what is referred to as "memory unsafe". This is clickbait.
    • mplanchard5 days ago
      Memory leaks have nothing to do with memory safety. This is covered in most introductory Rust material. It is in fact trivial to leak memory with Box::leak().
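
      For example, both of these compile without any unsafe (a minimal sketch):

        fn main() {
            // Box::leak deliberately turns a heap allocation into a
            // &'static reference that is never freed.
            let s: &'static mut String = Box::leak(Box::new(String::from("leaked")));
            s.push_str(" forever");
            println!("{s}");

            // mem::forget drops the handle without running the destructor,
            // so the Vec's buffer is never released either.
            std::mem::forget(vec![1, 2, 3]);
        }
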
      • tpoacher5 days ago
        Interesting, I actually didn't know this. Not a rust developer obviously.

        It does sound like the kind of factoid that should be super upfront though. Half the "better than C" comments I see here always seem to hint at C memory leaks being the big problem that Rust comes to fix. So if it's not that, I honestly don't know what is being referred to by memory safety in this context.

        • NobodyNada5 days ago
          "Memory safety" is referring to safety from null-pointer dereferences, use-after-frees, buffer overflows, data races, invalid pointer casts, etc.

          Safe Rust has no undefined behavior. Memory leaks are bad but they don't cause undefined behavior (your program might use more memory than it needs, but an attacker can't gain remote code execution from a memory leak). So Rust has tools (such as RAII) to help prevent accidental memory leaks, but it doesn't guarantee absolute freedom from memory leaks.

        • mplanchard5 days ago
          Re: comments about relative safety to C, I don't know specifically what kind of comments you're talking about, but Rust's memory safety guarantees are:

          - No data races: it is impossible in safe Rust for two threads to simultaneously mutate memory without appropriate guards

          - No reads/writes from memory that is not owned/allocated

          - Any location in memory may only have a single mutable reference at a time (and any number of immutable references)

          Combined, you eliminate a large class of memory-related bugs, including use-after-frees, double frees, and buffer under/overflows.
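
          As a concrete illustration of the aliasing rule, this (deliberately) does not compile:

            fn main() {
                let mut v = vec![1, 2, 3];
                let first = &v[0]; // immutable borrow of the Vec's buffer
                v.push(4);         // error: needs a mutable borrow while `first` is
                                   // still live; push may reallocate and leave
                                   // `first` dangling (a use-after-free in C terms)
                println!("{first}");
            }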

        • mplanchard5 days ago
          It's fairly up front in this portion of "the book," which is the standard learning material for new Rust programmers: https://doc.rust-lang.org/book/ch15-06-reference-cycles.html
  • threeseed5 days ago
    Surprised more people don't talk about panics and third party libraries.

    So a panic is when something happens that shouldn't and you want the app to just die. But the problem is that third party libraries can do this as well. And there is no way to wrap this behaviour.

    For example, I used a PDF library that would panic when the file was doing something not in the spec. And rather than me being able to put up a dialog that said "this PDF is invalid" my entire process would die. Not great for a desktop app.

    It is one of the more insane situations I've ever seen in programming in 30+ years. You literally have to beg third party developers to consider what is best for you rather than them.

    • qchris5 days ago
      There's such a degree of entitlement to this comment.

      > And there is no way to wrap this behaviour. [..]

      As a sibling comment mentioned, this is possible with std::panic::catch_unwind. That is prominent in the std::panic documentation (literally the first function for std::panic) and if you Google "rust stop panics", the first Stack Overflow result (third down on the page for me) describes this directly. Just about anyone who had put in a modicum of good-faith effort would have found this quickly.

      > You literally have to beg third party developers to consider what is best for you rather than them.

      I'm assuming this means third-party developers that you're paying and have signed a support contract with? Because if you mean any of the three Rust PDF libraries that I just looked at, those are written by open source developers who have no obligation to consider what is best for you instead of them, owe you exactly nothing, and for whom you should be, if anything, only thanking for doing some of the initial legwork that allows you to use that library at all. If you'd like a change, make a pull request or fork the library.

      > It is one of the more insane situations I've ever seen in programming in 30+ years.

      Great. You've been in the field a while; nothing written about here should surprise you.

      • prmph5 days ago
        > This function might not catch all Rust panics. A Rust panic is not always implemented via unwinding, but can be implemented by aborting the process as well. This function only catches unwinding panics, not those that abort the process.
        • qchris5 days ago
          The default behavior is unwind, and unless the library is targeting something like bare-metal embedded, it will in all likelihood never resort to an aborting panic.

          I'll bet $20 USD to the open source project of your choice that the authors of whatever PDF library was being referenced here did not go out of their way to abort on panic, and that it's just a normal unwind.

          • threeseed5 days ago
            But it stops being a normal unwind if I set panic=abort.

            I can legitimately want my app to fail if it’s in a bad state but not have third party libraries do this on my behalf.

      • rdsubhas5 days ago
        Unfortunately, there is a degree of entitlement in this reply as well. You've assumed they don't know what they're talking about; in fact, you assumed they don't even know how to Google.
        • qchris5 days ago
          I think you're using the term "assumed" incorrectly. Per Webster's dictionary[1]:

          > Presume is used when someone is making an informed guess based on reasonable evidence. Assume is used when the guess is based on little or no evidence.

          I'm not assuming they don't know what they're talking about, I'm asserting (or presuming) that they don't know what they're talking about based on supporting evidence showing that it is possible to catch panics. Similarly, I didn't say that they didn't know how to Google. I presumed it was likely they didn't put in a good-faith effort to do so, because in my judgement if they had, it would have been trivial to find the aforementioned information per my experience having just done the same.

          [1] https://www.merriam-webster.com/dictionary/assume

          • threeseed5 days ago
            I am aware of being able to catch panics.

            But the point is that I now need to do this with every use of a third-party library. And, for example, with pdf-rs it was happening on relatively minor things, e.g. an incorrect date format. And what if I want to set panic=abort on my app to prevent data corruption in my code?

            Setting panic in an app shouldn’t mean it is applied globally.

            • qchris5 days ago
              > But the point is that I need to now do this with every use of a third party library.

              Well, yes. You have to manage your dependencies (by either catching potential panics or forking/modifying them to meet your needs) or accept their behavior. You're using someone else's code for free; this is no one's responsibility but yours, nor is your convenience guaranteed. "This software is provided as is, without warranty" and whatnot.

              > And what if I want to set panic=abort on my app to prevent data corruption in my code.

              I obviously don't have direct insight into your application, but you could likely use std::process::abort if you feel that data corruption is a risk in a given circumstance (to be fair, I've never personally seen data corruption caused by an unwinding that would have been prevented with an aborting panic instead). Globally setting panic=abort is not necessarily the only approach to achieving your desired behavior.

              > Setting panic in an app shouldn’t mean it is applied globally.

              You could make a case for a more granular approach to specifying panic behavior. Sure. I don't even disagree with this. But do you see how that's moving the goalposts on your original comment? From "there's no way to wrap this behavior" to "It's possible, but I wish managing this was more convenient for my particular situation."

              • threeseed5 days ago
                > You have to manage your dependencies (by either catching potential panics or forking/modifying them to meet your needs) or accept their behavior

                And my point is that I have never had to do this with other languages before.

                Rust is the first where I need to actively worry about dependencies.

                And there is no way for me to wrap this behaviour in all cases, e.g. if I set panic=abort, or if the library has its own types that don't implement UnwindSafe.

    • Lorak_5 days ago
      I agree that libraries should avoid panicking, but it is not true that you can't do anything about it. You can wrap a call to such a library in https://doc.rust-lang.org/std/panic/fn.catch_unwind.html
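
      A minimal sketch of that (the panicking parser below is a stand-in for the third-party PDF call, not a real library API):

        use std::panic;

        fn parse_pdf(bytes: &[u8]) -> usize {
            if bytes.is_empty() {
                panic!("not a PDF"); // stand-in for a panic deep inside a dependency
            }
            bytes.len()
        }

        fn main() {
            // catch_unwind turns an unwinding panic into a Result, so the app can
            // show "this PDF is invalid" instead of dying. It does not catch
            // aborting panics (e.g. under panic = "abort").
            match panic::catch_unwind(|| parse_pdf(&[])) {
                Ok(n) => println!("parsed {n} bytes"),
                Err(_) => eprintln!("this PDF is invalid"),
            }
        }
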
      • prmph5 days ago
        > This function might not catch all Rust panics. A Rust panic is not always implemented via unwinding, but can be implemented by aborting the process as well. This function only catches unwinding panics, not those that abort the process.
        • Lorak_5 days ago
          As far as I know panics abort instead of unwinding in three cases:

          - When a panic happens during panic unwinding

          - When the application (not the dependency!) sets panics to abort in Cargo.toml

          - When the target doesn't support unwinding

          Which of those is the case for the desktop app described by the parent?

      • threeseed5 days ago
        Did you read the part about types needing to support UnwindSafe?

        I typically don’t control every type that I am interacting with.

        • DrMeepstera day ago
          There's a wrapper type, AssertUnwindSafe, to get around it. The whole concept of unwind safety was a mistake, so don't worry about using it.
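
          A minimal sketch of the wrapper in use (the &mut capture is exactly what makes the closure fail the UnwindSafe bound in the first place):

            use std::panic::{catch_unwind, AssertUnwindSafe};

            fn main() {
                let mut count = 0;
                // Without AssertUnwindSafe this wouldn't compile, because the
                // closure captures `count` by mutable reference.
                let result = catch_unwind(AssertUnwindSafe(|| {
                    count += 1;
                    panic!("boom");
                }));
                assert!(result.is_err());
                println!("count = {count}"); // still observable after the panic
            }
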
    • kfjawdffaw5 days ago
      Agreed. Rust crate authors DGAF and plant panics everywhere. Golang package authors are far more panic-averse, which I appreciate.