169 points by ibobev 8 days ago | 5 comments
  • dang7 days ago
    Related. Others?

    Effective Rust - https://news.ycombinator.com/item?id=38241974 - Nov 2023 (10 comments)

    Effective Rust (2021) - https://news.ycombinator.com/item?id=36338529 - June 2023 (204 comments)

    Edit: I've put 2024 in the title above because that's what the page currently says. But what's the most accurate year for this material?

    • Xaphiosis6 days ago
      Under Preface -> Rust Version, it says "The text is written for the 2018 edition of Rust", but it does seem to have been released in 2024. Interesting.
  • musicnarcoman7 days ago
    While I am only a Rust novice it seems to me like the "2.2 Item 11: Implement the Drop trait for RAII patterns" could use some kind of mention of Drop-leaks. I learned about it at https://doc.rust-lang.org/nightly/nomicon/leaking.html
    • Animats7 days ago
      Rust destructors are interesting.

      - You can't export a reference to the thing you are dropping. You can do that in C++. This prevents "re-animation", where something destroyed comes back to life or is accessed beyond death. Microsoft Managed C++ (early 2000s) supported re-animation and gave it workable semantics. Bad idea, now dead.

      - This is part of why Rust destructors cannot run more than once. Less than once is possible, as mentioned above.

      - There's an obscure situation with Arc and destructors. When an Arc counts down to 0, the destructor is run. Exactly once. However, the Arc countdown and the destructor run are not one atomic operation. It is possible for two threads to see an Arc in a strong_count == 1 state just before the Arc counts down. Never check strong_count to see if you are "the last owner"; that creates a race condition.[1] I've seen that twice now, and I've found race conditions that took a day of running to hit. Use strong_count only for debug printing. (A race-free alternative is sketched at the end of this comment.)

      - A pattern that comes up in GUI libraries and game programming involves objects that are both in some kind of index and owned by Arcs. On drop, the object should be removed from the index. This is a touchy operation. The index should use weak refs, and you have to be prepared to get an un-upgradable Weak from the index.

      - Even worse is the case where dropping an object starts a deletion of something else. If the second deletion can't be completed from within the destructor, perhaps because it requires a network transaction, it's very easy to introduce race conditions.

      [1] https://github.com/rust-lang/rust/issues/117485
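
      As a minimal sketch of that race-free alternative, using std's Arc::into_inner (not anything from the linked issue): at most one of the racing clones ever observes itself as the last owner.

          use std::sync::Arc;
          use std::thread;

          fn main() {
              let shared = Arc::new(String::from("expensive resource"));

              let handles: Vec<_> = (0..4)
                  .map(|_| {
                      let clone = Arc::clone(&shared);
                      // Racy: `if Arc::strong_count(&clone) == 1 { cleanup() }`
                      // can fire in zero or two threads. Race-free: into_inner
                      // returns Some(value) for at most one of the racing clones.
                      thread::spawn(move || Arc::into_inner(clone).is_some())
                  })
                  .collect();

              drop(shared); // release the main thread's clone too

              let winners: usize = handles.into_iter().map(|h| h.join().unwrap() as usize).sum();
              assert!(winners <= 1);
              println!("last-owner cleanups run: {winners}");
          }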

      • Rusky7 days ago
        > - You can't export a reference to the thing you are dropping. You can do that in C++. This prevents "re-animation", where something destroyed comes back to life or is accessed beyond death. Microsoft Managed C++ (early 2000s), supported re-animation and gave it workable semantics. Bad idea, now dead.

        >

        > - This is part of why Rust destructors cannot run more than once. ...

        This is a very backwards way to describe this, I think. Managed C++ only supported re-animation for garbage collected objects, where it is still today a fairly normal thing for a language to support. This is why these "destructors" typically go by a different name, "finalizers." Some languages allow finalizers to run more than once, even concurrently, but this is again due to their GC design and not a natural thing to expect of a "destructor."

        The design of Drop and unmanaged C++ destructors is that they are (by default) deterministically executed before the object is deallocated. Often this deallocation is not by `delete` or `free`, which could perhaps in principle be cancelled, but by a function return popping a stack frame, or some larger object being freed, which it simply does not make sense to cancel.

      • hamandcheese7 days ago
        > Never check strong_count to see if you are "the last owner".

        This made me think of the `im` library[0] which provides some immutable/copy on write collections. The docs make it seem like they do some optimizations when they determine there is only one owner:

        > Most crucially, if you never clone the data structure, the data inside it is also never cloned, and in this case it acts just like a mutable data structure, with minimal performance differences (but still non-zero, as we still have to check for shared nodes).

        I hope this isn't prone to a similar race condition!

        [0] https://docs.rs/im/15.1.0/im/index.html
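
        For what it's worth, the sound way to do that kind of "am I the only owner?" check in std is Rc::make_mut / Arc::make_mut, which clone the data only when it is actually shared. A rough sketch of the idea (not im's actual implementation):

            use std::rc::Rc;

            fn main() {
                let original = Rc::new(vec![1, 2, 3]);
                let mut shared = Rc::clone(&original);

                // Shared: make_mut clones the Vec before handing out &mut.
                Rc::make_mut(&mut shared).push(4);

                // Uniquely owned: mutated in place, no copy is made.
                let mut unique = Rc::new(vec![1, 2, 3]);
                Rc::make_mut(&mut unique).push(4);

                assert_eq!(*original, vec![1, 2, 3]);
                assert_eq!(*shared, vec![1, 2, 3, 4]);
                assert_eq!(*unique, vec![1, 2, 3, 4]);
            }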

      • pjmlp6 days ago
        Managed C++ is pretty much still around, kind of: it was replaced by C++/CLI in .NET 2.0, which is still used by many of us instead of dealing with P/Invoke annotations, is required by the WPF infrastructure, and is currently at C++20 support level.
    • charlotte-fyi7 days ago
      The important note here is that you can't rely on Drop running in order to satisfy the SAFETY comment of an unsafe block. In practice, in safe Rust, this knowledge shouldn't really change how you write your code.
    • loeg7 days ago
      The big foot-gun here is mem::forget rather than Drop itself. Although yeah, it is pretty surprising that it's considered safe.
      • vlovich1237 days ago
        It’s not that surprising when you consider that “unsafe” only concerns itself with memory safety. mem::forget is not unsafe from that perspective.

        > In the past mem::forget was marked as unsafe as a sort of lint against using it, since failing to call a destructor is generally not a well-behaved thing to do (though useful for some special unsafe code). However this was generally determined to be an untenable stance to take: there are many ways to fail to call a destructor in safe code. The most famous example is creating a cycle of reference-counted pointers using interior mutability.
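
        A minimal sketch of that famous example, leaking in 100% safe Rust via an Rc cycle (the Node type here is just for illustration):

            use std::cell::RefCell;
            use std::rc::Rc;

            // A node that can point at another node, which is enough for a cycle.
            struct Node {
                next: RefCell<Option<Rc<Node>>>,
            }

            impl Drop for Node {
                fn drop(&mut self) {
                    println!("Node dropped");
                }
            }

            fn main() {
                let a = Rc::new(Node { next: RefCell::new(None) });
                let b = Rc::new(Node { next: RefCell::new(None) });

                // a -> b and b -> a: each keeps the other's strong count above zero.
                *a.next.borrow_mut() = Some(Rc::clone(&b));
                *b.next.borrow_mut() = Some(Rc::clone(&a));

                // Neither Drop ever runs and the allocations are never reclaimed,
                // with no `unsafe` and no mem::forget in sight.
            }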

        • milesrout6 days ago
          Leaking memory is unsafe. It was considered unsafe for decades: a prime example of the sort of problem you get in C or C++ that you avoid with automatic memory management. Lots of real crashes, stability issues and performance issues have been caused by memory leaks over the years.

          Rust initially advertised itself as preventing leaks, which makes sense as it is supposed to have the power of automatic memory management but without the runtime overhead.

          Unfortunately, shortly before Rust's release it was discovered that there were some APIs that could cause memory corruption in the presence of memory leaks. The decision was made that memory leaks would be too complicated to fix before 1.0: it would have had to have been delayed. So the API in question was taken out and Rust people quietly memory-holed the idea that leak freedom had ever been considered part of memory safety.

          • zesterer6 days ago
            I think that's a retcon. Rust people did not "decide that leaking is safe" all of a sudden, that's cart-before-horse. Rust's memory model was still in its early stages back then and there was a belief (in hindsight, a mistaken belief) that destructors could be used as a means to guarantee memory safety. This turned out to be poorly reasoned and so, to preserve a consistent model of safety for other code, it was decided that having safety rely on the invocation of destructors was unsound. It's not possible to do this without also having leaks be safe, so that's the world as it is.

            If "is leaking memory safe?" is an issue of contention for you, I'd suggest that it's a good idea to do some reading on what memory safety is (I mean that in all sincerity, not as a dunk). Memory safety, at least by the specific and highly useful definition used by compiler developers, is intimately entangled with undefined behaviour, but memory leaking sits entirely outside this sphere. This is as true in C and C++ as it is in Rust.

            • steveklabnik6 days ago
              Another example of how your parent isn't really being accurate: memory leaks are also possible in garbage-collected languages, yet those have been considered memory safe since well before Rust even existed.

              It's not as if Rust invented the term "memory safety" or gets to define it.

              • milesrout6 days ago
                Memory leaks are not possible in garbage collected languages unless you retain references to data, but by definition that isn't a memory leak; that is exactly the behaviour that you want.

                Memory leaks are situations where memory is unrecovered despite there being no path to it from any active thread.

                • steveklabnik6 days ago
                  This is the same definition game you’re accusing Rust of making. Sometimes, you retain references you do not want, and therefore, leak. It’s something that comes down to programmer intent.

                  Talking about leaks this way is absolutely normal. Take https://stackoverflow.com/questions/6470651/how-can-i-create... for tons of examples.

                • vlovich1235 days ago
                  So for example, if I do:

                      static std::weak_ptr<std::array<uint64_t, 125000000>> weak;
                  
                          auto strong = std::make_shared<std::array<uint64_t, 125000000>>();
                          weak = strong;
                          // ...and then strong goes out of scope, leaving only weak
                  
                  That retains 1GiB of memory allocated without any ownership path, due to implementation details of std::shared_ptr. Is that a memory leak? There's no path to it from any active thread, and yet all of the memory is still tracked: if you destroy the weak_ptr, the 1GiB of memory gets reclaimed.
                  • milesrout5 days ago
                    std::shared_ptr uses reference counting, not automatic memory management (GC).
                    • vlovich1235 days ago
                      Reference counting is a form of GC / automatic memory management [1], but that's OK, it's a common mistake to make. What's less OK is the absolute intransigence in insisting that memory leaks aren't possible in tracing GCs, a claim that only holds by playing the same definitional games you accuse Rust of playing, i.e. limiting the kinds of things you count as leaks. For example, if I implement a cache as a Map<String, Object>, that's a memory leak if you define memory leaks as retaining memory longer than you actually need: the cache isn't using weak references to keep just a single live instance per key, or it simply never deletes/evicts entries. Bad software design can result in memory leaks, and defining that as not a memory leak because a live reference to the object exists somewhere is just playing the definitions game [2]

                      [1] https://en.m.wikipedia.org/wiki/Garbage_collection_(computer...

                      [2] https://stackoverflow.com/questions/4987357/can-there-be-mem...
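
                      A sketch of that cache-shaped leak in Rust terms (the `cache` here is just illustrative; the same shape applies to a Java Map): every entry stays reachable, so no collector will ever reclaim it, yet the memory is effectively lost.

                          use std::collections::HashMap;

                          fn main() {
                              let mut cache: HashMap<String, Vec<u8>> = HashMap::new();

                              // Imagine this loop running for the lifetime of a server:
                              // entries are inserted on every request and never evicted.
                              for request_id in 0..100_000u64 {
                                  cache.insert(request_id.to_string(), vec![0u8; 1024]);
                              }

                              // Everything is still "reachable", but the program will never
                              // touch the old entries again: a leak by any practical definition.
                              println!("cached entries: {}", cache.len());
                          }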

                      • milesrout5 days ago
                        You have misunderstood both the concept of a memory leak and the concept of automatic memory management. Good job!

                        No, reference counting is not garbage collection. I am fully aware of the ridiculous claim that it is, promoted by people like you. I fundamentally disagree. It has none of the same properties and doesn't work anything like GC.

                        • cmrx645 days ago
                          https://dl.acm.org/doi/10.1145/1028976.1028982

                          It’s not a “ridiculous claim”, but maybe you think cycle collectors don’t count?

                        • vlovich1235 days ago
                          Multiple very talented and very knowledgeable people have tried to help you understand and these are people with firsthand knowledge of the discussion at hand (I’m not counting myself because Steve and the other know language design and Rust better than I do). You insist on doubling down on your position instead of considering the possibility you’re wrong. Not much more I can do here. You can only lead a horse to water.
                          • milesrout5 days ago
                            I consider whether I am wrong often. It happens to be that I am not. It is quite haughty and rude of you to assume that I haven't considered it here just because I disagree with you.

                            There isn't much more you can do here because you are completely wrong. Instead of facing reality (that Rust, useful as it may be, only prevents a narrow class of correctness issues of varying importance) you double down on its marketing spin that all the things it fixes just happen to be all the important safety-related ones.

                            Just step back and actually think. I implore you.

          • chlorion6 days ago
            The difference is that leaking is not UB; the worst case is an OOM situation, which at worst causes a crash, not a security exploit. Crashing is also considered to be safe in Rust; panicking is common, for example, when something bad happens.
            • milesrout6 days ago
              Undefined behaviour is behaviour not defined by the language. So obviously Rust can define or undefine whatever it likes. It is not a sensible argument to say that something is safe because its behaviour is defined, or unsafe because it is undefined, when the whole point is that Rust's chosen definition of safety is just marketing.

              I admit a better example is race conditions.

              • kobebrookskC35 days ago
                No, undefined behavior is not just behavior that is not covered by the language definition. Undefined behavior is a term of art largely taken from C/C++, basically meaning that correct programs are assumed not to have these behaviors; for example, see https://en.cppreference.com/w/c/language/behavior. The definition of UB is not "just marketing": many major security vulnerabilities stem from UB (out-of-bounds access, use after free). The point of Rust is pretty much that you have to try hard to have UB, whereas in C/C++ it's basically impossible not to have UB.
                • vlovich1235 days ago
                  To add onto this, Rust actually does have UB, it's just impossible to reach without unsafe. One "sharper" edge is that its UB is much easier to trigger in unsafe code than one might expect, so writing unsafe Rust actually requires more skill than C++, which is why you should be very, very careful when reaching for it.
          • vlovich1236 days ago
            Was Box::leak ever considered unsafe? std::mem::forget is very similar to that.

            Crashes, stability, and performance issues are still not safety issues since there’s so many ways to cause those beyond memory leaks. I don’t know the discussion that was ongoing in the community but I definitely appreciate them taking a pragmatic approach and cutting scope and going for something achievable.
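
            For reference, both are callable from safe code today; a minimal sketch:

                fn main() {
                    // Box::leak: trade the box for a &'static mut and never free it.
                    let config: &'static mut String = Box::leak(Box::new(String::from("cfg")));
                    config.push_str(" (leaked)");

                    // mem::forget: the String's destructor never runs and its buffer
                    // is never reclaimed. No `unsafe` anywhere.
                    let s = String::from("never freed");
                    std::mem::forget(s);
                }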

            • milesrout6 days ago
              Box::leak was added two years later in November 2017 https://github.com/rust-lang/rust/commit/360ce780fdae0dcb31c...

              >Crashes, stability, and performance issues are still not safety issues since there’s so many ways to cause those beyond memory leaks.

              They aren't safety issues according to Rust's definition, but Rust's definition of "unsafe" is basically just "whatever Rust prevents". But that is just begging the question: they don't stop being serious safety issues just because Rust can't prevent them.

              If Rust said it dealt with most safety issues, or the most serious safety issues, or similar, that would be fine. Instead the situation is that they define data races as unsafe (because Rust prevents data races) but race conditions as safe (because Rust does not prevent them in general) even though obviously race conditions are a serious safety issue.

              For example you cannot get memory leaks in a language without mutation, and therefore without cyclic data structures. And in fact Rust has no cyclic data structures naturally, as far as I am aware: all cyclic data structures require some "unsafe" somewhere, even if it is inside RefCell/Rc in most cases. So truly safe Rust (Rust without any unsafe at all) is leak-free, I think?

              • Rusky6 days ago
                > Rust's definition of "unsafe" is basically just "whatever Rust prevents".

                It's not that circular.

                Rust defines data races as unsafe because they can lead to reads that produce corrupt values, outside the set of possibilities defined by their type. It defines memory leaks as safe because they cannot lead to this situation.

                That is the yardstick for what makes something safe or unsafe. It is the same yardstick used by other memory-safe languages- for instance, despite your claims to the contrary, garbage collectors do not and cannot guarantee a total lack of garbage. They have a lot of slack to let the garbage build up and then collect it all at once, or in some situations never collect it at all.

                There are plenty of undesirable behaviors that fall outside of this definition of unsafety. Memory leaks are simply one example.

                • milesrout5 days ago
                  >It defines memory leaks as safe because they cannot lead to this situation.

                  They can't now. They could up to and almost including 1.0. At that point the consensus was that memory leaks were unsafe and so unsafe code could rely on them not happening. That code was not incorrect! It just had assumptions that were false. One solution was to make those assumptions true by outlawing memory leaks. The original memory leak hack to trigger memory corruption was fairly fiendish in combination with scoped threads (IIRC).

                  >There are plenty of undesirable behaviors that fall outside of this definition of unsafety. Memory leaks are simply one example.

                  That is my whole point. It is a useless definition cherry-picked by Rust because it is what Rust, in theory, prevents. It does not precede Rust. Rust precedes it.

                  >It is the same yardstick used by other memory-safe languages- for instance, despite your claims to the contrary, garbage collectors do not and cannot guarantee a total lack of garbage. They have a lot of slack to let the garbage build up and then collect it all at once, or in some situations never collect it at all.

                  If it will eventually be collected then it isn't a memory leak.

                  Most actual safe languages don't let you write integer overflow.

                  • Rusky4 days ago
                    > They can't now. They could up to and almost including 1.0. At that point the consensus was that memory leaks were unsafe and so unsafe code could rely on them not happening. That code was not incorrect!

                    This is not how it worked, no. It was never memory leaks per se that led to unsoundness there. It was skipping destructors. You could have the exact same unsoundness if you freed the object without running the rest of its destructor first.

                    That part was the design choice Rust made- make destructors optional and change the scoped threads API, or make destructors required and keep the scoped threads API.

                    There is an underlying definition of memory safety (or more generally "soundness") that precedes Rust. It is of course defined in terms of a language's "abstract machine," but that doesn't mean Rust has complete freedom to declare any behavior as safe. Memory safety is a particular type of consistency within that abstract machine.

                    This is why the exact set of undesirable-but-safe operations varies between memory-safe languages. Data races are unsafe in Rust, but they are safe in Java, because Java's abstract machine is defined in such a way that data races cannot lead to values that don't match their types.

              • vlovich1235 days ago
                Sure, safety is a relative moving target. There's no way to prevent race conditions unless you have proofs. And then there's no way to enforce that your proof is written correctly. It's turtles all the way down. Rust sits on a Pareto frontier of safety for AOT high-performance languages. Even for race conditions, I suspect the tools Rust has for managing concurrency-related issues make it less prone to such issues than other languages.

                The problem is you're creating a hypothetical gold standard that doesn't exist (indeed I believe it can't exist), judging Rust against that faux standard, and complaining that Rust chooses a different one. That's the thing though: every language can define whatever metrics they want, and languages like C/C++ struggle to define any metric on which they beat Rust.

                > For example you cannot get memory leaks in a language without mutation, and therefore without cyclic data structures

                This does not follow. Without any mutation of any kind, you can’t even allocate memory in the first place (how do you think a memory allocator works?). And you can totally get memory leaks without mutation however you narrowly define it because nothing prevents you from having a long-lived reference that you don’t release as soon as possible. That’s why memory leaks are still a thing in Java because there’s technically a live reference to the memory. No cycles or mutations needed.

                > So truly safe Rust (Rust without any unsafe at all) is leakfree, I think?

                Again, Box::leak is 100% safe and requires no unsafe at all. Same with std::mem::forget. But even if you exclude APIs like that that intentionally just forget about the value, again nothing stops you from retaining a reference forever in some global to keep it alive.

                • milesrout5 days ago
                  What is a type system except a bunch of proofs? You can encode some program correctness properties into types. Elevating the ones you happen to be able to encode and calling them "safety" and the rest "correctness" is just marketing.

                  I am not creating a gold standard because as far as I am concerned, it is all just correctness. There aren't morally more and less important correctness properties for general programs: different properties matter more or less for different programs.

                  >Without any mutation of any kind, you can’t even allocate memory in the first place (how do you think a memory allocator works?).

                      data L t = E | C t (L t)
                      data N = Z | S N
                  
                      nums Z = E
                      nums (S n) = C (S n) (nums n)
                  
                  You cannot express a reference cycle in a pure functional language but they still have allocation.

                  However I don't know why I brought this up, because you can also eliminate all memory leaks by just using garbage collection - you don't need to have immutable and acyclic data structures.

                  >Again, Box::leak is 100% safe and requires no unsafe at all. Same with std::mem::forget.

                      #[inline]
                      pub fn leak(b: Box<T>) -> &'static mut T {
                          unsafe { &mut *Box::into_raw(b) }
                      }
                  
                  They are implemented using unsafe. There is no way to implement Box without unsafe.

                  If you retain a reference in a global then it is NOT a memory leak! The variable is still accessible from the program. You can't just forget about the value: its name is right there, accessible. That is not a memory leak, except by complete abuse of terminology. The concept of "inaccessible and uncollectable memory, which cannot be used or reclaimed" is a useful one. Your definition of a memory leak seems to be... any memory usage at all?

                  • vlovich1235 days ago
                      The unsafety is because of the lifetime laundering, not because the operation itself is unsafe. The compiler doesn't know that the lifetime of the underlying memory becomes 'static and decoupled from the lifetime of the consumed Box.

                    And while we’re at it, please explain to me how this hypothetical language that allocates on the heap without mutable state exists without under the hood calling out to the real mutable allocator somewhere.

                    > If you retain a reference in a global then it is NOT a memory leak!

                    > Your definition of a memory leak seems to be... any memory usage at all?

                    It’s just that you’re choosing to define it as not a memory leak. Another definition of memory leak might be “memory that is retained longer than it needs to be to accomplish the intended goal”. That’s because users are indifferent to whether the user code is retaining the reference and forgetting about it or the user code lost the reference and the language did too.

                    So from that perspective tracing GC systems even regularly leak memory and then go on a hunt trying to reclaim it when they’ve leaked too much.

                    More importantly as has been pointed out numerous times to you, memory safety is a technical term of art in the field (unlike memory leaks) that specifically is defined as the issues safe Rust prevents and memory leaks very clearly do not fall under that very specific definition.

                    • milesrout5 days ago
                      >the unsafety is because of the lifetime laundering not because the operation is unsafe.

                      You have missed the point. I said you can't leak memory in safe Rust. That is true. Box::leak isn't safe Rust: it uses the unsafe keyword. This is half the problem with the stupid keyword: it confuses people. I am saying that it requires the trustme keyword and you are saying it isn't inherently incorrect. Rust uses "unsafe" to mean both. But in context it is quite clear what I meant when talking about Box::leak, which you falsely claimed could be written in safe Rust.

                      >And while we’re at it, please explain to me how this hypothetical language that allocates on the heap without mutable state exists without under the hood calling out to the real mutable allocator somewhere.

                      What does the implementation have to do with anything? We are talking about languages not implementations. This isn't a difficult concept.

                      >It’s just that you’re choosing to define it as not a memory leak. Another definition of memory leak might be “memory that is retained longer than it needs to be to accomplish the intended goal”.

                      That isn't the definition. I am using the only definition of the term that any serious person has ever used.

                      >That’s because users are indifferent to whether the user code is retaining the reference and forgetting about it or the user code lost the reference and the language did too.

                      Users are completely irrelevant. It is logically impossible to ever prevent "leaks" that are just the storage of information. That isn't a leak, it is the intentional storage of information by the programmer. So it is a completely useless concept if that is what you want to use. It might be a useful concept in application user experience design or something but we are talking about programming languages.

                      On the other hand, "memory leaks" is a very useful concept if you use the actual definition because it is almost difficult to even conceive of a memory management strategy that isn't concerned with preventing memory leaks (proper). The "short lived program; free nothing" strategy is the only one I can think of, a degenerate case.

                      >More importantly as has been pointed out numerous times to you, memory safety is a technical term of art in the field (unlike memory leaks) that specifically is defined as the issues safe Rust prevents

                      No, it isn't! That is the definition that Rust people choose to use, which nobody used before 2015ish and is only widely used because Rust captured mindshare. It isn't some definition that predated Rust and which Rust magically fell right into.

                      Go back and look at mailing list threads, forum posts, papers, anything before Rust tried to steal the term "safety". It referred (and properly still refers) to programs. When people complained about manual memory management, the big complaint was that big C++ GUI programs (in particular) leaked memory like sieves. Nobody was particularly concerned about data races except the people implementing concurrency primitives in standard libraries etc. C++ didn't even have a defined memory model or standard atomics. Everyone was relying on x86's strong memory model in code all over the place. The big concern was avoiding manual memory management, memory leaks, and data corruption.

                      "Safe" didn't mean "has no data races but might have race conditions, has no use after free but might have memory leaks, and might have overflow bugs and SQL injections and improper HTML sanitisation". That would be a truly stupid definition. It meant "correct". The fanatical Rust community came along and tried to redefine "safe" to mean "the things we prevent". Rust's definition makes sense for Rust but it is Rust-specific because it is downstream of what Rust is capable of enforcing. Nobody would a priori come up with the particular subset of correctness properties that Rust happens to enforce and call them "safety". It is transparently a posteriori.

        • loeg7 days ago
          Yes, thanks, I read the article. Nevertheless, it's still a surprising footgun.
        • PartiallyTyped6 days ago
          Unsafe is concerned with unsafe blocks. NonZero::new_unchecked requires unsafe even though it’s not concerned with mem safety.
          • vlovich1236 days ago
            I believe the optimizer will do optimizations in response to the NonZero invariant, which can trigger UB if the value does contain a 0; that is a traditional safety issue for Rust, since safe code must have no UB. But even the value being corrupt (i.e. NonZero containing 0) can cause memory safety issues. But yes, Rust also uses unsafe to mark APIs that let you bypass enforcement of invariants, which is not what std::mem::forget does.
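
            A minimal sketch of what that unsafe contract buys (std's NonZeroU32; the niche layout is the usual explanation, not something from this thread):

                use std::num::NonZeroU32;

                fn main() {
                    // Safe constructor: checks the invariant, returns an Option.
                    let checked = NonZeroU32::new(42).expect("must be non-zero");

                    // Unsafe constructor: the caller promises the value is non-zero.
                    // Passing 0 here is undefined behaviour, because the compiler is
                    // allowed to use the all-zeros bit pattern as the None niche below.
                    let unchecked = unsafe { NonZeroU32::new_unchecked(42) };
                    assert_eq!(checked, unchecked);

                    // The payoff of the invariant: Option<NonZeroU32> is exactly as
                    // big as a plain u32.
                    assert_eq!(
                        std::mem::size_of::<Option<NonZeroU32>>(),
                        std::mem::size_of::<u32>()
                    );
                }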
      • 0x4577 days ago
        What's unsafe about implicitly "leaking" memory?
        • FpUser7 days ago
          Running out of memory and killing the OS, I would guess, unless the OS kills the misbehaving process first.
          • 0x4575 days ago
            So pretty much completely safe, then?
        • loeg7 days ago
          Destructors do more than just free memory.
          • 0x4575 days ago
            I specifically said leaking memory? Then again, a destructor that didn't run would be an application-level error, not a "safety in Rust" error.
            • loeg5 days ago
              You were responding to my comment, which had scope broader than just leaking memory. So, to suggest it is only about leaking memory is not really responsive.
  • johnisgood7 days ago
    Rust is so full of symbol soup.

      <'_>)
    
    is a very simple one, but there are ones with ~7 consecutive symbols, and there are a lot of symbols all over Rust code.

    How come it is in demand?

    Cool book though.

    • jeroenhd7 days ago
      I agree that Rust can look pretty weird to an untrained developer when lifetimes get involved. But, in Rust's defence, I haven't seen any other language write down lifetimes more concisely.

      The underscore could've been a name if the name mattered, which would be required in many languages. Rewriting it to <'something>) may help readability (but risks introducing bugs later by reusing `something`).

      Many C-derived languages are full of symbol soup. A group like <?,?>[]) can happen all over Java, for instance. Many of these languages have mixes of * and & all over the place, C++ has . and -> for some reason, making for some pretty unreadable soup. The biggest additions I think Rust added to the mix was ' for lifetimes (a concept missing from most languages, unfortunately), ! for a macro call (macro invocations in many other languages aren't marked at all, leaving the dev to figure out if println is a method or a macro), and ? to bubble up errors. The last one could've been a keyword (like try in Zig) but I'm not sure if it makes the code much more readable that way.
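
      To make that concrete, here's a small made-up signature using the three of them together (the function itself is just an illustration):

          use std::num::ParseIntError;

          // 'a is a named lifetime, ? bubbles the parse error up to the caller,
          // and the trailing ! marks println as a macro rather than a function.
          fn first_number<'a>(fields: &'a [&'a str]) -> Result<i64, ParseIntError> {
              let n: i64 = fields.first().copied().unwrap_or("0").parse()?;
              println!("parsed {n} from {} field(s)", fields.len());
              Ok(n)
          }

          fn main() {
              let fields = ["42", "7"];
              assert_eq!(first_number(&fields), Ok(42));
          }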

      If you know other programming languages, the symbols themselves fall into place quite quickly. I know what <'_> does in Rust for the same reason I know what <T, R> T does in Java, while a beginner or someone who hasn't learned past Java 6 may struggle to read the code. Out of all the hurdles a beginning Rust programmer will face, the symbols are probably your least concern.

      As for books, the Rust book on the Rust website is kept up to date pretty well. There are books for programmers coming from various other languages as well.

      The language itself hasn't changed much these past few years. The standard library gets extended with new features, but a book a few years old will teach you Rust just fine.

      In many cases, changes to the language have been things like "the compiler no longer treats this as broken (because it isn't)" and "the compiler no longer requires you to write out this long definition because it can figure that stuff out itself". I'd recommend running a tool called "clippy" in your IDE or on the command line; if you can leverage a modern language feature for better legibility, clippy will usually suggest it.

      • dietr1ch7 days ago
        > I agree that Rust can look pretty weird to an untrained developer when lifetimes get involved. But, in Rust's defence, I haven't seen any other language write down lifetimes more concisely.

        Can you do a lot better? I don't think so and it wouldn't help that much.

        The truth is that most of the time we want to rely on some inferred lifetime annotations, but will obviously need an escape hatch from time to time.

        Rust doesn't waste a lot of typing around the annotations, but if you were to improve Rust, you'd improve the implicit inference, not the syntax for being explicit.

        • flohofwoe6 days ago
          > Can you do a lot better? I don't think so and it wouldn't help that much.

          I think Rust could do a lot better at inferring lifetimes if the compiler were allowed to peek into called functions instead of stopping at the function signature - e.g. if it had a complete picture of the control flow of the entire code base (maybe up to the point that manual lifetime annotations could be completely eliminated?).

          IMHO it's not unrealistic to treat the entire codebase as a single compilation unit; Zig does this, for instance - it just doesn't do much so far with the additional information that could be gained.

          • rcxdude6 days ago
            It's a dangerous option: Rust already has long compile times, and expanding the space it has to analyze would only increase them. Not to mention you'd be much more dependent on the implementation details of a given function, and it'd become very messy. The fact that lifetimes have a specifiable interface is probably one of the main things that makes Rust's approach work at all.

            Rust has similar rules about type inference (of which lifetimes are a subset) at the function level as well. I think this was a lesson learned the hard way by Haskell, which does allow whole-program type inference: programmers working in it quickly learned you really want to specify the types at the function level anyway.

            • flohofwoe6 days ago
              > Not to mention you'd be much more dependent on the implementation details of a given function

              Hmm, but wouldn't that already be the case, since the manual lifetime annotation must match what the function actually does? E.g. I would expect compiler errors if the 'internal' lifetime details of a function no longer match its manual lifetime annotations (is it actually possible to create incorrect lifetime annotations in Rust without the compiler noticing?)

              Higher compile times would be bad of course, but I wonder how much it would add in practice. It's a similar problem as LTO, just earlier in the compile process. E.g. maybe some time consuming tasks can be moved around instead of added on top.

              • steveklabnik6 days ago
                > is it actually possible to create incorrect lifetime annoatations in Rust without the compiler noticing?

                In safe rust, no.

                Full inference is one of those things that seems like a no brainer, but there are a number of other more subtle tradeoffs that make it not a great idea. Speed was already mentioned, but it's really downstream from tractability, IMHO. That is, lifetime checking is effectively instantaneous today, and that's because you only need to confirm that the body matches the signature, which is a very small and local problem. Once you allow inference, you end up needing to check not just the body, but also the bodies of every function called in your body, recursively, since you no longer know their signatures up front. We tend to think of compiler passes as "speed" in the sense that it's nice to have fast compile times, but it also matters in the sense of what can practically be checked in a reasonable time. The cheaper a check, the more checks we can do. Furthermore, remember that Rust supports separate compilation, which is a major hindrance to full program analysis, which is what you need to truly infer lifetimes.

                Beyond complexity arguments, there’s also more practical ones: error messages would get way worse. More valid programs would be rejected if the inference can’t figure out an answer. Semver is harder to maintain, because a change in the body now changes the signature, and you may break your callers in ways you don’t realize at first.

                • vlovich1235 days ago
                  I would kill for Rust to spend some time figuring out what the ownership rule should be when I get the ownership wrong - compile cycles are cheap compared to me sitting and trying different approaches, or running an LLM to try to help me figure it out (hint: they largely fail miserably and cause me to waste more time). I was fighting one function in my codebase and couldn't figure out how to get the compiler to be happy despite seemingly having a correct definition, so I just broke the impasse by using unsafe, which isn't what I wanted to do. I know the compiler sometimes recommends a fix, but not in all cases and not in this particular case.

                  Another thing I'll point out is that TypeScript does full program inference, and while type-checking performance is a huge problem, it does a pretty good job. That obviously doesn't necessarily map to Rust and the problem domain it's solving (and maybe TS codebases are naturally smaller than Rust ones), but just putting that out there. Rust has made certain opinionated choices, but that doesn't mean that other choices weren't equally valid and available. SemVer is easily solvable: don't allow inference for pub APIs exported from the crate, which also largely solves the locality issue.

                  • steveklabnik5 days ago
                    Did you check your unsafe with Miri? It's possible you were trying to do something that isn't actually possible, locally speaking.

                    > I’ll point out is that TypeScript does full program inference

                    Do you have a citation for this? I don't believe this is the case, though I could be wrong. I actually spent some time trying to find a definitive answer here and couldn't. That said,

                    > Rust has made certain opinionated choices but that doesn’t mean that other choices weren’t equally valid and available.

                    This is true for sure; for example, TypeScript is deliberately unsound, and that's a great choice for it, but does not make sense for Rust.

                    > SemVer is easily solvable - don’t allow inference for pub APIs exported from the crate which also neatly largely solves the locality issue.

                    It helps with locality but doesn't solve it, as it's still a non-local analysis. The same problems fundamentally remain, even if the scope is a bit reduced.

                    • vlovich1235 days ago
                      Have you ever had success with Miri on non-trivial programs? Here's a reduced test case which does show it's safe under Miri but for the life of me I can't figure out how to get rid of the unsafe: https://play.rust-lang.org/?version=stable&mode=debug&editio...

                      > Do you have a citation for this? I don't believe this is the case, though I could be wrong. I actually spent some time trying to find a definitive answer here and couldn't. That said,

                      No, and thinking about it more I'm not sure about the specific requirements that constitute full program inference, so it's possible it's not. However, I do know that it infers the return type signatures of functions from their bodies.

                      > This is true for sure; for example, TypeScript is deliberately unsound, and that's a great choice for it, but does not make sense for Rust

                      Sure, but I think we can agree that the deliberate unsoundness is for ergonomic and pragmatic compatibility with JS, not because of the choice of inference.

                      I'm not arguing Rust should change its inference strategy. Of all the things, I'd rate this quite low on my wishlist of "what would I change about how Rust works if I could wave a magic wand".

                      • cmrx645 days ago
                        Note that by creating the reference to a local and passing it up through the callback, you are using a fresh region that can't possibly outlive any one of the ones you are generic over. Fundamentally, that callback could stash the reference you pass it into state somewhere, and now the pointer has escaped and is invalidated as soon as that iteration of the loop ends.

                        See that the definition of Group is tying those together. Instead, you can split them apart and maybe use HRTB to ensure the closure _must_ be able to treat the lifetime as fresh? But then you’ll probably have other issues…

                        … which can largely be circumvented simply by pinning, in your reduced example, which probably doesn’t retain enough detail.

                        https://play.rust-lang.org/?version=stable&mode=debug&editio...

                        My suggestion to avoid the pain? Use ArcStr.

                        • vlovich1235 days ago
                          But why does pinning solve the issue? Fundamentally the lifetime of the future is unchanged as far as the compiler is concerned so in theory the callback should be capable of doing the same stashing, no?
                          • cmrx645 days ago
                            The lifetime _is_ changed; this lets you use the lifetime from the HRTB instead of the function generics. It's not so much the pinning itself that does it, for the type system, but using the trait object enables referring to that HRTB to require a truly generic lifetime (and then pinning comes along for the ride).
                            • vlovich1235 days ago
                              Switched to 2024 and asyncfn completely removes the need for half the annotations :)
                      • steveklabnik5 days ago
                        > Have you ever had success with Miri on non-trivial programs?

                        The key is to isolate the unsafe code and test it directly, so you're not really doing it with whole programs. At least that's what I try to do. Anyway, was just curious!

                        (I don't have anything to say about the specific code here that cmr didn't already say)

                        > Sure but I think we can agree that the deliberately unsound is for ergonomic and pragmatic compatibility with JS,

                        Oh absolutely, all I meant was that because they're starting from different goals, they can make different choices.

      • dbdoskey7 days ago
        (not OP) I love Rust, but I just think that using ' for lifetimes was a huge mistake, and using <> for templates (rather than something like []) was a medium mistake.

        There is something about how the brain is wired such that using ' for lifetimes just triggers the wrong immediate response to it.

        Something like this would look so much nicer IMHO [$_], compared to this <'_>.

        • Klonoar7 days ago
          I cannot imagine using syntax that’s largely reserved across languages for array indexing for such a completely unrelated topic.
        • mmoskal7 days ago
          This comes from ML (as in SML or OCaml), where 'a reads "alpha" and is a type parameter.
        • j-krieger6 days ago
          > using <> for templates (rather than something like []) was a medium mistake

          I never get this take. Array indexing is done with []. This would just confuse the hell out of me.

          • estebank6 days ago
            It has the benefit of making numeric comparisons unambiguous and trivial to parse. You'd use a different syntax for array indexing.
        • dralley7 days ago
          I completely disagree that [$_] looks nicer than <'_>.
      • kazinator6 days ago
        > Many C-derived languages are full of symbol soup.

        C is not full of symbol soup though.

        It is more full of symbol soup than Pascal or Modula 2, and back in the day when C was taking over other such languages, there were lots of complaints about C's syntax being like "line noise" and whatnot.

        Rust takes it to a whole new level though.

        • johnisgood5 days ago
          Yeah, I wonder what people are referring to when they say C is full of symbol soup. I mentioned in my other comment that C is not, just like Common Lisp is not (despite the parentheses); its syntax is pretty simple.
      • colonial6 days ago
        re: try keyword in Rust - this is actually a thing on nightly, although instead of bubbling up errors directly, it creates a scope (within which ? is usable) that evaluates to a Result.
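
      Roughly, on a nightly toolchain (assuming the try_blocks feature gate; the syntax may still shift):

          #![feature(try_blocks)] // nightly only

          fn main() {
              // The ?s bail out of the try block, not the enclosing function;
              // the block as a whole evaluates to a Result.
              let sum: Result<i32, std::num::ParseIntError> = try {
                  "1".parse::<i32>()? + "oops".parse::<i32>()?
              };
              assert!(sum.is_err());
          }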
    • stouset7 days ago
      Symbols are just other letters in the alphabet. Something like <'_> is as natural for me to read at this point as any of the other words in this sentence.

      Math is also symbol soup. But those symbols mean things and they’ve usually been designed to compose nicely. Mathematicians using symbols—just like writers using alphabets—are able to use those symbols to concisely and precisely convey complicated concepts to one another.

      I guess my point is that symbols shouldn’t be looked at as inherently a positive or negative thing. Are they clear and unambiguous in their use? Do they map clearly onto coherent concepts? When you need to compose them, is it straightforward and obvious to do so?

      • jcelerier7 days ago
        > Math is also symbol soup. But those symbols mean things and they’ve usually been designed to compose nicely. Mathematicians using symbols—just like writers using alphabets—are able to use those symbols to concisely and precisely convey complicated concepts to one another.

          I just don't understand why one would take maths of all things as a positive example of something readable, when it's widely known to be utterly inscrutable to most humans on earth, and even then many papers have differing conventions, using the same symbol for things that are sometimes widely different and sometimes barely different.

        • pharrington7 days ago
          Literally every language is inscrutable to most humans on Earth. But they all work fairly well for those in the club that know them!
      • Etheryte7 days ago
        I think many programming languages could benefit if we had an easy way to have both custom symbols and a convenient way to input them without extra friction. Take APL for example, once you know the language it's incredibly expressive, but the overhead to typing it is so strong that many use custom keyboards/caps.
        • jonahx7 days ago
          Uiua (https://www.uiua.org/), broadly in the APL lineage, solves this problem nicely.

          Like APL, it has a set of well-chosen symbols, but each symbol has an english name you can type just as you would a function name in another language, and it's automatically converted to the symbol when you run it.

        • ufo7 days ago
          I wish compose keys were more prevalent. There's something nice about typing -> and getting →
        • bombela7 days ago
          To be fair the basic ASCII keyboard is also default in US/Britain. And most people assume that's all they get.

          I have always used the "international" version of the US English keyboard on Linux.

          And I can enter all common symbols pressing AltGr or AltGr-Shift. I also use right Ctrl as a compose key for more. I would be hard pressed to remember what combo to press; after years it's just muscle memory.

          But how do you find out what layout and what compose key does what? Good luck. It's as documented as gestures and hidden menus on iOS and macOS. sigh.

      • johnisgood7 days ago
        Well, I wish I could find the ones I have seen in the wild.

        Perhaps HRTBs and Fn traits, or double turbofish generics. I really cannot remember sadly.

        • stouset7 days ago
          Even something like foo::<'_, [T]>() is just not that hard to follow. Again, the symbols involved all compose nicely and are unambiguous. And frankly, you just don't need something like that all that often (and when you do, there are usually other alternatives if you're really put off by the symbols).
          • johnisgood7 days ago
            Someone mentioned the use of ")?)?" (in terms of error handling), I am quite put off by this, too. :P

            Anyways, I will try to look for the code, it is somewhere in my comment history but I have left way too many comments, so no promises.

            • stouset7 days ago
              I would one million percent rather type (and read)

                  foo(bar()?)?
              
              over something like

                  a, err := bar()
                  if err != nil {
                      return nil, err
                  }

                  b, err := foo()
                  if err != nil {
                      return nil, err
                  }
              
              But also even better is just

                  let a = bar()?;
                  let b = foo()?;
              • johnisgood7 days ago
                I prefer ("if a, err := bar() {"), the same things you said applies here, too. I write a lot of Go and I can glance through it quickly, there is no cognitive overhead for me.

                Edit: actually, it was someone else who said this: "Human brain has a funny way of learning how to turn off the noise and focus on what really matters.".

                • stouset7 days ago
                  The difference is, there is no room for bugs with ?. Zero. None.

                  I have fixed (and frankly, caused) many bugs in golang code where people’s brains “turned off the noise” and filtered out the copypasta’d error handling, which overwrote the wrong variable name or didn’t actually bubble up the error or actually had subtly wrong logic in the conditional part which was obscured by the noise.

                  Frankly, learning to ignore that 80% of your code is redundant noise feels to me like a symptom of Stockholm syndrome more than anything else.

                  One symbol to replace three lines of identical boilerplate is no less explicit and dramatically clearer.

                  • jicea6 days ago
                    It's even nicer in Rust: there can be an "implicit" conversion between the error raised by foo and bar:

                        fn foo() -> Result<(), FooError> {
                            bar()?;
                            Ok(())
                        }

                        fn bar() -> Result<(), BarError> { /* ... */ }
                    
                    If FooError can be created from BarError, the compiler will insert the conversion call and errors bubble up nicely.
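
                    Putting the pieces together into something that compiles, as a minimal sketch with the placeholder error types above:

                        struct BarError;
                        struct FooError;

                        // The ? inside foo() calls FooError::from(BarError)
                        // automatically whenever bar() returns Err.
                        impl From<BarError> for FooError {
                            fn from(_e: BarError) -> Self {
                                FooError
                            }
                        }

                        fn bar() -> Result<(), BarError> {
                            Err(BarError)
                        }

                        fn foo() -> Result<(), FooError> {
                            bar()?; // BarError silently converted into FooError here
                            Ok(())
                        }

                        fn main() {
                            assert!(foo().is_err());
                        }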
                  • johnisgood6 days ago
                    Is it not caught by the compiler or linter though? Even variable shadowing is caught.
                    • stouset6 days ago
                      Some things are. Some things aren’t. At one point, you could write

                          nil, err 
                      
                      without the return and it would happily compile. It’s also tragically easy for actual logic bugs to be obscured by all the boilerplate.

                      It’s not like three lines of error handling copypasta is some optimal amount. If golang required ten lines of boilerplate error handling, you’d still have just as many people arguing in favor of it because they “like it to be explicit” when in reality it’s verbose and the real underlying argument is that it’s what they’ve grown accustomed to. `?` is no less explicit, but it is less unnecessarily verbose.

                      • johnisgood5 days ago
                        What I can say here is that I am not one of these people who would argue in favor of Go's error handling were it 10 lines. shudders. I would definitely not use it, just like how I do not use Java for quite many reasons (unless I get paid for it, but would rather not). :P
    • dcminter7 days ago
      > How come is it in demand?

      It's a curly-brace language with some solid decisions (e.g. default immutability) that produces static binaries without needing a virtual machine, while making some guarantees that eliminate a swathe of possible bug types at compile time.

      As others note, the symbol soup is something you learn to read fluently and isn't worth getting hung up on.

      Basically it occupies something of a sweet spot in the power/useability/safety space and got a decent PR shove by coming out of Mozilla back when they were the cool kids. I like it a lot. YMMV.

      • colonial6 days ago
        "Curly-brace language" is a good way to put it. Rust does an excellent job of cribbing features that aren't mainstream and giving them a more intuitive name and design.

        Most people will conk out if you start talking about how your language has "algebraic data types." But if you rephrase that as "we let you put payloads in your enum," well, that piques people's interest. It certainly worked on me.
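
        A tiny example of that "payloads in your enum" pitch, for anyone who hasn't seen it:

            // Each variant carries its own data, and `match` makes you handle
            // every case before the code compiles.
            enum Shape {
                Circle { radius: f64 },
                Rect { width: f64, height: f64 },
            }

            fn area(shape: &Shape) -> f64 {
                match shape {
                    Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
                    Shape::Rect { width, height } => width * height,
                }
            }

            fn main() {
                let shapes = [
                    Shape::Circle { radius: 1.0 },
                    Shape::Rect { width: 2.0, height: 3.0 },
                ];
                let total: f64 = shapes.iter().map(area).sum();
                println!("total area: {total:.2}");
            }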

      • n_u5 days ago
        > It is a curly-brace language

        What does this mean? Not Python?

        • dcminter5 days ago
          A language that demarks code blocks with curly braces {} instead of whitespace, begin/end keywords, round brackets (), etc.

          They often share other syntax similarities but not one particular common set across all of them.

    • airstrike7 days ago
      <'_> is one of the most basic symbols in Rust. Reading that is almost like reading the letter 'a after just some very modest amount of time with the language.

      > How come is it in demand?

      Because there's a lot more to the language than just those not-really-unfamiliar symbols

    • gauge_field7 days ago
      Rust's design is more in the mentality of "if it compiles, it is good enough", leaving less room for runtime issues to occur unexpectedly, as dictated by type and memory safety. So it requires more type info (unless you use unidiomatic unsafe code) and negotiating with the borrow checker. But once you internalize its type system and borrow checker, it pays off if you care about compiler-driven development (instead of dealing with errors at runtime).
    • dhruvrajvanshi7 days ago
      > How come it is in demand?

      Because it's a complicated language for building extremely low level things, when you have no other choice. IMO it's not the right tool for high level stuff (even though it does have some stuff which higher level languages should probably borrow).

      The only other language that directly competes with Rust IMO is C++, which is equally full of symbol soup.

      • worik7 days ago
        > IMO it's not the right tool for high level stuff

        I thought that for a long time. But the more time I spend in languages like TypeScript (Semi-Type Script, more accurately) and Swift, the more I yearn for Rust.

        It is not the right tool for scripting, true.

        • dhruvrajvanshi6 days ago
          I think most server type software can trade off the borrow checker for a GC while still benefiting from other Rust stuff.

          IMO there's still a need for a higher-level Rust, where you don't need that last 20% of the performance and control.

          Some people say that OCaml is the high-level Rust, but I think it's got a lot of gaps that Rust doesn't have.

          • colonial6 days ago
            Where OCaml lost me was the packaging and building story. Dear Lord am I spoilt by Cargo.

            Nice language otherwise.

            • johnisgood5 days ago
              Are you referring to dune (building) and/or opam?
            • dhruvrajvanshi6 days ago
              Yeah absolutely. Cargo is one of the highlights of Rust. Completely no nonsense.
        • gauge_field7 days ago
          Yeah, I feel that. Not the entire language, but many of its choices, like error handling and sum types (with exhaustive enum matching), especially when writing Python.
          • dhruvrajvanshi6 days ago
            Yeah this is the stuff I meant when I said high level languages should borrow from Rust.

            It's also good to remind people that these things were borrowed by Rust from other languages too. Primarily the ML family of languages.

      • timeon7 days ago
        I find it fine for high-level stuff as well. I've never understood complaining about syntax (in any language).
        • dhruvrajvanshi6 days ago
          That's your opinion and I respect it, especially the bit about complaining about syntax. There's no other language directly competing with Rust that has less syntax.

          My opinion is that in Rust you have to make decisions on certain things which are taken care of for you by the garbage collector in other languages.

          Should you store a reference or value in your struct? You can't just change it without modifying other places. I understand that this gives you the control to get the final 20% of performance in certain places but it's still lower level than other languages.

          You could say just spam Arc everywhere and forget about references, but that itself is a low level decision that you make.
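
          To illustrate the kind of decision I mean (the types here are made up):

              use std::sync::Arc;

              struct Config { url: String }

              // Borrowing: cheap, but the struct now carries a lifetime that
              // propagates to everything that holds it.
              struct BorrowingClient<'a> {
                  config: &'a Config,
              }

              // Owning via Arc: no lifetime to thread through, at the cost of a
              // reference count, and it's still a choice a GC language never asks you to make.
              struct SharedClient {
                  config: Arc<Config>,
              }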

        • johnisgood6 days ago
          Syntax matters though, just like how some people do not like Lisp because of its parentheses.
    • satvikpendem7 days ago
      Now try BQN (Advent of Code 2020 Problem 2):

          Split ←((⊢-˜+`׬)∘=⊔⊢)                                               
           input2←' 'Split¨•file.Lines "../2020/2.txt"  # change string to your file location
          
          Day2←{                                                               
             f←⊑{(⊑)+↕1+|-´}‿{-1}  # Select the [I]ndex generator [F]unction
             I←{F •BQN¨  '-' Split ⊑}  # [I]ndices used to determine if the     
             C←{⊑1⊑}                   # [C]haracter appears in the             
             P←{⊑⌽}                    # [P]assword either                      
             Part1←(I∊˜·+´C=  P)¨        # a given number of times                
             Part2←(1= ·+´C=I⊏P)¨        # or at one of a pair of indices         
             ⊑+´◶Part1‿Part2                                                        
          }
          
          •Show { Day2 input2}¨↕2
    • kshri247 days ago
      Lifetimes and annotations only look like symbol soup initially (when you have little to no experience in Rust). The more proficient you become in Rust, the more you end up ignoring them completely. Sort of like the ads you see (or don't) in Search. The human brain has a funny way of learning how to turn off the noise and focus on what really matters.
    • j-krieger6 days ago
      Initial Rust development set out to avoid symbol soup. With some switches in leadership, this was forgotten.
      • johnisgood6 days ago
        Thanks, this is interesting to know.
        • steveklabnik6 days ago
          Fwiw I don’t think your parent is lying, but I also don’t feel it’s really accurate. If you read https://graydon2.dreamwidth.org/307291.html for example, there are some references that imply this, but it’s not really that “less symbols” was a goal so much as a secondary effect of other choices. Graydon wanted a simpler language, and that implies simpler syntax, not the other way around. Even the grammar bit isn't really about "symbol soup."

          Early Rust had other sorts of things that a lot of folks would consider readability problems unrelated to symbols too: no keyword was allowed to be over five characters, so return was ret, continue was cont, etc.

          • j-krieger5 days ago
            It's not a lie; it's a loose quote of Steve Klabnik from when I asked him on Reddit.
            • steveklabnik5 days ago
              Ha! Well maybe my opinion has changed over time. To be honest, I struggle to call Rust “symbol soup” now or then; other than lifetimes, which is just one symbol, I don’t think Rust is a particularly symbol heavy language, or at least, not much more than any other curly brace and semicolon language.
              • johnisgood5 days ago
                Well, if you can have (and projects do have) >5 consecutive symbols, then it is symbol heavy. I am pretty sure I made this comment a long time ago with an example but paging on HN is dreadful and time-consuming. I will try to look for it. It was on GitHub. I came across it when I was interested in Rust and checked somewhat popular Rust codebases.

                Can you think of such cases?

                • steveklabnik5 days ago
                  I think it also depends on how you think of symbols; I see "::" as a single operator, not two symbols. Do () and <> count as individual symbols? I believe you do, given that you have an example upthread.

                  If those count, well, I can construct something, but it's not something I've used directly. Four isn't unheard of if you're going by those rules, but five is a bit extra.

                  You cited )?)?; a little while ago; I personally would write this code like the final example over here: https://news.ycombinator.com/item?id=43234284
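
                  Roughly the difference in shape, with made-up functions (not a claim about what the original code did):

                      use std::fs;

                      struct Config { threads: usize }

                      fn parse_config(text: String) -> Result<Config, std::num::ParseIntError> {
                          Ok(Config { threads: text.trim().parse()? })
                      }

                      // Nested calls: this is where the `)?)?;` pile-up comes from.
                      fn load_nested(path: &str) -> Result<Config, Box<dyn std::error::Error>> {
                          let config = parse_config(fs::read_to_string(path)?)?;
                          Ok(config)
                      }

                      // Same logic with an intermediate binding; no symbol pile-up.
                      fn load_split(path: &str) -> Result<Config, Box<dyn std::error::Error>> {
                          let text = fs::read_to_string(path)?;
                          let config = parse_config(text)?;
                          Ok(config)
                      }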

                  • johnisgood5 days ago
                    Yes, I consider "::" to be two symbols. Also, yeah, I am against ")?)?", but I have seen "worse" in the wild. I think I will have to look for what I saw before we continue. I might not be able to reply to this comment, however.
    • booleandilemma7 days ago
      What is this constant push in software development to dumb everything down to the lowest common denominator?

      I don't see it in other fields, at all.

      • codr77 days ago
        I have to say I have no trouble seeing it absolutely everywhere.
      • the_gastropod7 days ago
        Off the bat: I like Rust. I'm still very much a novice with it, but I enjoy it.

        But almost the entirety of Computer Science is based on abstractions, because they're helpful to "dumb down" some details that aren't super-important for our day-to-day work. E.g., writing TCP protocols directly in Assembly would be too fine-grained a level of detail for most people's usual work, and using some existing abstraction is "good enough" virtually all of the time (even though we might be able to optimize things for our use cases if we did drop down to that level).

        There exists programming work where fiddling with lifetimes is just too fiddly to be worthwhile (e.g., web development is probably more than fine using a good ol' garbage-collected language). This isn't about "dumbing down" anything; it's about refocusing on what's important for the job you're doing.

      • johnisgood6 days ago
        I am against dumbing things down, too (although I do not see its relevance to my comment), but for example I have no issues with OCaml, C, Factor, Ada, Common Lisp, etc. It is just a personal preference anyways.
      • worik7 days ago
        > dumb everything down to the lowest common denominator

        What do you mean?

    • DistractionRect7 days ago
      It's more batteries-included, and the packaging/ecosystem story is better than the alternatives. Certain safety guarantees are a nice-to-have.

      If you just want a better C/C++, AFAIK that's Zig, but I have no experience with it.

      • worik7 days ago
        > the packaging ecosystem story

        I love Rust, I am a devotee and an advocate.

        But the packaging system, more importantly the lack of a comprehensive system crate, is one of the greatest weaknesses of Rust.

        A simple programme can pull in hundreds of crates from goodness knows where, written by Dog knows who, bringing all sorts of uncertainty.

        There are workarounds, but they eat up time that could be used far more productively.

      • no_wizard7 days ago
        An aside question I have is what’s the best beginner Rust book out there that is up to date?

        I've been learning Rust off and on, and I now have a more serious need to get up to speed with it, but I'm unsure where it's best to start.

        • kshri247 days ago
          In this order:

          1. The Rust Book (Free) - https://doc.rust-lang.org/book/

          2. Rust by Example (Free) - https://doc.rust-lang.org/rust-by-example/

          3. Rust Atomics and Locks - https://marabos.nl/atomics/

          4. Rust in Action - https://www.rustinaction.com/

          5. Rust for Rustaceans - https://rust-for-rustaceans.com/

          Also Jon Gjengset's channel is immensely valuable: https://www.youtube.com/c/JonGjengset

          • akkad337 days ago
            What do you think about Rust for Rustaceans? I read it, and there is very niche and useful information in it about Rust that I didn't see anywhere else. It's a solid book, but for a book about programming there are so few real code examples that it can come off as dry. I just bought Rust Atomics and Locks and it seems exercise-based, so I'm excited to finish it. The first chapter seems promising.
            • timeon7 days ago
              As title implies, Rust for Rustaceans is not for those that are just starting with the language.
              • akkad335 days ago
                My gripe is not that it isn't beginner-friendly, but that it doesn't have many code examples for a programming book. One doesn't preclude the other, IMO.
            • kshri247 days ago
              You are right about it not being a beginner-friendly book, which is why I placed it lower in the order of books to study.

              Yeah, Rust Atomics and Locks is essential if you truly want to understand low-level concurrency. But you might also have to refer to the C++ std::atomic reference [1] to get a complete picture. It took me a while to grasp those concepts.

              [1]: https://en.cppreference.com/w/cpp/atomic/atomic

          • crablearner7 days ago
            I have a hard copy of Programming Rust by Jim Blandy et al. Would that slot in nicely anywhere here?
            • thesuperbigfrog7 days ago
              "Programming Rust" by Jim Blandy et al was the book that really helped me to understand why many of the design decisions behind the implementation of Rust were made.

              I found it more approachable than some of the other Rust books and highly recommend it as a first Rust book.

            • abenga6 days ago
              Programming Rust is the best beginner Rust programming book in my opinion, followed by the official book. It has more detail and better examples.
            • kshri247 days ago
              Unfortunately, I haven't read Programming Rust. The list includes just the books I used to learn Rust. But will definitely give Blandy's book a read. Thanks for the recommendation!
        • smodo7 days ago
          The Rust Programming Language does a great job imho. It got me up to speed by reading it before bed for a month. I’d never written C/C++ before, just a lot of Python. It starts out really simply by explaining the type system and the borrow checker. Take it from there and do a couple of side projects, I’d say.
        • airstrike7 days ago
          Write an `iced` app, is my suggestion. You'll learn some of the best of what Rust has to offer
      • codr77 days ago
        C/C++ are two very different languages.

        Zig seems to follow the C tradition, and Rust C++.

        • akkad337 days ago
          Why do people say Rust follows the tradition of C++? Rust makes very different design decisions than C++: a different approach to backwards compatibility, it doesn't tack one feature on top of another, it is memory safe, etc. If you are just comparing the size of the language, there are other complex languages out there, like D, Ada, etc.
          • worik7 days ago
            > Why do people say Rust follows the tradition of C++?

            They mean the domain that Rust is in.

            Before Rust there was only C or C++ for real-time programming. C++ was an experiment (wildly successful, IMO, when I left it in 2001) trying to address the shortcomings of C. It turned out that too much of everything was in C++: long compile times, a manual several inches thick, huge executables. Some experiments turned out not to be a good idea (exceptions, multiple inheritance, inheritance from concrete classes...).

            Rust is a successor in that sense. It draws on the lessons of C++ and functional programming.

            I hope I live long enough to see the next language in this sequence, one that learns from the mistakes of Rust (there are a few, and it will take some more years to find them all).

            • johnisgood6 days ago
              Some of C++'s warts are still present in Rust, though, such as long compile times. Additionally, it encourages using a lot of dependencies, just like npm does.

              Anyways, I dislike C++, it is too bloated and I would rather just use C.

            • pjmlp6 days ago
              It was no experiment at all; it was Bjarne Stroustrup's way to never ever repeat his downgrade experience from Simula to BCPL after he started working at Bell Labs and was originally going to have to write a distributed systems infrastructure in C.

              Also there have been alternatives to C and C++, even if they tend to be ignored by most folks.

              • worik6 days ago
                Bjarne Stroustrup describes it as experimental. At least he used to, back when I cared a lot.
                • pjmlp5 days ago
                  I am quite sure that isn't the story as described in either "Design and Evolution of C++" or the "C++ ARM", as an owner of those books.
          • flohofwoe6 days ago
            The one big (and IMHO most problematic) thing that Rust and C++ have in common is the desire to implement important core features via the stdlib instead of new language syntax. Also, both C++ and Rust use RAII for 'garbage collection', and the 'zero-cost abstraction' promise is the same, with the same downsides (low debug-mode runtime performance and high release-mode build times).
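
            For instance, cleanup in both languages hangs off scope exit rather than a collector; a minimal Rust sketch of that RAII style (the type here is invented):

                struct TempDir {
                    path: std::path::PathBuf,
                }

                impl Drop for TempDir {
                    // Runs deterministically when the value goes out of scope,
                    // not whenever a collector gets around to it.
                    fn drop(&mut self) {
                        let _ = std::fs::remove_dir_all(&self.path);
                    }
                }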
            • steveklabnik6 days ago
              While I don’t disagree that there’s a similar desire regarding libraries vs syntax, Rust is also more willing to make things first class language features if there’s a benefit. Enums vs std::variant, for example.
              • codr74 days ago
                And it's a balancing act; both approaches to language design have merit.

                That being said, I can't work with std::variant, and God knows I tried to like it. Rust's enums look a lot nicer by comparison; I haven't had enough experience to run into the potential rough edges, which I'm sure are there.

          • majoe6 days ago
            For me the defining feature of C++ is its move semantics. It permeates every corner of your C++ code and affects every decision you make as a C++ developer.

            Rust's defining feature is its borrow checker, which solves a similar problem as move semantics, but is more powerful and has saner defaults.
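
            A tiny sketch of the "saner defaults" point: moving is the default in Rust, and using a moved-from value is a compile error, whereas C++ leaves the moved-from object behind in a valid-but-unspecified state.

                fn consume(s: String) {
                    println!("{s}");
                }

                fn main() {
                    let name = String::from("rust");
                    consume(name);          // `name` is moved here; no std::move needed
                    // println!("{name}");  // uncommenting this is a compile-time "use after move" error
                }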

      • KerrAvon7 days ago
        Zig is not yet stable enough to base a long-term project around, unless something's changed very recently.

        If you really only want a better C/C++, use C++ and amp up your use of safer types (or consider D).

        • flohofwoe6 days ago
          Zig doesn't promise language or stdlib stability yet, but in reality the changes are quite manageable. And it's already good enough for some high-profile real-world projects like Bun (https://bun.sh/), Tigerbeetle (https://tigerbeetle.com/) or Ghostty (https://ghostty.org/).

          In the end, language stability isn't as important as it used to be; people are quite used to fixing their code when upgrading dependencies to a new major version, for instance.

          • pjmlp6 days ago
            It remains to be seen if any of those projects will be around in a couple of years.

            I haven't yet seen anything that would force me to consider Zig, regardless of my personal opinion, the way other languages have grown to become unavoidable.

    • BrouteMinou7 days ago
      You become used to reading this. Typing it is such a pain, though; I mean real pain, like muscle pain.

      I developed some muscles I didn't know I had.

    • thrance7 days ago
      Wait til you learn about APL...

      Seriously though, I immediately parse it as "generic bounds containing the erased lifetime, close parenthesis". It's not a big deal.

      And of all the criticisms one might have of Rust (or any other programming language), "too many symbols" seems like a weak one.

      • johnisgood5 days ago
        Well, if someone does not like Common Lisp due to the parentheses, is it a weak _preference_?
    • hardwaregeek7 days ago
      Rust's syntax isn't gonna win any awards, but it looks sufficiently like C++ to hide that Rust is essentially an ML variant with linear types.
    • LtdJorge7 days ago
      It was designed to be used with syntax highlighting and LSPs. The highlighting makes it pretty easy to read for me. Although there are some arcane generics with lifetimes that can be indecipherable in some libraries.
    • ramon1566 days ago
      It's not required. There's a high chance you can avoid writing explicit lifetimes; they're just another tool for you to use.
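
      For example, the elision rules cover the common cases, so something like this needs no lifetime annotations at all (a minimal sketch):

          // The compiler infers that the returned &str borrows from `input`.
          fn first_word(input: &str) -> &str {
              input.split_whitespace().next().unwrap_or("")
          }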
  • thurn7 days ago
    Kind of surprised that this book could be published by O'Reilly and also freely available online? Seems unusually generous.
    • darthrupert7 days ago
      Possibly a sign of confidence. After browsing this for a few minutes, I'm very convinced of its quality and will probably buy it.

      Wouldn't have happened with a book with just sample pages.

      • jjallen7 days ago
        Why buy it if it’s completely free which is implied by your post?
        • darthrupert7 days ago
          Because I have written a book and thus know how much work it is to write even a mediocre one.

          Also as a way to increase my motivation to read it.

          Plus I have money. This book costs about as much as a good bottle of wine or a bad bottle of whiskey.

          • dcminter7 days ago
            > Plus I have money. This book costs about as much as a good bottle of wine or a bad bottle of whiskey.

            Exactly.

            A few years ago I did a really aggressive weeding out of my bookshelves as things were getting far too cluttered. In the process I threw out what must have been - at cover price - several thousand pounds worth of IT related books.

            On the resale market they were all too stale to have any value (though I did manage to give a handful away to friends). In one way it was a bit painful, but those few thousand pounds' worth of books have given me a huge (financial) return on that investment!

            Cheap at the cost of a good bottle of wine ... for the foundations of a career!

          • tialaramex6 days ago
            > a good bottle of wine or a bad bottle of whiskey.

            I don't enjoy either but I have friends who decided to specialise and so I'm confident that you can easily reverse this split if you have decided you care more about one or the other.

        • kshri247 days ago
          To support the author. And as a way of saying thank you.
        • WD-427 days ago
          The last 2 books I've bought (OSTEP and nand2tetris) are available online. Hard copies are nice, and personally, seeing them on my desk gives me more motivation to finish them.
        • dcminter7 days ago
          Because we all know what happens if we're not the customer.

          I have this; I bought it because I want to reward the author for producing a quality work, and because I want to encourage the publishers to produce other works that would appeal to me.

          I also happen to like physical texts, so I bought the paperback, but I have both it and the digital edition. The latter is convenient for when I am travelling and is appropriately formatted for an eReader (not just the raw HTML from these pages).

        • vaylian7 days ago
          Because the people want to show appreciation for the good work the author has done?
        • codr77 days ago
          True for digital copies; I've never yet bought one of those.

          I have no trouble paying for physical books though.

        • smodo7 days ago
          The book isn't free; its contents are published online by the author. Yes, nitpicking. But (1) I like a well-formatted epub and (2) the author/publisher still hold the copyright.
        • akkad337 days ago
          I want to read on Kindle or own the book.