Effective Rust - https://news.ycombinator.com/item?id=38241974 - Nov 2023 (10 comments)
Effective Rust (2021) - https://news.ycombinator.com/item?id=36338529 - June 2023 (204 comments)
Edit: I've put 2024 in the title above because that's what the page currently says. But what's the most accurate year for this material?
- You can't export a reference to the thing you are dropping. You can do that in C++. This prevents "re-animation", where something destroyed comes back to life or is accessed beyond death. Microsoft Managed C++ (early 2000s) supported re-animation and gave it workable semantics. Bad idea, now dead.
- This is part of why Rust destructors cannot run more than once. Less than once is possible, as mentioned above.
- There's an obscure situation with Arc and destructors. When an Arc counts down to 0, the destructor is run. Exactly once. However, Arc countdown and destructor running are not one atomic operation. It is possible for two threads to see an Arc in a strong_count == 1 state just before the Arc counts down. Never check strong_count to see if you are "the last owner". That creates a race condition (see the sketch after this list).[1] I've seen that twice now. I found race conditions that took a day of running to hit. Use strong_count only for debug printing.
- A pattern that comes up in GUI libraries and game programming involves objects that are both in some kind of index and owned by Arcs. On drop, the object should be removed from the index. This is a touchy operation. The index should use weak refs, and you have to be prepared to get an un-upgradable Weak from the index.
- Even worse is the case where dropping an object starts a deletion of something else. If the second deletion can't be completed from within the destructor, perhaps because it requires a network transaction, it's very easy to introduce race conditions.
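A minimal sketch of the strong_count race and one sound alternative; Resource and release are made-up names for illustration:

use std::sync::Arc;
use std::thread;

struct Resource; // stand-in for whatever the Arc owns

fn release(handle: Arc<Resource>) {
    // Racy: two threads can both observe strong_count == 1 just before each
    // drops its own handle, so both may conclude they are "the last owner".
    // if Arc::strong_count(&handle) == 1 { /* final cleanup */ }

    // Sound: try_unwrap only succeeds when it can atomically take sole
    // ownership, so at most one caller ever reaches this branch.
    if let Ok(resource) = Arc::try_unwrap(handle) {
        drop(resource); // provably the last owner; otherwise the normal Drop on the last Arc handles it
    }
}

fn main() {
    let a = Arc::new(Resource);
    let b = Arc::clone(&a);
    let t = thread::spawn(move || release(b));
    release(a);
    t.join().unwrap();
}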
> - This is part of why Rust destructors cannot run more than once. ...
This is a very backwards way to describe this, I think. Managed C++ only supported re-animation for garbage collected objects, where it is still today a fairly normal thing for a language to support. This is why these "destructors" typically go by a different name, "finalizers." Some languages allow finalizers to run more than once, even concurrently, but this is again due to their GC design and not a natural thing to expect of a "destructor."
The design of Drop and unmanaged C++ destructors is that they are (by default) deterministically executed before the object is deallocated. Often this deallocation is not by `delete` or `free`, which could perhaps in principle be cancelled, but by a function return popping a stack frame, or some larger object being freed, which it simply does not make sense to cancel.
This made me think of the `im` library[0] which provides some immutable/copy on write collections. The docs make it seem like they do some optimizations when they determine there is only one owner:
> Most crucially, if you never clone the data structure, the data inside it is also never cloned, and in this case it acts just like a mutable data structure, with minimal performance differences (but still non-zero, as we still have to check for shared nodes).
I hope this isn't prone to a similar race condition!
> In the past mem::forget was marked as unsafe as a sort of lint against using it, since failing to call a destructor is generally not a well-behaved thing to do (though useful for some special unsafe code). However this was generally determined to be an untenable stance to take: there are many ways to fail to call a destructor in safe code. The most famous example is creating a cycle of reference-counted pointers using interior mutability.
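For readers who haven't seen it, a minimal sketch of that Rc cycle leaking in entirely safe code (Node is a made-up type):

use std::cell::RefCell;
use std::rc::Rc;

struct Node {
    next: RefCell<Option<Rc<Node>>>,
}

fn main() {
    let a = Rc::new(Node { next: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(Some(Rc::clone(&a))) });
    // Close the cycle: a -> b -> a. When main's handles are dropped, each
    // count falls to 1 and never reaches 0, so neither value is ever freed.
    *a.next.borrow_mut() = Some(Rc::clone(&b));
}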
Rust initially advertised itself as preventing leaks, which makes sense as it is supposed to have the power of automatic memory management but without the runtime overhead.
Unfortunately, shortly before Rust's release it was discovered that there were some APIs that could cause memory corruption in the presence of memory leaks. The decision was made that memory leaks would be too complicated to fix before 1.0: the release would have had to be delayed. So the API in question was taken out, and Rust people quietly memory-holed the idea that leak freedom had ever been considered part of memory safety.
If "is leaking memory safe?" is an issue of contention for you, I'd suggest that it's a good idea to do some reading on what memory safety is (I mean that in all sincerity, not as a dunk). Memory safety, at least by the specific and highly useful definition used by compiler developers, is intimately entangled with undefined behaviour, but memory leaking sits entirely outside this sphere. This is as true in C and C++ as it is in Rust.
It's not as if Rust invented the term "memory safety" or gets to define it.
Memory leaks are situations where memory is unrecovered despite there being no path to it from any active thread.
Talking about leaks this way is absolutely normal. Take https://stackoverflow.com/questions/6470651/how-can-i-create... for tons of examples.
static std::weak_ptr<std::array<uint64_t, 125000000>> weak;
auto strong = std::make_shared<std::array<uint64_t, 125000000>>();
weak = strong;  // the 1 GiB block allocated by make_shared stays resident until `weak` is destroyed, even after `strong` is gone
That retains 1GiB of memory allocated without any ownership path due to implementation details of std::shared_ptr. Is that a memory leak? There’s no active thread that has a path and yet all of the memory is tracked - if you destroy the weak_ptr, the 1GiB of memory gets reclaimed.

[1] https://en.m.wikipedia.org/wiki/Garbage_collection_(computer...
[2] https://stackoverflow.com/questions/4987357/can-there-be-mem...
No, reference counting is not garbage collection. I am fully aware of the ridiculous claim that it is, promoted by people like you. I fundamentally disagree. It has none of the same properties and doesn't work anything like GC.
It’s not a “ridiculous claim”, but maybe you think cycle collectors don’t count?
There isn't much more you can do here because you are completely wrong. Instead of facing reality (that Rust, useful as it may be, only prevents a narrow class of correctness issues of varying importance) you double down on its marketing spin that all the things it fixes just happen to be all the important safety-related ones.
Just step back and actually think. I implore you.
I admit a better example is race conditions.
Crashes, stability, and performance issues are still not safety issues since there’s so many ways to cause those beyond memory leaks. I don’t know the discussion that was ongoing in the community but I definitely appreciate them taking a pragmatic approach and cutting scope and going for something achievable.
>Crashes, stability, and performance issues are still not safety issues since there’s so many ways to cause those beyond memory leaks.
They aren't safety issues according to Rust's definition, but Rust's definition of "unsafe" is basically just "whatever Rust prevents". But that is just begging the question: they don't stop being serious safety issues just because Rust can't prevent them.
If Rust said it dealt with most safety issues, or the most serious safety issues, or similar, that would be fine. Instead the situation is that they define data races as unsafe (because Rust prevents data races) but race conditions as safe (because Rust does not prevent them in general) even though obviously race conditions are a serious safety issue.
For example you cannot get memory leaks in a language without mutation, and therefore without cyclic data structures. And in fact Rust has no cyclic data structures naturally, as far as I am aware: all cyclic data structures require some "unsafe" somewhere, even if it is inside RefCell/Rc in most cases. So truly safe Rust (Rust without any unsafe at all) is leak-free, I think?
It's not that circular.
Rust defines data races as unsafe because they can lead to reads that produce corrupt values, outside the set of possibilities defined by their type. It defines memory leaks as safe because they cannot lead to this situation.
That is the yardstick for what makes something safe or unsafe. It is the same yardstick used by other memory-safe languages- for instance, despite your claims to the contrary, garbage collectors do not and cannot guarantee a total lack of garbage. They have a lot of slack to let the garbage build up and then collect it all at once, or in some situations never collect it at all.
There are plenty of undesirable behaviors that fall outside of this definition of unsafety. Memory leaks are simply one example.
They can't now. They could up to and almost including 1.0. At that point the consensus was that memory leaks were unsafe and so unsafe code could rely on them not happening. That code was not incorrect! It just had assumptions that were false. One solution was to make those assumptions true by outlawing memory leaks. The original memory leak hack to trigger memory corruption was fairly fiendish in combination with scoped threads (IIRC).
>There are plenty of undesirable behaviors that fall outside of this definition of unsafety. Memory leaks are simply one example.
That is my whole point. It is a useless definition cherry-picked by Rust because it is what Rust, in theory, prevents. It does not precede Rust. Rust precedes it.
>It is the same yardstick used by other memory-safe languages- for instance, despite your claims to the contrary, garbage collectors do not and cannot guarantee a total lack of garbage. They have a lot of slack to let the garbage build up and then collect it all at once, or in some situations never collect it at all.
If it will eventually be collected then it isn't a memory leak.
Most actual safe languages don't let you write integer overflow.
This is not how it worked, no. It was never memory leaks per se that led to unsoundness there. It was skipping destructors. You could have the exact same unsoundness if you freed the object without running the rest of its destructor first.
That part was the design choice Rust made- make destructors optional and change the scoped threads API, or make destructors required and keep the scoped threads API.
There is an underlying definition of memory safety (or more generally "soundness") that precedes Rust. It is of course defined in terms of a language's "abstract machine," but that doesn't mean Rust has complete freedom to declare any behavior as safe. Memory safety is a particular type of consistency within that abstract machine.
This is why the exact set of undesirable-but-safe operations varies between memory-safe languages. Data races are unsafe in Rust, but they are safe in Java, because Java's abstract machine is defined in such a way that data races cannot lead to values that don't match their types.
The problem is you’re creating a hypothetical gold standard that doesn’t exist (indeed I believe it can’t exist) and then judging Rust on that faux standard and complaining that Rust chooses a different standard. That’s the thing though - every language can define whatever metrics they want and languages like C/C++ struggle to define any metrics that they win vs Rust.
> For example you cannot get memory leaks in a language without mutation, and therefore without cyclic data structures
This does not follow. Without any mutation of any kind, you can’t even allocate memory in the first place (how do you think a memory allocator works?). And you can totally get memory leaks without mutation however you narrowly define it because nothing prevents you from having a long-lived reference that you don’t release as soon as possible. That’s why memory leaks are still a thing in Java because there’s technically a live reference to the memory. No cycles or mutations needed.
> So truly safe Rust (Rust without any unsafe at all) is leakfree, I think?
Again, Box::leak is 100% safe and requires no unsafe at all. Same with std::mem::forget. But even if you exclude APIs like that that intentionally just forget about the value, again nothing stops you from retaining a reference forever in some global to keep it alive.
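For concreteness, a minimal sketch of both; no unsafe appears in this code, whatever the standard library does underneath:

fn main() {
    // Box::leak hands back a &'static mut and the allocation is never freed.
    let leaked: &'static mut Vec<u8> = Box::leak(Box::new(vec![0u8; 1024]));
    leaked.push(1);

    // mem::forget skips the destructor entirely, so this buffer is never freed either.
    std::mem::forget(vec![0u8; 1024]);
}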
I am not creating a gold standard because as far as I am concerned, it is all just correctness. There aren't morally more and less important correctness properties for general programs: different properties matter more or less for different programs.
>Without any mutation of any kind, you can’t even allocate memory in the first place (how do you think a memory allocator works?).
data L t = E | C t (L t)   -- a cons list
data N = Z | S N           -- Peano naturals

nums :: N -> L N           -- builds a fresh list purely, no mutation involved
nums Z     = E
nums (S n) = C (S n) (nums n)
You cannot express a reference cycle in a pure functional language but they still have allocation.

However I don't know why I brought this up, because you can also eliminate all memory leaks by just using garbage collection - you don't need to have immutable and acyclic data structures.
>Again, Box::leak is 100% safe and requires no unsafe at all. Same with std::mem::forget.
#[inline]
pub fn leak(b: Box<T>) -> &'static mut T {
    unsafe { &mut *Box::into_raw(b) }
}
They are implemented using unsafe. There is no way to implement Box without unsafe.

If you retain a reference in a global then it is NOT a memory leak! The variable is still accessible from the program. You can't just forget about the value: its name is right there, accessible. That is not a memory leak, except by complete abuse of terminology. The concept of "inaccessible and uncollectable memory, which cannot be used or reclaimed" is a useful one. Your definition of a memory leak seems to be... any memory usage at all?
And while we’re at it, please explain to me how this hypothetical language that allocates on the heap without mutable state exists without under the hood calling out to the real mutable allocator somewhere.
> If you retain a reference in a global then it is NOT a memory leak!
> Your definition of a memory leak seems to be... any memory usage at all?
It’s just that you’re choosing to define it as not a memory leak. Another definition of memory leak might be “memory that is retained longer than it needs to be to accomplish the intended goal”. That’s because users are indifferent to whether the user code is retaining the reference and forgetting about it or the user code lost the reference and the language did too.
So from that perspective tracing GC systems even regularly leak memory and then go on a hunt trying to reclaim it when they’ve leaked too much.
More importantly as has been pointed out numerous times to you, memory safety is a technical term of art in the field (unlike memory leaks) that specifically is defined as the issues safe Rust prevents and memory leaks very clearly do not fall under that very specific definition.
You have missed the point. I said you can't leak memory in safe Rust. That is true. Box::leak isn't safe Rust: it uses the unsafe keyword. This is half the problem with the stupid keyword: it confuses people. I am saying that it requires the trustme keyword and you are saying it isn't inherently incorrect. Rust uses "unsafe" to mean both. But in context it is quite clear what I meant when talking about Box::leak, which you falsely claimed could be written in safe Rust.
>And while we’re at it, please explain to me how this hypothetical language that allocates on the heap without mutable state exists without under the hood calling out to the real mutable allocator somewhere.
What does the implementation have to do with anything? We are talking about languages not implementations. This isn't a difficult concept.
>It’s just that you’re choosing to define it as not a memory leak. Another definition of memory leak might be “memory that is retained longer than it needs to be to accomplish the intended goal”.
That isn't the definition. I am using the only definition of the term that any serious person has ever used.
>That’s because users are indifferent to whether the user code is retaining the reference and forgetting about it or the user code lost the reference and the language did too.
Users are completely irrelevant. It is logically impossible to ever prevent "leaks" that are just the storage of information. That isn't a leak, it is the intentional storage of information by the programmer. So it is a completely useless concept if that is what you want to use. It might be a useful concept in application user experience design or something but we are talking about programming languages.
On the other hand, "memory leaks" is a very useful concept if you use the actual definition because it is almost difficult to even conceive of a memory management strategy that isn't concerned with preventing memory leaks (proper). The "short lived program; free nothing" strategy is the only one I can think of, a degenerate case.
>More importantly as has been pointed out numerous times to you, memory safety is a technical term of art in the field (unlike memory leaks) that specifically is defined as the issues safe Rust prevents
No, it isn't! That is the definition that Rust people choose to use, which nobody used before 2015ish and is only widely used because Rust captured mindshare. It isn't some definition that predated Rust and which Rust magically fell right into.
Go back and look at mailing list threads, forum posts, papers, anything before Rust tried to steal the term "safety". It referred (and properly still refers) to programs. When people complained about manual memory management, the big complaint was that big C++ GUI programs (in particular) leaked memory like sieves. Nobody was particularly concerned about data races except the people implementing concurrency primitives in standard libraries etc. C++ didn't even have a defined memory model or standard atomics. Everyone was relying on x86's strong memory model in code all over the place. The big concern was avoiding manual memory management, memory leaks, and data corruption.
"Safe" didn't mean "has no data races but might have race conditions, has no use after free but might have memory leaks, and might have overflow bugs and SQL injections and improper HTML sanitisation". That would be a truly stupid definition. It meant "correct". The fanatical Rust community came along and tried to redefine "safe" to mean "the things we prevent". Rust's definition makes sense for Rust but it is Rust-specific because it is downstream of what Rust is capable of enforcing. Nobody would a priori come up with the particular subset of correctness properties that Rust happens to enforce and call them "safety". It is transparently a posteriori.
<'_>)
is a very simple one, but there are ones with ~7 consecutive symbols, and there are a lot of symbols all over Rust code.

How come it is in demand?
Cool book though.
The underscore could've been a name if the name mattered, which would be required in many languages. Rewriting it to <'something>) may help readability (but risks introducing bugs later by reusing `something`).
Many C-derived languages are full of symbol soup. A group like <?,?>[]) can happen all over Java, for instance. Many of these languages have mixes of * and & all over the place, C++ has . and -> for some reason, making for some pretty unreadable soup. The biggest additions I think Rust added to the mix was ' for lifetimes (a concept missing from most languages, unfortunately), ! for a macro call (macro invocations in many other languages aren't marked at all, leaving the dev to figure out if println is a method or a macro), and ? to bubble up errors. The last one could've been a keyword (like try in Zig) but I'm not sure if it makes the code much more readable that way.
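A tiny sketch putting those three additions in one place ('_ for an elided lifetime, ! marking a macro invocation, ? bubbling an error up):

// '_ names an elided lifetime; with one borrowed input the compiler fills it in.
fn first_word(s: &str) -> Option<&'_ str> {
    s.split_whitespace().next()
}

fn main() -> Result<(), std::num::ParseIntError> {
    let n: i32 = "42".parse()?;                       // ? bubbles the error up to main
    println!("{n} {:?}", first_word("hello world"));  // println! is a macro call
    Ok(())
}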
If you know other programming languages, the symbols themselves fall into place quite quickly. I know what <'_> does in Rust for the same reason I know what <T, R> T does in Java, while a beginner or someone who hasn't learned past Java 6 may struggle to read the code. Out of all the hurdles a beginning Rust programmer will face, the symbols are probably your least concern.
As for books, the Rust book on the Rust website is kept up to date pretty well. There are books for programmers coming from various other languages as well.
The language itself hasn't changed much these past few years. The standard library gets extended with new features, but a book a few years old will teach you Rust just fine.
In many cases, changes to the language have been things like "the compiler no longer treats this as broken (because it isn't)" and "the compiler no longer requires you to write out this long definition because it can figure that stuff out itself". I'd recommend running a tool called "clippy" in your IDE or on the command line: if you can leverage a modern language feature for better legibility, clippy will usually suggest it.
Can you do a lot better? I don't think so and it wouldn't help that much.
The truth is that most of the time we want to rely on some inferred lifetime annotations, but will obviously need an escape hatch from time to time.
Rust doesn't waste a lot of typing around the annotations, but if you were to improve Rust, you'd improve the implicit inference, not the syntax for being explicit.
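A minimal sketch of that split, with elision doing the work in the common case and an explicit annotation as the escape hatch:

// Elision covers the common case: one borrowed input, so the output borrows from it.
fn trimmed(s: &str) -> &str {
    s.trim()
}

// With two borrowed inputs the elision rules can't guess, so we say explicitly
// which inputs the output may borrow from.
fn longer<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() >= y.len() { x } else { y }
}

fn main() {
    println!("{}", trimmed("  hi  "));
    println!("{}", longer("short", "longer"));
}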
I think Rust could do a lot better at inferring lifetimes if the compiler were allowed to peek into called functions instead of stopping at the function signature - e.g. if it had a complete picture of the control flow of the entire code base (maybe even to the point that manual lifetime annotations could be completely eliminated?).
IMHO it's not unrealistic to treat the entire codebase as a single compilation unit, Zig does this for instance - it just doesn't do much so far with the additional information that could be gained.
Rust has similar rules about type inference (of which lifetimes are a subset) at the function level as well. I think this was a lesson learned the hard way by Haskell, which does allow whole-program type inference: programmers working in it quickly learned you really want to specify the types at the function level anyway.
Hmm, but wouldn't that already be the case, since the manual lifetime annotation must match what the function actually does? E.g. I would expect compiler errors if the 'internal' lifetime details of a function no longer match its manual lifetime annotations (is it actually possible to create incorrect lifetime annotations in Rust without the compiler noticing?).
Higher compile times would be bad of course, but I wonder how much it would add in practice. It's a similar problem as LTO, just earlier in the compile process. E.g. maybe some time consuming tasks can be moved around instead of added on top.
In safe Rust, no.
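A small sketch of the point: the annotated signature is a contract, and the compiler rejects any body that doesn't uphold it.

// The signature promises the result borrows only from `x`.
fn pick<'a>(x: &'a str, _y: &str) -> &'a str {
    x // returning `_y` here instead would be rejected: the body would no
      // longer match the annotated signature.
}

fn main() {
    println!("{}", pick("left", "right"));
}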
Full inference is one of those things that seems like a no brainer, but there are a number of other more subtle tradeoffs that make it a not great idea. Speed was already mentioned, but it’s really downstream from tractability, IMHO. That is, lifetime checking is effectively instantaneous today, and that’s because you only need to confirm that the body matches the signature, which is a very small and local problem. Once you allow inference, you end up needing to check not just the body, but also the bodies of every function called in your body, recursively, since you no longer know their signatures up front. We tend to think of compiler passes as “speed” in the sense of it’s nice to have fast compile times, but it also matters in the sense of what can practically be checked in a reasonable time. The cheaper a check, the more checks we can do. Furthermore, remember that Rust supports separate compilation, which is a major hindrance to full program analysis, which is what you need to truly infer lifetimes.
Beyond complexity arguments, there’s also more practical ones: error messages would get way worse. More valid programs would be rejected if the inference can’t figure out an answer. Semver is harder to maintain, because a change in the body now changes the signature, and you may break your callers in ways you don’t realize at first.
Another thing I’ll point out is that TypeScript does full program inference and while type checking performance is a huge problem, it does a pretty good job. That obviously doesn’t necessarily map to Rust and the problem domain it’s solving (& maybe TS codebases naturally are smaller than Rust) but just putting that out there. Rust has made certain opinionated choices but that doesn’t mean that other choices weren’t equally valid and available. SemVer is easily solvable - don’t allow inference for pub APIs exported from the crate which also neatly largely solves the locality issue.
> I’ll point out is that TypeScript does full program inference
Do you have a citation for this? I don't believe this is the case, though I could be wrong. I actually spent some time trying to find a definitive answer here and couldn't. That said,
> Rust has made certain opinionated choices but that doesn’t mean that other choices weren’t equally valid and available.
This is true for sure; for example, TypeScript is deliberately unsound, and that's a great choice for it, but does not make sense for Rust.
> SemVer is easily solvable - don’t allow inference for pub APIs exported from the crate which also neatly largely solves the locality issue.
It helps with locality but doesn't solve it, as it's still a non-local analysis. The same problems fundamentally remain, even if the scope is a bit reduced.
> Do you have a citation for this? I don't believe this is the case, though I could be wrong. I actually spent some time trying to find a definitive answer here and couldn't. That said,
No, and thinking about it more I'm not sure about the specific requirements that constitute full program inference, so it's possible it's not. However, I do know that it infers the return type signatures of functions from the bodies.
> This is true for sure; for example, TypeScript is deliberately unsound, and that's a great choice for it, but does not make sense for Rust
Sure but I think we can agree that the deliberately unsound is for ergonomic and pragmatic compatibility with JS, not because of the choice of inference.
I'm not arguing Rust should change its inference strategy. Of all the things, I'd rate this quite low on my wishlist of "what would I change about how Rust works if I could wave a magic wand".
See that the definition of Group is tying those together. Instead, you can split them apart and maybe use HRTB to ensure the closure _must_ be able to treat the lifetime as fresh? But then you’ll probably have other issues…
… which can largely be circumvented simply by pinning, in your reduced example, which probably doesn’t retain enough detail.
https://play.rust-lang.org/?version=stable&mode=debug&editio...
My suggestion to avoid the pain? Use ArcStr.
The key is to isolate the unsafe code and test it directly, so you're not really doing it with whole programs. At least that's what I try to do. Anyway, was just curious!
(I don't have anything to say about the specific code here that cmr didn't already say)
> Sure but I think we can agree that the deliberately unsound is for ergonomic and pragmatic compatibility with JS,
Oh absolutely, all I meant was that because they're starting from different goals, they can make different choices.
There is something about how the brain is wired such that using ' for lifetimes just triggers the wrong immediate response.
Something like this would look so much nicer IMHO [$_], compared to this <'_>.
I never get this take. Array indexing is done with []. This would just confuse the hell out of me.
C is not full of symbol soup though.
It is more full of symbol soup than Pascal or Modula 2, and back in the day when C was taking over other such languages, there were lots of complaints about C's syntax being like "line noise" and whatnot.
Rust takes it to a whole new level though.
Math is also symbol soup. But those symbols mean things and they’ve usually been designed to compose nicely. Mathematicians using symbols—just like writers using alphabets—are able to use those symbols to concisely and precisely convey complicated concepts to one another.
I guess my point is that symbols shouldn’t be looked at as inherently a positive or negative thing. Are they clear and unambiguous in their use? Do they map clearly onto coherent concepts? When you need to compose them, is it straightforward and obvious to do so?
I just don't understand why one would take maths of all things as a positive example of something readable, when it's widely known to be utterly inscrutable to most humans on earth, and many papers have differing conventions, using the same symbol for sometimes widely different or sometimes barely different things.
Like APL, it has a set of well-chosen symbols, but each symbol has an english name you can type just as you would a function name in another language, and it's automatically converted to the symbol when you run it.
I have always used the "international" version of the US English keyboard on Linux.
And I can enter all common symbols by pressing AltGr or AltGr-Shift. I also use right Ctrl as a compose key for more. I would be hard pressed to remember what combo to press; after years it's just muscle memory.
But how do you find out what layout and what compose key does what? Good luck. It's as documented as gestures and hidden menus on iOS and macOS. Sigh.
Perhaps HRTBs and Fn traits, or double turbofish generics. I really cannot remember sadly.
Anyways, I will try to look for the code, it is somewhere in my comment history but I have left way too many comments, so no promises.
foo(bar()?)?

over something like

a, err := bar()
if err != nil {
    return nil, err
}
b, err := foo()
if err != nil {
    return nil, err
}

But also even better is just

let a = bar()?;
let b = foo()?;
Edit: actually, it was someone else who said this: "Human brain has a funny way of learning how to turn off the noise and focus on what really matters.".
I have fixed (and frankly, caused) many bugs in golang code where people’s brains “turned off the noise” and filtered out the copypasta’d error handling, which overwrote the wrong variable name or didn’t actually bubble up the error or actually had subtly wrong logic in the conditional part which was obscured by the noise.
Frankly, learning to ignore that 80% of your code is redundant noise feels to me like a symptom of Stockholm syndrome more than anything else.
One symbol to replace three lines of identical boilerplate is no less explicit and dramatically clearer.
fn foo() -> Result<(),FooError>
bar()?
fn bar() -> Result<(),BarError>
If FooError can be created from BarError, the compiler will insert the conversion call and errors bubbles up nicely. nil, err
without the return and it would happily compile. It’s also tragically easy for actual logic bugs to be obscured by all the boilerplate.It’s not like three lines of error handling copypasta is some optimal amount. If golang required ten lines of boilerplate error handling, you’d still have just as many people arguing in favor of it because they “like it to be explicit” when it reality it’s verbose and the real underlying argument is that it’s what they’ve grown accustomed to. `?` is no less explicit, but it is less unnecessarily verbose.
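A self-contained sketch of that conversion, with FooError and BarError as placeholders:

#[derive(Debug)]
struct BarError;

#[derive(Debug)]
struct FooError;

impl From<BarError> for FooError {
    fn from(_: BarError) -> Self { FooError }
}

fn bar() -> Result<(), BarError> {
    Err(BarError)
}

fn foo() -> Result<(), FooError> {
    bar()?; // on Err, this is roughly `return Err(FooError::from(e))`
    Ok(())
}

fn main() {
    println!("{:?}", foo()); // Err(FooError)
}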
It's a curly-brace language with some solid decisions (e.g. default immutability) that produces static binaries and without a need for a virtual machine, while making some guarantees that eliminate a swathe of possible bug types at compile time.
As others note, the symbol soup is something you learn to read fluently and isn't worth getting hung up on.
Basically it occupies something of a sweet spot in the power/usability/safety space and got a decent PR shove by coming out of Mozilla back when they were the cool kids. I like it a lot. YMMV.
Most people will conk out if you start talking about how your language has "algebraic data types." But if you rephrase that as "we let you put payloads in your enum," well, that piques people's interest. It certainly worked on me.
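A tiny sketch of "payloads in your enum", with made-up Shape variants:

enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

fn area(s: &Shape) -> f64 {
    match s {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { w, h } => w * h,
    }
}

fn main() {
    println!("{}", area(&Shape::Rect { w: 2.0, h: 3.0 }));
}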
What does this mean? Not Python?
They often share other syntax similarities but not one particular common set across all of them.
> How come it is in demand?
Because there's a lot more to the language than just those not-really-unfamiliar symbols
Because it's a complicated language for building extremely low level things, when you have no other choice. IMO it's not the right tool for high level stuff (even though it does have some stuff which higher level languages should probably borrow).
The only other language that directly competes with Rust IMO is C++, which is equally full of symbol soup.
I thought that for a long time. But the more time I spend in languages like TypeScript (Semi-Type Script, more accurately) and Swift, the more I yearn for Rust.
It is not the right tool for scripting, true.
IMO there's still need for a higher level Rust where you don't need that last 20% of the performance and control.
Some people say that OCaml is the high level rust, but I think it's got a lot of gaps which rust doesn't.
Nice language otherwise.
It's also good to remind people that these things were borrowed by Rust from other languages too. Primarily the ML family of languages.
My opinion is that in Rust you have to make decisions on certain things which are taken for you by the garbage collector in other languages.
Should you store a reference or value in your struct? You can't just change it without modifying other places. I understand that this gives you the control to get the final 20% of performance in certain places but it's still lower level than other languages.
You could say just spam Arc everywhere and forget about references, but that itself is a low level decision that you make.
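A minimal sketch of the decision being described, with the same field written three ways (all names made up):

use std::sync::Arc;

struct Borrowed<'a> { name: &'a str }  // borrow: no copy, but a lifetime now spreads to every user
struct Owned { name: String }          // own: simplest, at the cost of clones
struct Shared { name: Arc<str> }       // share: cheap clones, but a deliberate "spam Arc" choice

fn main() {
    let s = String::from("example");
    let b = Borrowed { name: &s };
    let o = Owned { name: s.clone() };
    let sh = Shared { name: Arc::from(o.name.as_str()) };
    println!("{} {} {}", b.name, o.name, sh.name);
}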
Split ←((⊢-˜+`׬)∘=⊔⊢)
input2←' 'Split¨•file.Lines "../2020/2.txt" # change string to your file location
Day2←{
f←⊑{(⊑)+↕1+|-´}‿{-1} # Select the [I]ndex generator [F]unction
I←{F •BQN¨ '-' Split ⊑} # [I]ndices used to determine if the
C←{⊑1⊑} # [C]haracter appears in the
P←{⊑⌽} # [P]assword either
Part1←(I∊˜·+´C= P)¨ # a given number of times
Part2←(1= ·+´C=I⊏P)¨ # or at one of a pair of indices
⊑+´◶Part1‿Part2
}
•Show { Day2 input2}¨↕2
Early Rust had other sorts of things that a lot of folks would consider readability problems unrelated to symbols too: no keyword was allowed to be over five characters, so return was ret, continue was cont, etc.
Can you think of such cases?
If those are the case, well, I can construct something, but it's not something I've used directly. Four isn't unheard of if you're going by those rules, but five is a bit extra.
You cited )?)?; a little while ago, I personally would write this code like the final example over here: https://news.ycombinator.com/item?id=43234284
I don't see it in other fields, at all.
But almost the entirety of Computer Science is based on abstractions, because they're helpful to "dumb down" some details that aren't super-important for our day-to-day work. E.g., writing TCP protocols directly in Assembly would be too fine-grained a detail for most people's usual work, and using some existing abstraction is "good enough" virtually all of the time (even though we might be able to optimize things for our use cases if we did drop down to that level).
There exists programming work where fiddling with lifetimes is just too fiddly to be worthwhile (e.g., web development is probably more than fine using a good ol' garbage-collected language). This isn't about "dumbing down" anything, it's about refocusing on what's important for the job you're doing.
If you just want a better C/C++, afaik that's Zig, but I have no experience with it.
I love Rust, I am a devotee and an advocate.
But the packaging system, more importantly the lack of a comprehensive system crate, is one of the greatest weaknesses of Rust.
A simple programme can pull in hundreds of crates from goodness knows where and by Dog knows who, for all sorts of uncertainties.
There are workarounds, but they eat up time that could be used far more productively.
I've been learning Rust off and on, and I have a more serious need to get up to speed with it, but I'm unsure where it's best to start in this way.
1. The Rust Book (Free) - https://doc.rust-lang.org/book/
2. Rust by Example (Free) - https://doc.rust-lang.org/rust-by-example/
3. Rust Atomics and Locks - https://marabos.nl/atomics/
4. Rust in Action - https://www.rustinaction.com/
5. Rust for Rustaceans - https://rust-for-rustaceans.com/
Also Jon Gjengset's channel is immensely valuable: https://www.youtube.com/c/JonGjengset
Yeah, Rust Atomics and Locks is essential if you truly want to understand low-level concurrency. But you might have to also refer to the C++ std::atomic reference [1] to get a complete idea. It took me a while to grasp those concepts.
I found it more approachable than some of the other Rust books and highly recommend it as a first Rust book.
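A minimal sketch of the release/acquire pairing that book (and the C++ std::atomic reference) spends a lot of time on:

use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    let data = Arc::new(AtomicUsize::new(0));
    let ready = Arc::new(AtomicBool::new(false));

    let (d, r) = (Arc::clone(&data), Arc::clone(&ready));
    let producer = thread::spawn(move || {
        d.store(42, Ordering::Relaxed);
        r.store(true, Ordering::Release); // publish the write above
    });

    while !ready.load(Ordering::Acquire) {} // pairs with the Release store
    assert_eq!(data.load(Ordering::Relaxed), 42);
    producer.join().unwrap();
}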
Zig seems to follow the C tradition, and Rust C++.
They mean the domain that Rust is in.
Before Rust there was only C or C++ for real-time programming. C++ was an experiment (wildly successful IMO when I left it in 2001) at addressing the shortcomings of C. It turned out that there was too much of everything in C++: long compile times, a manual several inches thick, huge executables. Some experiments turned out not to be a good idea (exceptions, multiple inheritance, inheritance from concrete classes...).
Rust is a successor in that sense. It draws on the lessons of C++ and functional programming.
I hope I live long enough to see the next language in this sequence, one that learns from the mistakes of Rust (there are a few, and it will take some more years to find them all).
Anyways, I dislike C++, it is too bloated and I would rather just use C.
Also there have been alternatives to C and C++, even if they tend to be ignored by most folks.
That being said, I can't work with std::variant, and God knows I tried to like it. Rust's enums look a lot nicer by comparison, haven't had enough experience to run into potential rough edges which I'm sure are there.
Rust's defining feature is its borrow checker, which solves a similar problem as move semantics, but is more powerful and has saner defaults.
If you really only want a better C/C++, use C++ and amp up your use of safer types (or consider D).
In the end, language stability isn't as important as it used to be, people are quite used to fixing their code when upgrading dependencies to a new major version for instance.
I haven't yet seen something that would make me have to consider Zig, regardless of my personal opinion, like other languages that have grown to become unavoidable.
I developed some muscles I didn't know I had.
Seriously though, I immediately parse it as "generic bounds containing the erased lifetime, close parenthesis". It's not a big deal.
And of all the criticisms one might have of Rust (or any other programming language), "too many symbols" seems like a weak one.
Wouldn't have happened with a book with just sample pages.
Also as a way to increase my motivation to read it.
Plus I have money. This book costs about as much as a good bottle of wine or a bad bottle of whiskey.
Exactly.
A few years ago I did a really aggressive weeding out of my bookshelves as things were getting far too cluttered. In the process I threw out what must have been - at cover price - several thousand pounds worth of IT related books.
On the resale market they were all too stale to have any value (though I did manage to give a handful away to friends). In one way it was a bit painful, but those few thousand pounds worth of books has given me a huge (financial) return on that investment!
Cheap at the cost of a good bottle of wine ... for the foundations of a career!
I don't enjoy either but I have friends who decided to specialise and so I'm confident that you can easily reverse this split if you have decided you care more about one or the other.
I have this; I bought it because I want to reward the author for producing a quality work, and because I want to encourage the publishers to produce other works that would appeal to me.
I also happen to like physical texts so I bought the paperback but I have this and the digital edition. The latter is convenient for when I am travelling and appropriately formatted for an eReader (not just the raw html from these pages).
I have no trouble paying for physical books though.