But this is wrong: it is no longer the case that `Unique` implies any extra optimizations. It was later recognized that having those optimizations on a low-level unsafe building block is actually undesirable. Further, the `as_ref` functions mentioned in the article are not used by `Vec`'s internals; they are in any case only marked unsafe because they create references, not because of anything special about `Unique`.
Nowadays, all that `Unique` does is interact with `dropck` in a quirky way: the "exception to the exception." It is quite complicated and I always have to read up on it. It is explained in the Rustonomicon, in the "Click here to see why" section of this chapter: https://doc.rust-lang.org/nomicon/phantom-data.html#an-excep...
Thank you for your critique and you are right, I struggled with this chapter a lot.
I will read the link to get a better understanding of this type and update the section.
If I may ask, do you have more sources where I can read up on the history and current uses of `Unique` besides the Rustonomicon?
Thank you again for your engagement! It is very helpful to get these insights from people with more knowledge.
The one I particularly want to call out, because it came up this morning, is providing both Vec::reserve and Vec::reserve_exact.
Vec::reserve lets us hint about our upcoming capacity expectations without damaging the O(1) amortized growth which is the whole point of this collection type, but it can waste some memory to achieve this. This is probably what you want in most code.
Vec::reserve_exact is a narrower idea: we can hint about the ultimate capacity needed, and we don't waste memory, but if we're wrong and later need more capacity, there is a significant performance cost, because we threw away the amortized growth promise to get this.
C++ only provides the equivalent of Vec::reserve_exact, which makes this a footgun: if you use this call when you really needed the other one, you trash your perf. As a result, people teaching C++ tend to just say don't use reservation to, uh, reserve capacity. But then you're leaving perf on the table, so that's not great.
The claim that using reserve_exact "throws away the amortized growth promise" is wrong. You don't disable amortized growth, you just won't get extra headroom from that call. If you guessed too small, you may pay an extra reallocation later. Rust's Vec still provides amortized O(1) push overall.
> C++ only provides the equivalent of Vec::reserve_exact, which makes this a footgun: if you use this call when you really needed the other one, you trash your perf. As a result, people teaching C++ tend to just say don't use reservation to, uh, reserve capacity. But then you're leaving perf on the table, so that's not great.
It's not a footgun specific to reserve_exact. The standard says reserve(n) makes capacity at least n; implementations may allocate exactly n (and many do), but they're allowed to overallocate, and future growth remains amortized. Used properly (when you know or can bound the size), reserve is a common and recommended optimization.
Well, in some sense this depends on what exactly you think you're amortizing over. If you `Vec::reserve` with size N, fill to N, and then append a single element you get the usual amortized O(1) growth of an append (or at least you can, the docs for `Vec::reserve` say it may reserve additional space, not that it must). But if you `Vec::reserve_exact` with size N, fill to N, and then append a single element you are guaranteeing that that first append triggers a potentially O(N) resize.
From that point on in either case you would get the usual amortized growth of course.
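To make that concrete, here's a minimal sketch; the exact capacities are deliberately not asserted, because the docs don't guarantee them:

fn main() {
    let n = 1000;
    let mut a: Vec<u64> = Vec::new();
    a.reserve(n); // allowed to over-allocate for future growth
    let mut b: Vec<u64> = Vec::new();
    b.reserve_exact(n); // asks for no deliberate extra headroom

    // The only hard guarantee either call makes:
    assert!(a.capacity() >= n);
    assert!(b.capacity() >= n);

    a.extend(0..n as u64); // fill both to exactly n elements
    b.extend(0..n as u64);

    b.push(0); // if capacity was exactly n, this push forces an O(n) reallocation
    a.push(0); // may avoid reallocating if reserve left slack (not guaranteed)
}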
The documentation does not guarantee this, because memory allocators can't typically allocate arbitrarily-sized blocks of memory, instead rounding to the nearest supported size. For example, for small allocations glibc malloc allocates in multiples of 8 bytes, with a minimum size of 24 bytes. So if you make a 35-byte allocation, there will be 5 bytes of wasted space at the end which you could theoretically use to store more elements without reallocating if your collection grows.
If you're using the system allocator, Rust can't take advantage of this, because the C malloc/free APIs don't provide any (portable) way for the allocator to inform the application about this excess capacity. But Rust's (currently unstable) API for custom allocators does expose this information, and the documentation is written to allow Vec to take advantage of this information if available. If you reserve_exact space for 35 u8's, and your allocator rounds to 40 bytes (and informs the Vec of this via the allocator API), then the vector is allowed to set its capacity to 40, meaning the next append would not trigger a resize.
On current stable Rust, this is all just theoretical and Vec works as you describe -- but the documentation specifically does not promise this because the situation is expected to change in the future.
Now, try to frame a house with a jeweler's hammer, tapping in tiny increments and resizing every few boards, and you will crawl into quadratic time.
Who is using a jeweler's hammer to frame their house?
So what I'm arguing is if you don't misuse reserve_exact, then as you said you still get amortized growth. The OP's example misuses the tool and then blames it for not behaving differently on that first append.
The key part is that multiple calls to reserve_exact will cause repeated reallocations. The classic example is if someone defines a `pushfive` function that uses reserve_exact to increase the size by 5, and then pushes 5 times. Calling this function in a loop will take quadratic time, since each reserve_exact call grows the allocation by just a little, forcing a full copy each time. With reserve, the runtime is linear as expected.
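A sketch of that pattern (the `pushfive` name and shape are just the hypothetical from above):

// reserve_exact in tiny increments defeats amortized growth.
fn pushfive(v: &mut Vec<u32>) {
    v.reserve_exact(5); // requests capacity len + 5, reallocating nearly every call
    for i in 0..5 {
        v.push(i);
    }
}

// Each reallocation copies the whole vector, so this loop is O(n^2) overall.
// Swapping reserve_exact for reserve (or dropping the hint) makes it O(n).
fn build(times: usize) -> Vec<u32> {
    let mut v = Vec::new();
    for _ in 0..times {
        pushfive(&mut v);
    }
    v
}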
Don't pre-reserve inside pushfive: just push the 5 elements (Vec handles growth). Or, if you know how many times you'll call pushfive in a loop, reserve up front once with vec.reserve(5 * times), or use reserve instead of reserve_exact for incremental growth.
Yes, reserve is the safer default because it gives you slack and preserves amortized growth even if your usage pattern is messy.
But "needs global analysis" isn't unique to reserve_exact. Lots of perf knobs do (chunk sizes, buffering, locking granularity etc). The fix to this is to use the tool where its preconditions actually hold, not to avoid the tool.
So what I'm basically saying is that reserve_exact isn't inherently dangerous; it just assumes you know the final size and won't exceed it. If you keep bumping it in tiny steps (pushfive style), that's just misuse, so treating it as a flaw is unwarranted.
Dangerous is perhaps overstating it, but the only utility reserve(_exact) has is performance and predictability. If using the API can be worse than doing nothing, I think it warrants being referred to as a footgun.
// Append newElems to vec, moving the elements instead of copying them:
vec.insert(
    vec.cend(),
    std::make_move_iterator(newElems.begin()),
    std::make_move_iterator(newElems.end())
);
You would need a very shitty implementation to achieve anything close to what you are describing (perhaps one that only allocates in powers of 2), and even then it would recover fairly quickly.
`reserve` and `reserve_exact` are used when mutating an existing vec. What you provide is not the total wanted capacity but the additional wanted capacity.
`reserve` allows to avoid intermediate allocation.
Let's say that you have a vec with 50 items already and plan to run a loop to add 100 more (so 150 in total). The initial internal capacity is most likely 64, if you just do regular `push` calls without anything else, there will be two reallocations: one from 64 to 128 and one from 128 to 256.
If you call `reserve(100)`, you'll be able to skip the intermediate 64 to 128 reallocation: it will do a single reallocation from 64 to 256 and it will be able to handle the 100 pushes without any reallocation.
If you call `reserve_exact(100)`, you'll get a single reallocation from 64 to 150 capacity, and also a guarantee of no reallocation during the processing loop.
The difference is that `reserve_exact` is better if these 100 items are the last ones you intend to push, as you get a full vec of capacity 150 containing 150 items. However, if you intend to push more items later, maybe 100 more, then you'd need to reallocate and break the amortized cost guarantees. With `reserve`, you don't break the amortized cost if there are follow-up inserts, at the price of not being at 100% usage all the time. In the `reserve` case, the capacity of 256 would be enough and would let you go from 150 to 250 items without any reallocation.
In short, a rule of thumb could be:
- If creating a vec and you know the total count, prefer `Vec::with_capacity`
- If appending a final chunk of items and then no longer adding items, prefer `Vec::reserve_exact`
- If appending a chunk of items which may not be final, prefer `Vec::reserve`
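A quick sketch of those three rules, reusing the 50-plus-100 scenario (capacity values are typical, not guaranteed):

fn main() {
    // 1. Total count known up front:
    let known: Vec<u8> = Vec::with_capacity(150);
    assert!(known.capacity() >= 150);

    // 2. Appending a final batch to an existing vec:
    let mut v: Vec<u8> = (0u8..50).collect();
    v.reserve_exact(100); // note: the argument is *additional* capacity, not total
    v.extend(0u8..100);   // one reallocation, ends at (or near) 100% usage

    // 3. Appending a batch that may not be the last:
    let mut w: Vec<u8> = (0u8..50).collect();
    w.reserve(100); // keeps amortized-growth slack for later pushes
    w.extend(0u8..100);
}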
Regarding the official documentation, I've returned to read them. I agree that the docs would benefit from more discussion about when to use each method. In particular, the code examples are currently exactly the same, which is not great. Still, the most critical piece of information is there [0]:
> Prefer `reserve` if future insertions are expected.
If anyone wants to reuse my explanation above, feel free to do it; no need to credit.
[0]: https://doc.rust-lang.org/std/vec/struct.Vec.html#method.res...
docs: https://doc.rust-lang.org/std/vec/struct.Vec.html#method.res...
After some pondering, and reading the Rust documentation, I came to the conclusion that the difference is this: reserve() will grow the underlying memory area to the next increment, or more than one increment, while reserve_exact() will only grow the underlying memory area to the next increment, but no more than that.
Eg, if grow strategy is powers of two, and we are at pow(2), then reserve() may skip from pow(2) to pow(4), but reserve_exact() would be constrained to pow(3) as the next increment.
Or so I read the documentation. Hopefully someone can confirm?
The C++ one, however, will not reserve more than you ask for (in the case that you reserve greater than the current capacity). It's an exact reservation in the rust sense.
> reserve() will grow the underlying memory area to the next increment, or more than one increment, while reserve_exact() will only grow the underlying memory area to the next increment, but no more than that
No, not quite. Reserve will request as many increments as it needs, and reserve_exact will request the exact total capacity it needs.
Where the docs get confusing, is that the allocator also has a say here. In either case, if you ask for 21 items, and the allocator decides it prefers to give you a full page of memory that can contain, say, 32 items... then the Vec will use all the capacity returned by the allocator.
Of course, this can change in the future; in particular, the entire allocator API is still unstable and likely won't stabilize any time soon.
We could, as that comment suggests, check the slice and see if there's enough room for more than our chosen capacity. We could also debug_assert that it's not less room, 'cos the Allocator promised it would be big enough. I dunno if that's worthwhile.
The cppreference documentation for std::vector::reserve says:
> Increase the capacity of the vector (the total number of elements that the vector can hold without requiring reallocation) to a value that's greater or equal to new_cap.
I believe that the behaviour of reserve() is implementation-defined.
If you don't offer Vec::reserve_exact then people who needed that run out of RAM and will dub your stdlib garbage. If you don't offer Vec::reserve as we've seen C++ programmers will say "Skill issue" whenever a noob gets awful performance as a result. So, it's an easy choice.
It would be nice if this were true but AFAIK the memory allocator interface is busted - Rust inherits the malloc-style from C/C++ which doesn’t permit the allocator to tell the application “you asked for 128 bytes but I gave you an allocation for 256”. The alloc method just returns a naked u8 pointer.
But the (not yet stable) Allocator::allocate returns Result<NonNull<[u8]>, AllocError>: that is, either a slice of bytes OR a failure.
Vec actually relies on Allocator, not GlobalAlloc (it's part of the standard library, so it's allowed to use unstable features).
So that interface is allowed to say you asked for 128 bytes but here's 256. Or, more likely, you asked for 940 bytes, but here's 1024. So if you were trying to make a Vec<TwentyByteThing> and Vec::with_capacity(47) it would be practical to adjust this so that when the allocator has 1024 bytes available but not 940 we get back a Vec with capacity 51 not 47.
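If you're curious, a rough nightly-only sketch of what that looks like with the unstable API (details may change before stabilization):

// Nightly-only; requires the unstable allocator_api feature.
#![feature(allocator_api)]
use std::alloc::{Allocator, Global, Layout};

fn main() {
    let layout = Layout::array::<u8>(940).unwrap();
    let block = Global.allocate(layout).unwrap();
    // The returned NonNull<[u8]> carries the *actual* size,
    // which may exceed the 940 bytes requested:
    println!("asked for 940 bytes, got {}", block.len());
    unsafe { Global.deallocate(block.cast(), layout) };
}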
By contrast, reserve will allocate space for the extra elements following the growth strategy. If you reserve(100) on an empty Vec, the allocation will actually be able to hold 128 (assuming the growth strategy is doubling).
Vec::reserve(100) on an empty Vec will give you capacity 100, not 128 even though our amortization is indeed doubling.
The rules go roughly like this, suppose length is L, present capacity is C, reserve(N):
1. L + N <= C ? Enough capacity already, we're done, return
2. L + N <= C * 2 ? Ordinary doubling, grow to capacity C * 2
3. Otherwise, try to grow to L + N
This means we can grow any amount more quickly than the amortized growth strategy, or at the same speed, but never less quickly. We can go 100, 250, 600, 1300 and we can go 100, 200, 400, 800, 1600, but we can't do 100, 150, 200, 250, 300, 350, 400, 450, 500...
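In code, the rule is roughly this (an implementation detail, not a documented guarantee):

// Rough shape of the growth rule described above.
fn new_capacity(len: usize, cap: usize, additional: usize) -> usize {
    let required = len.checked_add(additional).expect("capacity overflow");
    if required <= cap {
        cap                   // rule 1: enough capacity already
    } else {
        required.max(cap * 2) // rules 2 and 3: double, or jump straight to required
    }
}

fn main() {
    assert_eq!(new_capacity(100, 100, 1), 200);   // ordinary doubling
    assert_eq!(new_capacity(100, 100, 500), 600); // grows faster than doubling
}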
reserve_exact() reallocates by exactly what you ask for.
If you reserve() space for 1 more element 1000 times, you will get ~30 reallocations, not 1000.
This inexact nature is useful when the total size is unknown, but you append in batches. You could implement your own amortised growth strategy, but having one built-in makes it simple for different functions to cooperate.
Surely you mean ~10 reallocations, because 2^10=1024, right?
This matters for performance reasons, and also because unsafe code is allowed to use the extra capacity of the vector -- e.g. you can write to the unused space and then call set_len to insert elements in-place. If `reserve` did not guarantee the requested capacity was allocated, this would cause buffer overflows.
The documentation for these functions is very carefully worded to explain exactly what each method does and does not guarantee. Both functions have the exact same hard guarantees: after calling the function, the vector will have at least enough capacity for the additional elements.
The difference is in what the methods try to do without guaranteeing exact behavior: reserve follows the usual amortized growth pattern, while reserve_exact tries to avoid overallocating. These are described as best-effort performance optimizations rather than guarantees you can rely on because 1) the amortized growth strategy is subject to change between platforms and compiler versions, and 2) memory allocators typically don't support arbitrarily-sized blocks of memory, and instead round up to the nearest supported size.
reserve()
> Reserves capacity for at least `additional` more elements to be inserted in the given `Vec<T>`. The collection may reserve more space to speculatively avoid frequent reallocations.
reserve_exact()
> Reserves the minimum capacity for at least `additional` more elements to be inserted in the given `Vec<T>`. Unlike `reserve`, this will not deliberately over-allocate to speculatively avoid frequent allocations... Prefer `reserve` if future insertions are expected.
So if you know you'll need n now but will probably need more later, use reserve(). If you know n is all you'll need, use reserve_exact().
Vec<T> is a contiguous growable array, so the optimization game is to minimise allocations, reallocations (to keep it contiguous), and unused allocated memory.
Do reserve and reserve_exact increment the capacity, or ensure there’s still at least that much capacity remaining?
If the former, if I reserve(1) 10 times in a row, does that mean it could be rounding up to a thousand elements (say, because of the page size) each time?
So reserve(1) 10 times in a row will just repeatedly ensure there's enough space for at least 1 more item, which after the first time there certainly is†
There's an excellent chance the capacity check in particular got inlined, so the optimized machine code will not "actually" call a function ten times, it'll call it once and then maybe emit a single capacity check inline, the subsequent checks even a crap 1980s optimiser will go "We do not need to check that again" and skip it.
† If the Vec is full and can't grow, which really can happen for Zero Size Types in particular, then this panics, and what happens next is a compile time choice, in debug likely it just tells you what went wrong and suggests how to make a backtrace. In the ordinary case that we instead got to keep running then it succeeded.
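A sketch, if you want to convince yourself (the only asserted fact is that the capacity doesn't keep climbing):

fn main() {
    let mut v: Vec<u32> = Vec::new();
    v.reserve(1); // the first call actually allocates
    let cap = v.capacity();
    for _ in 0..9 {
        v.reserve(1); // capacity is already >= len + 1, so nothing happens
    }
    assert_eq!(v.capacity(), cap); // no further reallocation occurred
}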
Nice. I truly do appreciate the developer ergonomics that went into Rust. Its APIs are pleasantly well thought out.
You will see armies of fanatics voting negative.
The fifth and eighth articles are explicitly negative about Rust. The seventh is about serious bugs in the Rust ecosystem.
Let's look at the top comment on the highest post: https://www.reddit.com/r/rust/comments/1cdqdsi/lessons_learn...
It's Josh Triplett, long time team member. It starts like this:
> First of all, thank you very much for taking the time to write this post. People who leave Rust usually don't write about the issues they have, and that's a huge problem for us, because it means we mostly hear from the people who had problems that weren't serious enough to drive them away. Thank you, seriously, for caring enough to explain the issues you had in detail.
This is a very different vibe than the one you're describing.
It's true that the Rust subreddit can brush criticism off, but that's also because there are a lot of low-quality criticisms of Rust, and seeing the same thing over and over again can be frustrating. But I've never seen a well thought out post that's critical get downvoted, they're often upvoted, and discussed in a very normal way.
That said, it's reddit, so there's always gonna be some garbage posts.
The Rust community is doing a fantastic job of training both the next generation as well as those looking to dip their toes in.
It's been replaced with a "by example" approach. It's much easier to teach: just add a fake field that acts as if you had this type in your struct. Rust then figures out all of the details it needs.
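For instance, the Nomicon's slice-iterator example boils down to something like this:

use std::marker::PhantomData;

// A slice iterator that stores only raw pointers, plus a fake field
// saying "act as if I contain a &'a T".
struct Iter<'a, T> {
    start: *const T,
    end: *const T,
    _marker: PhantomData<&'a T>, // gives the right variance and borrow behaviour
}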
I had a loose grasp on variance then, didn't teach it well, and the team didn't understand it either. Among other things, it made even very early and unsound TypeScript pretty attractive just because we didn't have to annotate type variance!
I'm happy with Rust's solution here! Lifetimes and Fn types (especially together) seem to be the main place where variance comes up as a concept that you have to explicitly think about.
Assuming Parent <- Child ("<-" denotes inheritance):
- If Generic<Parent> <- Generic<Child>: it's covariant.
- If Generic<Parent> -> Generic<Child>: it's contravariant.
- Otherwise: it's invariant.
Or at least it's that straightforward in C#. Are there complications in Rust?
Scala was the first language I encountered that tried to simplify that away by lifting the variance annotations into the type parameter directly. It reduced some of the power, but it made things easier for developers to understand. A full variance model would annotate specific features (methods) of a type as co/contra/invariant.
I'm not sure what approach C# takes - I haven't looked into it.
Rust doesn't expose variances for data structures at all. It exposes them for traits (type classes) and lifetimes, but neither of those are accessible in a higher order form. Trait usage is highly constrained and so is lifetime usage.
Traits mainly can be used as bounds on types. Some subset of traits, characterized as object traits, can be used as types themselves, but only in an indirect context. These are highly restricted. For example, if you have traits T and U where U extends T, and you have a reference `&dyn U`, you can't convert that to a `&dyn T` without knowledge of the underlying concrete type. You _can_ convert `&A where A: U` to `&B where B: T`, but that just falls out of the fact that the compiler has access to all the concrete type and trait definitions there and can validate the transform.
Rust's inheritance/variance story is a bit weird. They've kept it very minimal and constrained.
No it isn't. The type is covariant because a reference is a subtype of all parent types. The function read must have its argument invariant because it both takes and returns an instance of the type. I think you're confusing the variance of types for the variance of instances of those types. Read is effectively a Function(t: type, Function(ref t, t)). If it were covariant as you suggest, we would have a serious problem. Consider that (read Child) works by making a memory read of the first (sizeof Child) bytes of its argument. If read were covariant, that would imply you could call (read Child) on a Parent type and get a Parent back, but that won't work because (sizeof Child) can be greater than (sizeof Parent). Read simply appears covariant because it's generic. (read Child) ≠ (read Parent), but you can get (read Parent). It also appears contravariant because you can get (read Grandchild).
Scala doesn't simplify anything, that's just how variance works.
The discussion about the type sizes is a red herring. If the type system in question makes two types of differing sizes not able to be subtypes of each other, then calling these things "Child" and "Parent" is just a labeling confusion on types unrelated by a subtyping relationship. The discussion doesn't apply at all to that case.
The variance is a property of the algorithm with respect to the type. A piece of code that accepts a reference to some type A and only ever reads from it can correctly accept a reference to a subtype of A.
A piece of code that accepts a reference to a type A and only ever writes to it can correctly accept a reference to a supertype of A.
In an OO language, an instance method is covariant whenever the subject type occurs in return position (read analogue), and it's contravariant whenever the subject type occurs in a parameter position (write analogue). On instance methods where both are present, you naturally reduce to the intersection of those two sets, which causes them to be annotated as invariant.
The same is true of a piece of code that writes through the reference or returns it. That's how sub-typing works.
> A piece of code that accepts a reference to a type A and only ever writes to it can correctly accept a reference to a supertype of A.
Have you ever programmed in a language with subtyping? Let me show you an example from Java (a popular object oriented programming language).
class Parent {}

class Child {
    int x;
}

class Example {
    static void writeToChild(Child c) {
        c.x = 20;
    }

    static void main() {
        writeToChild(new Parent()); // error: incompatible types: Parent cannot be converted to Child
    }
}
This code snippet doesn't compile, but suppose the compiler allowed it: could it work? No. The function writeToChild cannot accept a reference to the supertype even though it only writes through the reference.

I've seen a lot of people in this comment section talking about read and write, which I find really odd. They have nothing to do with variance. The contravariant property is a property of function values and their parameters. It is entirely unrelated to the body of the function. A language without higher-order functions will actually never have a contravariant type within it. This is why many popular OOP languages do not have them.
Anyway, Rust has no subtyping except for lifetimes ('a <: 'b iff 'a lasts at least as long as 'b), so variance only arises in very advanced use cases.
The simplest example of incorrect variance → UB is that `&'a mut T` must be invariant in T. If it were covariant, you could take a `&'a mut &'static T`, write a `&'b T` into it for some non-static lifetime `'b` (since `'static: 'b` for all `'b`), and then... kaboom. 'b ends but the compiler thought this was a `&'a mut &'static T`, and you've got a dangling reference.
`&'a mut T` can't be contravariant in T for a similar reason: if you start with a `&'a mut &'b T`, contravariance would let you cast it to a `&'a mut &'static T`, and then you'd have a `&'static T` derived from a `&'b T`, which is again kaboom territory.
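Here's a minimal sketch of the first scenario; the names are made up, and the offending line is exactly what invariance makes the compiler reject:

fn store<'a>(slot: &mut &'a str, value: &'a str) {
    *slot = value;
}

fn main() {
    let mut s: &'static str = "long lived";
    {
        let owned = String::from("short lived");
        // This would need &mut &'static str to coerce to &mut &'a str
        // for some shorter 'a, which invariance forbids:
        // store(&mut s, &owned); // error: `owned` does not live long enough
    }
    println!("{s}"); // would read a dangling reference if the call were allowed
}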
So, variance’s effect is to guide the compiler and prevent dangling references from occurring at runtime by making code that produces them invalid. Neither of the above issues is observable at runtime (barring compiler bugs) precisely because the compiler enforces variance correctly.
It's like how I can hold in my head how classical DH KEX works and I can write a toy version with numbers that are too small - but for the actual KEX we use today, which is Elliptic Curve DH I'm like "Well, basically it's the same idea but the curves hurt my head so I just paste in somebody else's implementation" even in a toy.
Sorry?
Got a Container<Animal> and want to treat it as a Container<Cat>? Then you can only write to it: it's ok to put a Cat in a Container<Cat> that's really a Container<Animal>.
Reading from it is wrong. If you treat a Container<Animal> like a Container<Cat> and read from it, you might get a Dog instead.
The same works in reverse, treating a Container<Cat> like a Container<Animal> is ok for read, but not for write.
This always annoyed me about the Python type annotations: you are supposed to already know what contravariant / covariant / invariant means, as in `typing.TypeVar(name, covariant=False, contravariant=False, infer_variance=False)`.
It's used in documentation and error messages too:
> the SendType of Generator behaves contravariantly, not covariantly or invariantly.
https://docs.python.org/3/library/typing.html#annotating-gen...
That's horrible design. I was utterly perplexed whenever the compiler asked me to add one of those strange fields to a struct. If it had just asked me to include the variance in generic parameters, I would have had no such confusion. Asking programmers to learn the meaning of an important concept in programming is entirely reasonable, especially in Rust which is a language for advanced programmers that expects knowledge of many other more complicated things.
What's more, the implicit variance approach might create dangerous code. It is possible to break the interface for a module without making any change to its type signature. The entire point of types is to provide a basic contract for the behaviour of an object. A type system with variance in it that doesn't let you write it down in the signature is deficient in this property.
Lifetimes imply the need for the idea of variance.
Separately, if everything were invariant, you wouldn't be able to use a `&'static T` where a `&'non_static T` was expected, which would be quite unpleasant!
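A tiny example of that coercion in action:

fn first_word<'a>(s: &'a str) -> &'a str {
    s.split_whitespace().next().unwrap_or("")
}

fn main() {
    let greeting: &'static str = "hello world";
    // Fine: &'static str coerces to the shorter &'a str because & is
    // covariant in its lifetime.
    let w = first_word(greeting);
    assert_eq!(w, "hello");
}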
Another example: I was trying to see how i64::isqrt is implemented, but first you have to wade through layers of macros.
Same question about other «signal» types.
compiler + standard library (core library to be more specific)
`NonNull` is mostly "just" a library type in the core library, but it uses two unstable/nightly attributes to tell the compiler about its special properties (see other answers; also, to be clear, it's not a `lang_item` attribute but something other types can use too).
The special thing about core/std in Rust is that, because it is tied to the specific compiler version, it can use a (carefully considered) subset of unstable features, as long as they work for that specific use case and don't "leak"/accidentally stabilize through it.
But these attributes are in the process of being replaced with a new mechanism of pattern annotations. I.e., NonNull would be annotated with an attribute containing a `1..` pattern (i.e. the value is 1 or larger), and other patterns are getting support too. (The second attribute, to enforce/guarantee niche optimizations, might still be needed for stability guarantees.)
And I think??? there are plans to stabilize this new mechanism, but I'm absolutely not sure what the state of this is (RFC accepted?, stabilization open/likely/decided? implementation done/partial?). I'm currently not in the loop.
So further in the future you will probably be able to write your own types with other niches, e.g. a u8 with `..127` (MSB unused).
#[rustc_layout_scalar_valid_range_start(1)]
This gives rustc enough information to perform niche optimizations such as collapsing Option<NonNull<T>>. You can technically use those for your own types, but it requires nightly, obviously.
If you have a USHoliday enum, with values like Juneteenth, the fact is there aren't that many US national holidays, so both USHoliday and Option<USHoliday> will be the same size: one byte, with no extra work.
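A sketch with a hypothetical handful of variants (the niche optimization for plain enums like this isn't formally guaranteed, but current rustc does it):

use std::mem::size_of;

// Only 5 of the 256 bit patterns of the byte are used, so one of the
// remaining 251 serves as the niche encoding None.
#[derive(Clone, Copy)]
enum USHoliday { NewYears, Juneteenth, IndependenceDay, Thanksgiving, Christmas }

fn main() {
    assert_eq!(size_of::<USHoliday>(), 1);
    assert_eq!(size_of::<Option<USHoliday>>(), 1);
}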
#[rustc_layout_scalar_valid_range_start(1)]
#[rustc_nonnull_optimization_guaranteed]
This declares that NonNull cannot hold a value outside of the range (1..). These magic attributes are only allowed to be used by the stdlib, and they're direct compiler instructions. NonNull is special because the optimisation is guaranteed, so the compiler does have special code to handle it, but other types can be optimised without compiler changes.
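As noted upthread, you can technically try this on nightly; a very rough sketch, assuming the attribute still behaves this way outside std:

// Nightly-only; rustc-internal attributes, not a stable API.
#![feature(rustc_attrs)]

#[rustc_layout_scalar_valid_range_start(1)]
#[repr(transparent)]
struct MyNonZeroU8(u8);

fn main() {
    // The attribute makes even construction unsafe: we are vouching
    // that the value is never 0.
    let one = unsafe { MyNonZeroU8(1) };
    // 0 is free to encode None, so the Option is still one byte:
    assert_eq!(std::mem::size_of::<Option<MyNonZeroU8>>(), 1);
    let _ = one;
}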
And again, the best way to explain this is by comparison to C++
I do not like lighthearted banter, but this is not the problem. The problem is that the author has a hard time writing a single paragraph without it. They should seriously consider doing some training in technical or scientific writing.
> I guess I understand why your username is "constant crying"
What a novel comment. But yes, I do hate the constant positivity, especially in the face of a subpar product.
As to the underlying topic, this is a very introductory article; I'd wager 20 minutes with the source code would get you to the same level of understanding.
Like, in Vec we have the following things that most normal Rust code wouldn't do:
- Unique<u8> instead of Unique<T>, to micro optimize code size
- RawVec and RawVecInner, due to the previous point, the internal structure of libstd, its relation to libcore/liballoc, a cleaner separation between unsafe and safe code, etc., which a "just my own vec" impl might not have
- internal `Unique` helper type to reuse code between various data structures in std
- the `NonNull` part is another micro optimization which might save 1-4 bytes for `Option<Vec<T>>` (up to 4 due to alignment, but I think this optimization isn't guaranteed in this context)
Like, it could be just a `Vec<T> { ptr: *const T, cap: usize, len: usize, _marker: PhantomData<T> }`, but that wouldn't have quite the maximized (and micro-optimized) fine tuning which tends to be in std libraries but not in random other code. (I mean, e.g., the code-size micro-optimization is irrelevant for most Rust code, but std goes into _all_ (std-using) Rust code, so it's needed.)
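For illustration, a minimal sketch of that un-layered version (not production code: no ZST handling, no Drop impl, only rudimentary failure handling):

use std::alloc::{alloc, realloc, Layout};
use std::marker::PhantomData;
use std::ptr;

struct MyVec<T> {
    ptr: *mut T,
    cap: usize,
    len: usize,
    _marker: PhantomData<T>, // "we own Ts" for dropck/variance purposes
}

impl<T> MyVec<T> {
    fn new() -> Self {
        MyVec { ptr: ptr::null_mut(), cap: 0, len: 0, _marker: PhantomData }
    }

    fn push(&mut self, value: T) {
        if self.len == self.cap {
            // Doubling growth, starting from a small fixed capacity.
            let new_cap = if self.cap == 0 { 4 } else { self.cap * 2 };
            let new_layout = Layout::array::<T>(new_cap).unwrap();
            self.ptr = unsafe {
                if self.cap == 0 {
                    alloc(new_layout) as *mut T
                } else {
                    let old_layout = Layout::array::<T>(self.cap).unwrap();
                    realloc(self.ptr as *mut u8, old_layout, new_layout.size()) as *mut T
                }
            };
            assert!(!self.ptr.is_null(), "allocation failed");
            self.cap = new_cap;
        }
        unsafe { self.ptr.add(self.len).write(value) };
        self.len += 1;
    }
}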
But anyway, yes, unsafe rust can be hard. Especially if you go beyond "straight forward C bindings" and write clever micro optimized data structures. It's also normally not needed.
But also "straight forward C bindings" can be very okay from complexity _as long as you don't try to be clever_ (which in part isn't even rust specific, it tends to just become visible in rust).
If you were to implement Vec without layering, it would be no more complicated than writing a dynamic array in C (but more verbose due to all the unsafe blocks and unabbreviated names)
If you look at an advanced C library, you'll see perhaps some odd tricks or nifty algorithms, but you probably won't see something that leaves you scratching your head about what the code is even asking the computer to do.
Here's what it looked like at 1.0.0: https://github.com/rust-lang/rust/blob/1.0.0/src/libcollecti...
For example MIRI can understand pointer twiddling under strict provenance. It'll synthesize provenance for the not-really-pointers it is working with and check what you wrote works when executed. But if your algorithm can't work with strict provenance (and thus wouldn't work for CHERI hardware for example) but your target is maybe an x86-64 Windows PC so you don't care, you can write exposed or even just arbitrary "You Gotta Believe Me" pointer twiddling, MIRI just can't check it any more when you do that, so, hope you never make mistakes.
I bet Vec's implementation is full of unsafe blocks and careful invariant management. Users face different hoops: lifetime wrangling, fighting the borrow checker on valid patterns, Pin semantics, etc. Rust trades runtime overhead for upfront costs: compile-time complexity and developer time wrestling with the borrow checker which is often the right trade, but it's not hoop-free.
The "close to hardware" claim needs qualification too. You're close to hardware through abstractions that hide significant complexity. Ada/SPARK gives formal correctness guarantees but requires proof effort and runtime checks (unless you use SPARK which is a subset of Ada). C gives you actual hardware access but manual memory management. Each has trade-offs - Rust's aren't magically absent.
One should wonder what else is there... but they do not wonder, they are sadly rigid with their thinking that Rust is the perfect memory safe language with zero runtime overhead.
Plenty of people wonder. It’s part of building the language.
And you can wonder, is this accidental complexity? Or is this necessary complexity?
From what I heard online I was expecting it to be a lot harder to understand!
The moving/borrowing/stack/heap stuff isn't simple by any means, and I'm sure as I go it will get even harder, but it's just not as daunting as I'd expected
It helps that I love compiler errors and Rust is full of them :D Every error the compiler catches is an error my QA/users don't
Over the course of the last decade I've made several attempts to learn Rust, but was always frustrated with having code that reasonably should compile, but didn't and I had to run the build to even find that out.
Now we have rust-analyzer and non-lexical lifetimes, both tremendously improving the overall experience.
I still don't enjoy the fact that borrows are at the struct level, so you can't just borrow one field (there's even a discussion on that somewhere in Rust's repo), but I can work around that.
To drive the point home: I'm a frontend developer. This is a very different environment compared to what I'm used to, yet I can be productive in it, even if at a slower pace in comparison.
I can code much faster in almost any language compared to Rust. It creates mental overhead. For example, to compare it to siblings, Swift and C++ (yes, even C++, but with a bit of knowledge of good practices) lets you produce stuff more easily than Rust.
It is just that Rust, compared to C++, comes extra-hardened. But now go refactor your Rust code if you started to add lifetimes around... things get coupled quickly. It is particularly good at hardening and particularly bad at prototyping.
The two seem to be at odds when you have borrowing without a garbage collector, since you need to mark lifetimes in some way.
In fact, most of the time, I would favor styles like this even when not prototyping.
In the end, refactoring is also something natural. I would reserve lifetimes for the very obvious cases with controlled propagation, or for performance-sensitive spots.
amen! I despise Python even though I used to love it, and that's because it's full of runtime errors and unchecked exceptions. The very instant I learned more strongly typed languages, I decided never to go back.
Statically typing things is great, and I enjoy it too when it is practical, but I don't enjoy it when it becomes more complicated than the actual thing I want to express, due to how the type system of that particular language or third-party tool (in Python) works.