Go has sub-second build times even on massive codebases. Why? Because it doesn't do a lot at build time. It has a simple module system, a (relatively) simple type system, and leaves a whole bunch of stuff to be handled by the GC at runtime. It's great for its intended use case.
When you have things like macros, advanced type systems, and robustness guarantees at build time, then you have to pay for that.
A big reason that amalgamation builds of C and C++ can absolutely fly is that they aren't reparsing headers and they generate exactly one object file, so the linker has hardly any work to do.
Once you add static linking to the toolchain (in all of its forms) things get really fucking slow.
Codegen is also a problem. Rust tends to generate a lot more code than C or C++, so even once the compiler is done with most of its typechecking work, the backend and assembler have a lot to churn through.
Could you expand on that, please? Every time you run a dynamically linked program, it is linked at runtime (unless it explicitly avoids linking unnecessary stuff by dlopening things lazily, which pretty much never happens). If it is fine to link on every program launch, linking at build time should not be a problem at all.
If you want to have link time optimization, that's another story. But you absolutely don't have to do that if you care about build speed.
Wouldn't you say a lot of that comes from the macros and (by way of monomorphisation) the type system?
This has tradeoffs: increased ABI stability at the cost of longer compile times.
I’d like to see tooling for this to pinpoint bottlenecks - it’s not always obvious what’s making builds slow.
If it improves compile time, that sounds like a bug in the compiler or the design of the language itself.
I second this enthusiastically.
Even this can lead to unworkable compile times, to the point that code is rewritten.
I can believe that, but even so it's caused by the type system monomorphising everything. When you use qsort from libc, you are using pre-compiled code from a library. When you use slice::sort(), you get custom assembly compiled to suit your application. Thus, there is a lot more code generation going on, and that is caused by the tradeoffs they've made with the type system.
Rust's approach gives you all sorts of advantages, like fast code and strong compile-time type checking. But it comes with warts too, like fat binaries, and a bug in slice::sort() can't be fixed by just shipping a new std dynamic library, because there is no such library. It's been recompiled, just for you.
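A rough sketch of the contrast (purely illustrative, assuming the `libc` crate as a dependency):

    use libc::{c_int, c_void};

    // One pre-compiled qsort body lives in libc; every comparison goes
    // through an indirect call to this comparator.
    unsafe extern "C" fn cmp_i32(a: *const c_void, b: *const c_void) -> c_int {
        let (a, b) = (*(a as *const i32), *(b as *const i32));
        a.cmp(&b) as c_int
    }

    fn main() {
        let mut v = vec![3i32, 1, 2];

        // C-style: shared, generic machine code, indirect call per comparison.
        unsafe {
            libc::qsort(
                v.as_mut_ptr() as *mut c_void,
                v.len(),
                std::mem::size_of::<i32>(),
                Some(cmp_i32),
            );
        }

        // Rust-style: slice::sort is monomorphized for i32 right here, so
        // fresh machine code is generated and optimized just for this program.
        v.sort();
    }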
FWIW, modern C++ (like boost) that places everything in templates in .h files suffers from the same problem. If Swift suffers from it too, I'd wager it's the same cause.
But not having to is a win, as the monomorphised sorts are just much faster at runtime than having to do an indirect call for each comparison.
I was all excited to conduct the "cargo check; mrustc; cc" is 100x faster experiment, but I think at best, the multiple is going to be pretty small.
The compiler is optimized for compilation speed, not runtime performance. Generally speaking, it does well enough, especially because its use case is often applications where "good enough" is good enough (i.e., IO-heavy applications).
You can see that with "gccgo". Slower to compile, faster to run.
For pure computational workloads, it'll be faster. However, anything with heavy allocation will suffer, as apparently the gccgo GC and GC-related optimizations aren't as good as those of the standard Go compiler (gc).
Since fast compilation was a goal, every part of the design was looked at through a rough "can this be a horrible bottleneck?" lens, and discarded if so. For example, the import (package) system was designed to avoid the horrible, inefficient mess of C++. It's obvious that you never want to compile the same package more than once and that you need to support parallel package compilation. These may be blindingly obvious, but if you don't think about compilation speed at design time, you'll get this wrong and will never be able to fix it.
As far as optimizations vs. compile speed goes, it's just a simple case of diminishing returns. Since Rust has maximum possible performance as a goal, it's forced to go well into diminishing-returns territory, sacrificing a ton of compile speed for minor performance improvements. Go has far more modest performance goals, so it can get 80% of the possible performance for only 20% of the compile cost. Rust can't afford to relax its stance because it's competing with languages like C++, and to some extent C, that are willing to go to any length to squeeze out an extra 1% of performance.
Unless you use sqlite, in which case your build takes a million years.
It's not that it can't be done, but that it usually is not worth the hassle; our goal should be for compilation to be fast despite not everything being in one file.
Turbo Pascal is a prime example of a compiler that won the market not least because of its - for the time - outstanding compilation speed.
In the same vein, a language can be designed for fast compilation. Pascal in general was designed for single-pass compilation, which made it naturally fast. All the necessary forward declarations were a pain, though, and the victory of languages that are not designed for single-pass compilation proves that, while doable, it was not worth it in the end.
The overall principle is sound though: it's true that doing some work is more than doing no work. But the borrow checker and other safety checks are not the root cause of Rust's compile-time performance problems.
Stuff like inserting bounds checks puts more work on the optimization passes and the codegen backend, as they simply have to deal with more instructions. And that then puts more symbols and larger sections in the input to the linker, slowing that down. Even if the frontend "proves" a check is unnecessary, that calculation isn't free. Many of those features are related to "safety" due to the goals of the language. I doubt the syntax itself really makes much of a difference, as the parser isn't normally high on the profiled times either.
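A minimal sketch of what that extra work looks like at the source level (illustrative only):

    // Indexed loop: the frontend emits a bounds check for every `xs[i]`,
    // and it is then the optimizer's job to prove the checks away (or keep
    // them). Either way, that's extra IR to chew through.
    fn sum_indexed(xs: &[u64]) -> u64 {
        let mut total = 0;
        for i in 0..xs.len() {
            total += xs[i];
        }
        total
    }

    // Iterator form: no per-element bounds check is generated in the first
    // place, so there is simply less work for the optimizer and codegen.
    fn sum_iter(xs: &[u64]) -> u64 {
        xs.iter().sum()
    }

    fn main() {
        let data = vec![1, 2, 3, 4];
        assert_eq!(sum_indexed(&data), sum_iter(&data));
    }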
Generally it provides stricter checks that are normally punted to a linter tool in the C/C++ world - and nobody has accused clang-tidy of being fast :P
But it _is_ about the sheer volume of stuff passed to LLVM, as you say, which comes from a couple of places, mostly related to monomorphization (generics), but also many calls to tiny inlined functions. Incidentally, this is also what makes many "modern" C++ projects slow to compile.
In my experience, similarly sized Rust and C++ projects seem to see similar compilation times. Sometimes C++ wins due to better parallelization (translation units in Rust are crates, not source files).
* Make no nested types - these slow compile times a lot
* Include no crates, or only ones that emphasize compile speed
C is still v. fast though. That's why I love it (and Rust).
As an example, say your function takes anything that can be turned into a String. You'd write a generic wrapper that does the ToString step, then change the existing function to just take a String. That way when your function is called, only the thin outer function is monomorphised, and the bulk of the work is a single implementation.
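A minimal sketch of that pattern (the function names here are made up for illustration):

    // The thin generic shell is all that gets monomorphized per caller...
    pub fn greet(name: impl ToString) {
        greet_inner(name.to_string())
    }

    // ...while the bulk of the work is compiled exactly once.
    fn greet_inner(name: String) {
        println!("hello, {name}");
    }

    fn main() {
        greet("world"); // &str
        greet(42);      // i32 -- only the one-line wrapper is duplicated
    }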
It's not _that_ commonly known, as it only becomes a problem for a library that becomes popular.
1. Use pointers, and do not include the header file for a class if you only need a pointer to that class. I think that's a pretty established pattern in C++. So if you want to declare a pointer to a class in your header, you just write `class SomeClass;` instead of `#include "SomeClass.hpp"`.
2. Do not use the STL or IOStreams. That project used only libc and the POSIX API. I know the author really hated the STL and considered its inclusion in the standard a huge mistake.
3. Avoid templates unless absolutely necessary. Templates force you to write your code in the header file, so it'll be parsed again for every include, compiled into multiple copies, etc. And even when you use templates, try to split the class into a generic and a non-generic part, so some code can be moved from the header to a source file. Generally prefer run-time polymorphism to generic compile-time polymorphism.
That's my 2000s development experience. Fortunately I've spent a good chunk of the 2010s and most of the 2020s using other languages.
The classic XKCD compilation comic exists for a reason.
Have you tried troubleshooting a compiler error in a unity build?
Yeah.
There are some other dependencies in there that are only used when building for testing/benchmarking, like serde, zstd, and criterion. You would need to be certain you're building only the library and not the test harness to be sure those aren't being built too.
The simple truth is a C compiler doesn’t need to do very much!
Maybe it's an MSVC thing - it does seem to have some multi-threading stuff. In any case, raddbg non-clean builds take longer than any of my Rust projects.
If you want to see the difference download unreal engine and compile the editor with and without unity builds enabled.
My experience has been the polar opposite of yours - similarly sized Rust projects are an order of magnitude slower than C++ ones. Could you share an example of a project to compare with?
https://codeload.github.com/EpicGamesExt/raddebugger/tar.gz/...
One of the primary features of Rust is the extensive compile-time checking. Monomorphization is also a complex operation, which is not exclusive to Rust.
C compile times should be very fast because it's a relatively low-level language.
On the grand scale of programming languages and their compile-time complexity, C code is closer to assembly language than modern languages like Rust or Swift.
Do you really believe that nobody over the course of Rust's lifetime has ever taken a look at C compilers and thought about if techniques they use could apply to the Rust compiler?
Unity builds are useful for C programs because they tend to reduce header processing overhead, whereas Rust does not have the preprocessor or header files at all.
They can also help reduce the number of object files (down to one from many) so that the linker has less work to do; in Rust this is already sort of done (though not down to literally one) because of what I mentioned above.
In general, the conventional advice is the exact opposite: breaking large Rust projects into more, smaller compilation units helps avoid "spurious" rebuilding, so smaller changes have less overall impact.
Basically, Rust's compile time issues lie elsewhere.
I'm not sure what Rust or Docker have to do with this basic issue; it just feels like young blood attempting 2020 solutions before exploring 1970 solutions.
The rust compiler is actually pretty fast for all the work it's doing. It's just an absolutely insane amount of additional work. You shouldn't expect it to compile as fast as C.
For the kind of work I do — writing servers, networking, and glue code — fast compilation is absolutely paramount. At the same time, I want some type safety, but not the overly obnoxious kind that won’t let me sloppily prototype. Also, the GC helps. So I’ll gladly pay the price. Not having to deal with sigil soup is another plus point.
I guess Google’s years of experience led to the conclusion that, for software development to scale, a simple type system, GC, and wicked-fast compilation speed are more important than raw runtime throughput and semantic correctness. Given the amount of networking and large-scale infrastructure software written in Go, I think they absolutely nailed it.
Of course there are places where GC can’t be tolerated or correctness matters more than development speed, but I don’t work in that arena and am quite happy with the tradeoffs that Go made.
Well, that point in the design space was already occupied by Java, which also has extremely fast builds. Go exists primarily because the designers wanted to make a new programming language, as far as I can tell. It has some nice implementation aspects, but it picked up its users mostly from the Python/Ruby/JS world rather than C/C++/Java, which was the original target market they had in mind (i.e. Google servers). Scripting-language users were in the market for a language that had a type system, but not one that was too advanced, and which kept the scripting "feel" of very fast turnaround times. But not Java, because that was old and unhip, and all the interesting intellectual space (writing libraries, giving conference talks) was already camped on.
And you left out classloader/classpath/JAR dependency hell, which was horrid circa late 90s/early 2000s...and I'm guessing was still a struggle when Go really started development. Especially at Google's scale.
Don't get me wrong, Java has come a long way and is a fine language, and the JVM is fantastic. But the Java of 2025 is not the same as the Java of the mid-to-late 2000s.
I'm a fan of Go, but I don't think it's the product of some awesome collective Google wisdom and experience. Had it been, I think they'd have come to the conclusion that statically eliminating null pointer exceptions was a worthwhile endeavor, just to mention one thing. Instead, I think it's just the product of some people at Google making a language the way they wanted to.
Nowadays the culture seems to have evolved a bit. I now go into high alert mode if I see a channel cross a function boundary or a goroutine that wasn't created via errgroup or similar.
People also seem to have chilled out about the "share by communicating" thing. It's usually better to just use a mutex and I think people recognise that now.
> Types either represent the data or not
This is definitely required, but it is only really the first step. Where types get really useful is when you need to change them later on. The key aspects here are how easily you can change them, and how much the language tooling can help.
It's just a project from a few very talented people who happen to draw their salary from Google's coffers.
"Unfortunately, this will rebuild everything from scratch whenever there's any change."
In this situation, with only one person as the builder, with no need for CI or CD or whatever, there's nothing wrong with building locally with all the local conveniences and just slurping the result into a docker container. Double-check any settings that may accidentally add paths if the paths have anything that would bother you. (In my case it would merely reveal that, yes, someone with my username built it and they have a "src" directory... you can tell how worried I am about both those tidbits by the fact I just posted them publicly.)
It's good for CI/CD in a professional setting to ensure that you can build a project from a hard drive, a magnetic needle, and a monkey trained to scratch a minimal kernel onto it, and bootstrap from there, but personal projects don't need that.
Even at work, I have a few projects where we had to build a Java uber-jar (all the dependencies bundled into one big jar), and when we need it containerized we just copy the jar in.
I honestly don't see much reason to do builds in the container unless there is some limitation in my CI/CD pipeline where I don't have access to the necessary build tools.
If you now copy your binary to the container and it implicitly expects there to be a shared library in /usr/lib or wherever, it could blow up at runtime because of a library version mismatch.
When developing locally, use `cargo test` in your cli. When deploying to the server, build the Docker image on CI. If it takes 5 minutes to build it, so be it.
> So instead, I'd like to switch to deploying my website with containers (be it Docker, Kubernetes, or otherwise), matching the vast majority of software deployed any time in the last decade.
Containers offer many benefits. To name some: process isolation, increased security, standardized logging and mature horizontal scalability.
The first stage compiles the code. This is good for isolation and reproducibility.
The second stage is a lightweight container that just runs the compiled binary.
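A rough sketch of that two-stage layout (image tags, paths, and the `mysite` binary name are placeholders; the cache mounts assume BuildKit):

    # --- stage 1: build ---
    FROM rust:1.78 AS builder
    WORKDIR /app
    COPY . .
    # Cache the registry and target dir across builds; copy the binary out
    # because cache mounts are not part of the resulting layer.
    RUN --mount=type=cache,target=/usr/local/cargo/registry \
        --mount=type=cache,target=/app/target \
        cargo build --release && cp target/release/mysite /usr/local/bin/mysite

    # --- stage 2: run ---
    FROM debian:bookworm-slim
    COPY --from=builder /usr/local/bin/mysite /usr/local/bin/mysite
    CMD ["mysite"]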
Why is the author being attacked (by multiple comments) for not making things simpler, when that was never claimed as the goal? They are modernizing it.
Containers are good practice for CI/CD anyway.
Don't do what you don't need to do.
They are already long past the point of "complicate things unnecessarily".
A simple Dockerfile pales in comparison.
Docker is a (the, in some areas) modern way to do it, but far from the only way.
You can enable word wrapping as a workaround (`:set wrap`). Lifehack: it can be hard to navigate such a file with just `h, j, k, l`, but you can use `gj`, `gk`, etc. With `g`, Vim works with visual (displayed) lines; without it, it works with logical lines split by LF/CRLF.
"Make k/j up/down work more naturally by going to the next displayed line vs
"going to the next logical line (for when word-wrapping is on):
noremap k gk
noremap j gj
noremap <up> gk
noremap <down> gj
"Same as above, but for arrow keys in insert mode:
inoremap <up> <Esc>gka
inoremap <down> <Esc>gja
Works great with Docker: upon a new compiler version or a major website update, rebuild the layer that holds the incremental-compilation cache; otherwise, just run from that snapshot, build the newest state of the website, and upload/deploy the resulting static binary. Just set things up so that mere code changes won't force a rebuild of the layer that caches/materializes the fresh clean build's incremental compilation cache.
andy@bark ~/d/andrewkelley.me (master)> zig build --watch -fincremental
Build Summary: 3/3 steps succeeded
install success
└─ run exe compile success 57ms MaxRSS:3M
└─ compile exe compile Debug native success 331ms
Build Summary: 3/3 steps succeeded
install success
└─ run exe compile success 56ms MaxRSS:3M
└─ compile exe compile Debug native success 17ms
watching 75 directories, 1 processes
Edit: apparently I am replying to the main Zig author? Language evangelism is by far the worst part of Rust and has likely stirred up more anti-Rust sentiment than it has "converted" people to Rust. If you truly care about your language, you should use whatever leverage you have to steer your community away from evangelism, not embrace it.
This comment would be a lot better if it engaged with the posted article, or really had any sort of insight beyond a single compile time metric. What do you want me to take away from your comment? Zig good and Rust bad?
> A brief note: 50 seconds is fine, actually!
50 seconds should actually not be considered fine.
Now we get all of this off-topic discussion about Zig. Which I guess is good for you Zig folk... But it's pretty off-putting for me.
whoisyc's comment is extremely on point. As the VP of community, I would really encourage thinking about what they said.
Having concrete proof that something can be done more efficiently is extremely important and, no, I haven't "demonstrated" anything, since my earlier comment would have had way less substance to it without the previous context.
The comment from Andrew is not just random compiler stats, but a datapoint showing a comparable example having dramatically different performance characteristics.
You can find in this very HN submission various comments that assume that Rust's compiler performance is impossible to improve because of reasons that actually are mostly (if not entirely) irrelevant. Case in point, see people talking about how Rust compilation must take longer because of the borrow checker (and other safety checks) and Steve pointing out that, no, actually that part of the compilation pipeline is very small.
> Now we get all of this off-topic discussion about Zig.
So no, I would argue the opposite: this discussion is very much on topic.
One major difference is the way each project considers compiler performance:
The Rust team has always cared to some degree about this. But, from my recollection of many RFCs, "how does this impact compiler performance" wasn't a first-class concern. And that also doesn't really speak to a lot of the features that were basically implemented before the RFC system existed. So while it's important, it's secondary to other things. And so while a bunch of hard-working people have put in a ton of work to improve performance, they also run up against these more fundamental limitations at the limit.
Andrew has pretty clearly made compiler performance a first-class concern, and that's affected language design decisions. Naturally this leads to a very performant compiler.
Do you have a list off the top of your head/do you know of a decent list? I've now read many "compiler slow" thoughtpieces by many people and I have yet to see someone point at a specific feature and say "this is just intrinsically harder".
I believe that it likely exists, but it would be good to know what feature to get mad at! Half joking, of course.
You can't have a language with 100% of the possible runtime perf, 100% of the possible compile speed and 100% of the possible programmer ease-of-use.
At best you can abuse the law of diminishing returns aka the 80-20 rule, but that's not easy to balance and you run the risk of creating a language that's okay at everything, but without any strong selling points, like the stellar runtime performance Rust is known for.
So a better way to think about it is: Given Rust's numerous benefits, is having subpar compilation time really that big of a deal?
That's not what I said. I said it's unlikely that fast compilation cannot be achieved while using LLVM which, I would argue, is proven by the existence of a fast compiler that uses LLVM.
this is a terrible look for your whole community
All Zig code is built in a single compilation unit and everything is compiled from scratch every time you change something, including all dependencies and all the parts of the stdlib that you use in your project.
So you've been comparing Zig rebuilds that do all the work every time with Rust rebuilds that cache all dependencies.
Once incremental is fully released you will see instant rebuilds.
And it can be considerably faster if you use something like subsecond[0] (which does incremental linking and hotpatches the running binary). It's not quite as fast as Zig, but it's close.
However, if that 331ms build above is a clean (uncached) build then that's a lot faster than a clean build of my website which takes ~12s.
What I'm more interested to know is what the runtime performance tradeoff is like now; one really has to assume that it's slower than LLVM-generated code, because otherwise that monumental achievement would seem to have been eclipsed in very short order, with much shorter compile times to boot.
Your first claim is unverifiable and the second one is just so, so wrong. Even big projects with very talented, well-paid C or C++ devs eventually end up with CVEs, ~80% of them memory-related. Humans are just not capable of 0% error rate in their code.
If Zig somehow got more popular than C/C++, we would still be stuck in the same CVE bog because of memory unsafety. No thank you.
Zig does a lot of things to prevent or detect memory safety related bugs. I personally haven't encountered a single one so far, while learning the language.
> ~80% of them memory-related.
I assume you're referencing the 70% that MS has published? I think they categorized null pointer exceptions as memory safety bugs as well among other things. Zig is strict about those, has error unions, and is strict and explicit around casting. It can also detect memory leaks and use after free among other things. It's a language that's very explicit about a lot of things, such as control flow, allocation strategies etc. And there's comptime, which is a very potent tool to guarantee all sorts of things that go well beyond memory safety.
I almost want to say that your comment presents a false dichotomy in terms of the safety concern, but I'm not an expert in either Rust or Zig. I think however it's a bit broad and unfair.
> Fil-C achieves this using a combination of concurrent garbage collection and invisible capabilities (each pointer in memory has a corresponding capability, not visible to the C address space)
With significant performance and memory overhead. That just isn't the same ballpark Rust is playing in, although it's hugely important if you want to bring performance-insensitive C code forward into a more secure execution environment.
> Fil-C is currently 1.5x slower than normal C in good cases, and about 4x slower in the worst cases.
with room for optimization still. Compatibility has improved massively too, due to big changes to how it works. The early versions were kind of toys, but if Filip's claims about the current version hold up, then this is starting to look like a very useful bit of kit. And he has the kind of background that means we should take this seriously. There are a LOT of use cases for taking stuff written in C and eliminating memory safety issues for only a 50% slowdown.
Until you guys write an actual formal specification, the compiler is the language.
The project is adopting Ferrocene for the spec.
Yes, the soundness hole itself is low impact and doesn't need to be prioritized but it undermines the binary "Zig is definitively not memory-safe, while safe Rust, is, by definition, memory-safe" argument that was made in response to me. Now you're dealing with qualitative / quantitative questions of practical impact, in which my original statement holds: "Zig is less memory safe than Rust, but more than C/C++. Neither Zig nor Rust is fundamentally memory safe."
You can of course declare that Safe Rust is by definition memory safe, but that doesn't make it any more true than declaring that Rust solves the halting problem or that it proves P=NP. RustBelt is proven sound. Rust by contrast, as documented by Ferrocene, is currently fundamentally unsound (though you won't hit the soundness issues in practice).
But by implementation and spec definitely not.
Rust is a large and robust language meant for serious systems programming. The scope of problems Rust addresses is large, and Rust seeks to be deployed to very large scale software problems.
These two are not the same and do not merit an apples to apples comparison.
edit: I made some changes to my phrasing. I described Zig as a "toy" language, which wasn't the right wording.
These languages are at different stages of maturity, have different levels of complexity, and have different customers. They shouldn't be measured against each other so superficially.
(EDIT: The parent has since edited this comment to contain more than just "zig bad rust good", but I still think the combative-ness and insulting tone at the time I made this comment isn't cool.)
Respectfully, the parent only offers up a Zig compile time metric. That's it. That's the entire comment.
This HN post about Rust is now being dominated by a cheap shot Zig one liner humblebrag from the lead author of Zig.
I think this thread needs a little more nuance.
Being frustrated by perceived bad behavior doesn't mean responding with more bad behavior is a good way to improve the discourse, if that's your goal here.
That's correct, but slinging cheap shots at each other is not how discussions on this site are supposed to be.
> I think this thread needs a little more nuance.
Yes, but your comment offers none.
I don't know enough Rust, but I find these aspects are seriously lacking in C++ on Linux, and it is one of the few areas where I think Windows is better for developers. Is Rust better?
Relevant: Subsecond: A runtime hotpatching engine for Rust hot-reloading - https://news.ycombinator.com/item?id=44369642 - June, 2024 (36 comments)
> Full IDE?
https://www.jetbrains.com/rust/ (newly free for non-commercial use)
> find these aspects are seriously lacking in C++ on Linux
https://www.jetbrains.com/clion/ (same, non-commercial)
I've only ever really used a debugger on embedded, where we used gdb. I know VS Code has a debugger that works, and I'm sure other IDEs do too.
> edit and continue
Hard to do in a pre-compiled language with no runtime, if you're asking about what I think you're asking about.
> Hot reload
Other folks gave you good links, but this stuff is pretty new, so I wouldn't claim that it's great, or even consistently good, yet.
> Full IDE
I'm not aware of Rust-specific IDEs, but many IDEs have good support for Rust. VS Code is the most popular amongst users, according to the annual survey. The Rust Project distributes an official LSP server, so you can use that with any editor that supports it.
A.k.a. "Remember the Vasa!" https://news.ycombinator.com/item?id=17172057
New features: yes
Talking to users and fixing actual problems: lolno, I CBF
Oops, changed one template in one header. And that impacts.... 98% of my code.
https://news.ycombinator.com/item?id=44234080
(Rust compiler performance; 287 points, 261 comments)
My 2c on this: I nearly ditched Rust for game development due to the compile times; in digging, it turned out that LLVM is very slow regardless of opt level. Indeed, it's what the Jai devs have been saying.
So Cranelift might be relevant for the OP. I will shill it endlessly; it took my game from 16 seconds to 4 seconds. Incredible work, Cranelift team.
https://github.com/TheBevyFlock/bevy_simple_subsecond_system
Performance matters.
But it’s also probable that 16 seconds was fairly early in development and it would get much worse from there.
The slowness is because everyone has to write code with generics and macros in Java Enterprise style in order to show they are smart with rust.
This is really sad to see but most libraries abuse codegen features really hard.
You have to write a lot of things manually if you want fast compilation in rust.
Compilation speed of code just doesn’t seem to be a priority in general with the community.
Refactoring seems to take about the same time too so no loss on that front. After all is said and done I'm just left with various logic bugs to fix which is par for the course (at least for me) and a sense of wondering if I actually did everything properly.
I suppose maybe two years from now we'll have people suggesting avoiding generics and tempering macro usage. These days most people have more or less heard the first-pass advice about not stressing over cloning and unwrapping (though expect is much better imo).
Something something shiny tool syndrome?
I think this post (accidentally?) conflates two different sources of slowness:
1) Building in Docker
2) The compiler being "slow"
They mention they could use bind mounts yet want a clean build environment - personally, I think that may be misguided. Rust with incremental builds is actually pretty fast, and the time you lose fighting Docker's caching would likely be made up in build times, since you'd generally build and deploy way more often than you'd fight the cache (and in that case you'd just delete the cache and build from scratch anyway).
So - for developers who build rust containers, I highly recommend either using cache mounts or building outside the container and adding just the binary to the image.
2) The compiler being slow - having experience with OCaml, Go, and Scala for comparison, the Rust compiler is slower than Go's and OCaml's, sure, but for non-interactive (i.e., non-REPL-like) workflows this tends not to matter in my experience. Realistically, using incremental builds in dev mode takes seconds; then, once the code is working, you push to CI, at which point you can often accept the (worst case?) scenario that it takes 20 minutes to build your container, since you're free to go do other things.
So while I appreciate the deep research and great explanations, I don't think the Rust compiler is actually slow, just slower than what people might be used to coming from TypeScript or Go, for example.
> To get your Rust program in a container, the typical approach you might find would be something like:
If you have `cargo build --target x86_64-unknown-linux-musl` in your build process, you do not need to do this anywhere in your Dockerfile. You should compile it and copy it into /sbin or something.
If you really want to build in a Docker image, I would suggest using `cargo --target-dir=/target ...`, then running with `docker run --mount type=bind,...`, and then copying out of the bind mount into /bin or wherever.
Sadly, the compile time is just as bad, but I think in this case the allocator is the biggest culprit, since disabling optimization will degrade run-time performance. The Rust team should maybe look into shipping their own bundled allocator; "native" allocators are highly unpredictable.
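For what it's worth, swapping in a bundled allocator at the application level is already just a few lines; a minimal sketch, assuming the `mimalloc` crate as a dependency:

    use mimalloc::MiMalloc;

    // Route every heap allocation in this binary through mimalloc instead
    // of the platform's "native" allocator.
    #[global_allocator]
    static GLOBAL: MiMalloc = MiMalloc;

    fn main() {
        let v: Vec<u64> = (0..1_000).collect();
        println!("{}", v.len());
    }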
What? That's absolutely ideal! It's incredibly simple. I wish deployment processes were always that simple! Docker is not going to make your deployment process simpler than that.
I did enjoy the deep dive into figuring out what was taking a long time when compiling.
If anyone out there is already fully committed to using only Alpine Linux, I'd recommend trying to create native packages at least once.
The local builds are fast, so why would you rebuild the Docker image for small changes?
Also, why does a personal page need so much Rust and so many dependencies? For a larger project with more complex stuff, you'd have a test suite that takes time too. Run both in parallel in your CI and call it a day.
> Build a new statically linked binary (with --target=x86_64-unknown-linux-musl)
> Copy it to my server
> Restart the website
Isn't it a basic C compiler feature that you can compile a file to an object file, and then link the objects into a single executable? Then you only recompile the file you changed.
Not sure what I'm missing.
The problem has been created by Docker which destroys all of the state. If this was C, you'd also end up losing all of the object files and rebuilding them every time.
Cargo is the standard build system for Rust projects, though some users use other ones. (And some build those on top of Cargo too.)
> * Borrowing — Rust’s defining feature. Its sophisticated pointer analysis spends compile-time to make run-time safe.
> * Monomorphization — Rust translates each generic instantiation into its own machine code, creating code bloat and increasing compile time.
> * Stack unwinding — stack unwinding after unrecoverable exceptions traverses the callstack backwards and runs cleanup code. It requires lots of compile-time book-keeping and code generation.
> * Build scripts — build scripts allow arbitrary code to be run at compile-time, and pull in their own dependencies that need to be compiled. Their unknown side-effects and unknown inputs and outputs limit assumptions tools can make about them, which e.g. limits caching opportunities.
> * Macros — macros require multiple passes to expand, expand to often surprising amounts of hidden code, and impose limitations on partial parsing. Procedural macros have negative impacts similar to build scripts.
> * LLVM backend — LLVM produces good machine code, but runs relatively slowly.
> * Relying too much on the LLVM optimizer — Rust is well-known for generating a large quantity of LLVM IR and letting LLVM optimize it away. This is exacerbated by duplication from monomorphization.
> * Split compiler/package manager — although it is normal for languages to have a package manager separate from the compiler, in Rust at least this results in both cargo and rustc having imperfect and redundant information about the overall compilation pipeline. As more parts of the pipeline are short-circuited for efficiency, more metadata needs to be transferred between instances of the compiler, mostly through the filesystem, which has overhead.
> * Per-compilation-unit code-generation — rustc generates machine code each time it compiles a crate, but it doesn’t need to — with most Rust projects being statically linked, the machine code isn’t needed until the final link step. There may be efficiencies to be achieved by completely separating analysis and code generation.
> * Single-threaded compiler — ideally, all CPUs are occupied for the entire compilation. This is not close to true with Rust today. And with the original compiler being single-threaded, the language is not as friendly to parallel compilation as it might be. There are efforts going into parallelizing the compiler, but it may never use all your cores.
> * Trait coherence — Rust’s traits have a property called “coherence”, which makes it impossible to define implementations that conflict with each other. Trait coherence imposes restrictions on where code is allowed to live. As such, it is difficult to decompose Rust abstractions into, small, easily-parallelizable compilation units.
> * Tests next to code — Rust encourages tests to reside in the same codebase as the code they are testing. With Rust’s compilation model, this requires compiling and linking that code twice, which is expensive, particularly for large crates.
[1]: https://www.pingcap.com/blog/rust-compilation-model-calamity...
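To make the build-scripts point above concrete, this is roughly the smallest possible build.rs; even a trivial one is a separate program that cargo must compile and run before your crate, and its effects are largely opaque to caching tools:

    // build.rs (hypothetical): arbitrary Rust, compiled and executed at
    // build time, before the crate itself.
    fn main() {
        // Tell cargo to re-run this script only when it changes; real
        // scripts also emit cfgs, generate code, or link native libraries.
        println!("cargo:rerun-if-changed=build.rs");
    }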
xkcd is always relevant: https://xkcd.com/303/
When I had to deal with this I would just open the newspaper and read an article in front of my boss.
Personally I don't care anymore, since I do hotpatching:
https://lib.rs/crates/subsecond
Zig is faster, but then again, Zig isn't memory safe, so personally I don't care. It's an impressive language - I love the syntax, the simplicity. But I don't trust myself to keep all the memory-related invariants in my head anymore, as I used to do many years ago. So Zig isn't for me. Simply not the target audience.
Here's a somewhat dated but still good overview of various approaches to generics in different languages including C++, Rust, Swift, and Zig and their tradeoffs: https://thume.ca/2019/07/14/a-tour-of-metaprogramming-models...
For all the C++ laughing in this thread, there's really only one thing that makes C++ slow - non-`extern` templates - and C++ gives you a lot more space to speed them up than Rust does.
As for templates, I can't think of anything about them that would speed things up substantially vs. Rust aside from extern template and manually managing your instantiations in separate .cpp files, since otherwise it's fundamentally the same problem: recompiling the same code over and over again because it's parametrized with different types every time.
Indeed, out of the box I would actually expect C++ to do worse, because a C++ header template has a potentially different environment in every translation unit in which that header is included, so without precompiled headers the compiler pretty much has to assume the worst...