I was aware of TinyGo, which compiles Go programs via LLVM (and can target Wasm, for example). It produces very small binaries (programs can even run in the browser) https://tinygo.org/
But this approach is very interesting. I wonder how compatible Goiaba is with Go, compared to TinyGo https://tinygo.org/docs/reference/lang-support/stdlib/
If the Go linker was twice as fast, that would be a minor convenience, sometimes.
I wouldn't expect much more than twice, maybe thrice at the very outside. And it'd be a long journey to get there, with bugs and such to work through. The blow-your-socks-off improvements come when you start with scripting languages. Go may be among the slower compiled languages, but it's still a compiled language with performance in the compiled-language class; there's not a factor of 10 or 20 sitting on the table.
But having another implementation could be useful on its own merits. I haven't heard much about gccgo lately, though the project [1] still seems to be getting commits. A highly compatible Go compiler that also did a lot of compile-time optimizations would be interesting; that's the sort of code that may be more fun and somewhat safer to write in Rust (though I'd say the real challenge with such code is making the optimizations themselves correct, rather than keeping the optimization process from crashing, and Rust's ability to help with that is marginal). The resulting compiler would be slower but might be able to create much faster executables.
What significant opportunities exist for performance with a Rust implementation that aren't possible in Go?
Compilation speed is not something I worry about in Go. Rust is a different story: I seldom bother with it nowadays, and compilation speed is one of the reasons.
Debug builds take a bit longer (a few seconds) on the desktop, while still staying below a minute on the laptop (remember, I'm talking about a 12-year-old Clevo laptop, not a recent MacBook). It's definitely no worse than TypeScript compilation or even JavaScript bundling, yet we pretty much never hear complaints about TypeScript compile times being too long.
Yes, it could be faster with a different compiler architecture, especially on clean release builds, and that would be nice, but it's a very minor annoyance (I don't do a full release build unless I've updated my compiler version, which only happens a few times a year).
The contrast between the discourse and my day-to-day experience on near obsolete hardware is very striking.
(Compilation artifacts eating up hundreds of GB of my hard drive are a much, much bigger nuisance in practice, yet nobody seems to talk about that here on HN.)
That's probably part of the difference. I do tens of these every single day.
GUI apps can be quite slow in debug mode, and as you say, the compilation artifacts build up quickly, which requires a cargo clean and then a fresh build.
Tens of clean builds? I'm very curious: why? (because obviously that puts you in a completely different situation compared to someone who can rely on incremental builds)
> GUI apps can be quite slow in debug mode
Full debug mode, definitely, but in that case I've always found that building the dependencies in release mode was enough, though YMMV. And that's what incremental rebuilds are for anyway.
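For reference, the "dependencies in release mode" trick mentioned above is a Cargo profile override; a minimal sketch (the `opt-level` value is a typical choice, adjust to taste):

```toml
# In Cargo.toml: keep your own crate unoptimized for fast incremental
# rebuilds, but compile all dependencies with optimizations.
[profile.dev.package."*"]
opt-level = 3
```

Dependencies are compiled once and cached, so the optimization cost is paid only on the first build.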
> and as you say, the compilation artifacts build up quickly, which requires a cargo clean and then a fresh build.
I've mostly experienced the PITA when working with multiple code bases over time or in parallel, but surely it doesn't happen every day, let alone multiple times per day, does it?
It's partly a privilege of being able to. I have a MacBook M1 Pro with 10 cores, so clean release builds are tolerable. The slowest project I work on regularly is Servo, and I can do a clean release build of that in 3-4 minutes. Most of the other projects I work on are more like 30s to 2m max.
It's also a disk space thing. Between working on multiple different projects (I have 200 projects in total in my "open source repos" directory; most of those I only interact with very occasionally, but 5-10 in a day wouldn't be particularly unusual for me) and switching between branches within projects, I can build up tens of GBs of data in the target dir within a few hours. And I don't have the largest SSD, so that can be a problem! So it's become habit to cargo clean reasonably regularly.
Finally, sometimes I am explicitly testing compile-time performance (which requires a clean build each time) or binary size (which involves additional cargo profiles, exacerbating the disk space issue).
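For the binary-size testing, the extra profiles mentioned above are declared in Cargo.toml; a hypothetical sketch (the profile name and exact settings are illustrative, not taken from any particular project):

```toml
# Custom profile for size measurements: inherits release settings
# but trades compile speed and runtime speed for a smaller binary.
[profile.release-small]
inherits = "release"
opt-level = "z"   # optimize for size rather than speed
lto = true        # link-time optimization usually shrinks the binary further
strip = true      # strip debug symbols from the final executable
```

Each extra profile gets its own subdirectory under `target/`, which is exactly why this exacerbates the disk space problem.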
https://github.com/pjmlp/gwc-rs
Maybe it is faster nowadays; I have not bothered since I did the RIR exercise.
Get the community editions of Delphi, FreePascal, or D and see what a fast build means.
Better yet, take the latest version of Turbo Pascal for MS-DOS, meaning 7, and try it out on FreeDOS.
Clean builds are slow indeed. But they also happen once every six weeks at most, if you switch to the latest compiler at every release.
> Get the community editions of Delphi, FreePascal, or D and see what a fast build means.
Honestly, who cares about the difference between 1s vs 100ms vs 10ms for a build though? Rust compilation isn't optimal by any means, and it wouldn't have been workable at all in the 90s, but computers are so fast today (even 13-year-old computers) that it rarely matters in practice IMHO.
As do many of us, since we know how fast builds can be with complex languages; e.g. add OCaml to the list of toolchains faster than the Rust compiler, while having an ML type system.
I definitely do. Not necessarily because of the 10ms vs 1s. But because of the later stage when it becomes 600ms vs 60s.
What later stage though? As I said, I've worked with big code bases on old hardware without issues.
I'm simply not convinced there exists a situation where an incremental rebuild of the crate you're working on takes 60s, at all, especially if you're using hardware from this decade.
The most recent Rust version ships with `lld`, so that shouldn't be the case anymore (AFAIK `lld` is a bit slower than the `mold` linker, but it's close; much closer than the system linker that was previously used by default).
(Not affiliated with the project. Just switched to it and never looked back.)
I'd be thrilled to have it build in 300ms.
(Using a macbook pro 2019)
Wait, aren't Go builds supposed to be fast?
There's no “big tutorial” though. There's a section about compilation time performance[1] but it's arguably not “big”, and the most impactful parts of it are about linking time, not compilation time. And half of the section is now obsolete since Rust uses `lld` by default.
[1] https://bevy.org/learn/quick-start/getting-started/setup/#en...
Edit: oh, I get it, you probably meant “where lld is set as default”, which is currently Linux-only.
Lld is supported on the other platforms too though, so you can just copy-paste the three lines of configuration given on the Bevy page and call it a day.
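For what it's worth, opting into `lld` on platforms where it isn't the default is a few lines in `.cargo/config.toml`; a sketch for x86-64 Linux (the target triple and the use of clang as the linker driver are assumptions here, check the Bevy setup page for your platform's exact incantation):

```toml
# .cargo/config.toml — use clang as the linker driver,
# and tell it to invoke lld as the actual linker.
[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=lld"]
```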
> Maybe nowadays it is faster, I have not bothered since I made the RIR exercise.
Took me 18 seconds on a M4 Pro.
Please stop spreading FUD about Rust. Compile times are much better now than they were and are constantly improving. Maybe it will never be as fast as one of those old languages you like that nobody uses anymore, but it's plenty usable.
I would gladly take one.
And the Roc team as well; maybe they would revisit their decision to move away from Rust to Zig due to compile times.
> I would gladly take one.
Do you have 10-year-old netbooks to give to everyone? Because that seems to be required to get slow compile times in Rust.
> And the Roc team as well, maybe they would revert back their decision on moving away from Rust to Zig due to compile times.
More cherry picked examples, you sure love those.
Like, what's the point of bringing this up? Do you want me to show you the thousands of software projects that do use Rust as a counterexample?
Obviously no programming language is one size fits all.
Unfortunately not all of us are in an economic situation that allows us to sponsor Trump gifts every couple of years.
How many of those thousands of software projects that do use Rust can be shown as counterexamples to slow compilation times on the hardware common people usually buy and keep around?
Especially in countries outside tier 1 of the world economy, which get computers from whatever the West no longer considers usable for its daily tasks.
Maybe they can afford to wait.
An M4 Pro isn't your average computer though.
But as I said, clean builds aren't the most common experience either.
It is also not normal to expect people to spend 2,000 euros to enjoy fast compilation times, when other programming languages have delivered faster compilation on cheaper budgets since MS-DOS ran on hardware that's lousy by today's standards.
You don't care, other people do, and whoever cares most drives adoption.
The production (clang backend) parallel build of the V language takes about 3.2 seconds. All on an M1 Mac. Even the Go compiler seems slow in comparison.
Is that really relevant, though? A compiler written in Rust is unlikely to be that much faster than a compiler written in Go. Most users might not notice a tiny difference in build times.
https://github.com/golang/go/issues/73608
Sounds like they want to maybe include https://github.com/usbarmory/tamago in the compiler.
Seems like the effort would be better spent improving Rust compilation speed. Unless you just wanted to create a compiler for learning or HN points, in which case, here ya go.
Unless writing compilers, linkers, assemblers, and a GC runtime is no longer considered systems programming.