Two comments:
- LLVM IR is actually remarkably stable these days. I was able to rebase Fil-C from LLVM 17 to 20 in a single day of work. In other projects I've maintained an LLVM pass that worked across multiple LLVM versions, and it was straightforward to do.
- LICM register pressure is a big issue, especially when the source isn't C or C++. I don't think the problem here is necessarily LICM; it might be that regalloc needs to be taught to rematerialize.
It knows how to rematerialize, and has for a long time, but the backend is generally more local/has less visibility than the optimizer. This causes it to struggle to consistently undo bad decisions LICM may have made.
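To make the tension concrete, here's a toy sketch (hypothetical code, not from any real workload): LICM hoists the invariant products out of the loop, and each hoisted value then stays live in a register for the entire loop.

    // Hypothetical example: a*a, b*b, and c*c are loop-invariant.
    // LICM hoists them, computing each once before the loop; the results
    // then occupy registers for the loop's whole duration. If the body is
    // already register-hungry, that can force spills. A rematerializing
    // allocator could instead recompute a cheap value at its use sites
    // rather than reload it from a stack slot.
    void scale(float *out, const float *in, int n, float a, float b, float c) {
        for (int i = 0; i < n; i++)
            out[i] = in[i] * (a * a) + in[i] * (b * b) + (c * c);
    }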
That's very cool, I didn't realize that.
> but the backend is generally more local/has less visibility than the optimizer
I don't really buy that. It's operating on SSA, so it has exactly the same view as LICM in practice (to my knowledge, LICM doesn't cross function boundaries).
LICM can't possibly know the cost of hoisting, while regalloc does have decent visibility into cost. That's why this feels like a regalloc remat problem to me.
LICM runs per-loop (via runOnLoop()), but it runs after function inlining. Inlining enlarges functions, possibly revealing more invariants.
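A sketch of that effect (hypothetical code): before inlining, the call hides the invariant; after inlining, LICM can hoist it.

    // Before inlining, LICM can't see inside f(), so f(k) in the loop
    // looks like it might vary per iteration.
    static int f(int k) { return k * 31 + 7; }

    int sum(const int *v, int n, int k) {
        int s = 0;
        for (int i = 0; i < n; i++)
            s += v[i] * f(k);  // after inlining, k*31+7 is visibly
        return s;              // loop-invariant and hoistable
    }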
In the context of this thread, your observation is not meaningful. The point is: LICM doesn't cross function boundaries and neither does regalloc, so LICM has no greater scope than regalloc.
I'm by no means an LLVM expert, but my takeaway from when I played with it a couple of years ago was that it is more like a union of different languages. Every tool and component in the LLVM universe had its own set of rules and requirements for the LLVM IR that it understands. The IR is more like a common vocabulary than a common language.
My bewilderment about LLVM IR not being stable between versions had given way to understanding that this freedom was necessary.
Do you think I misunderstood?
No. Here are two good ways to think about it:
1. It's the C programming language represented in SSA form, with some of the UB in the C spec given a strict definition (see the sketch after this list).
2. It's a low-level representation, suitable for lowering other languages to. Theoretically you could lower anything to it, since it's Turing-complete; practically, it's only suitable for lowering sufficiently statically-typed languages.
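As a sketch of view #1 (a hand-written approximation, not actual clang output):

    // C on the left; roughly the SSA-form IR it becomes, in the comments.
    int clamp_add(int a, int b) {
        int s = a + b;  //  %s   = add nsw i32 %a, %b   ; "nsw" is C's signed-
                        //                              ; overflow UB made strict
        if (s < 0)      //  %neg = icmp slt i32 %s, 0
            s = 0;
        return s;       //  %r   = select i1 %neg, i32 0, i32 %s
    }                   //  ret i32 %r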
> Every tool and component in the LLVM universe had its own set of rules and requirements for the LLVM IR that it understands.
Definitely not. All of those tools have a shared understanding of what happens when LLVM IR executes on a particular target and data layout.
The only flexibility is that you're allowed to alter some of the semantics on a per-target and per-datalayout basis. Targets have limited power to change semantics (for example, they cannot change what "add" means). Data layout is its own IR, and that IR has its own semantics - and everything that deals with LLVM IR has to deal with the data layout "IR" and has to understand it the same way.
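(For a concrete example: if memory serves, the x86-64 Linux data layout string looks something like "e-m:e-i64:64-f80:128-n8:16:32:64-S128" - "e" for little-endian, "i64:64" for 64-bit alignment of i64, "n8:16:32:64" for the native integer widths, "S128" for 128-bit stack alignment - and every pass that touches the module interprets those fields the same way.)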
> My bewilderment about LLVM IR not being stable between versions had given way to understanding that this freedom was necessary.
Not parsing this statement very well, but bottom line: LLVM IR is remarkably stable because of Hyrum's law within the LLVM project's repository. There's a TON of code in LLVM that deals with LLVM IR. So, it's super hard to change even the smallest things about how LLVM IR works or what it means, because any such change would surely break at least one of the many things in the LLVM project's repo.
This is becoming steadily less true over time, as LLVM IR is growing somewhat more divorced from C/C++, but that's probably a good way to start thinking about it if you're comfortable with C's corner case semantics.
(In terms of frontends, I've seen "Rust needs/wants this" as much as Clang these days, and Flang and Julia are also pretty relevant for some things.)
There's currently a working group in LLVM on building better, LLVM-based semantics, and the current topic du jour of that WG is a byte type proposal.
First of all, you're right. I'm going to reply with amusing pedantry, but I'm not really disagreeing.
I feel like in some ways LLVM is becoming more like C-in-SSA...
> and the current topic du jour of that WG is a byte type proposal.
That's a case of becoming more like C! C has pointer provenance and the idea that byte copies can copy "more" than just the 8 bits, somehow.
(The C provenance proposal may be in a state where it's not officially part of the spec - I'm not sure exactly - but it's effectively part of the language in the sense that a lot of us already treat it as such.)
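The classic problem case is a user-written byte copy. A sketch (hypothetical code):

    #include <stddef.h>

    // If src points at stored pointer values, each copied byte must carry
    // the pointer's provenance, not just its 8 bits; otherwise the copied
    // pointer isn't usable under a strict provenance model. In today's IR
    // this loop is a plain i8 load/store pair, which has nowhere to record
    // provenance - hence the byte type proposal.
    void *my_memcpy(void *dst, const void *src, size_t n) {
        unsigned char *d = (unsigned char *)dst;
        const unsigned char *s = (const unsigned char *)src;
        for (size_t i = 0; i < n; i++)
            d[i] = s[i];
        return dst;
    }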
I'd have to double-check, but my recollection is that the current TS doesn't actually require that you be able to implement a user-written memcpy; rather, the authors threw up their hands and said "we hope compilers support this, but we can't specify how." In that sense, byte type is going beyond what C does.
That's my understanding too
> I'd have to double-check, but my recollection is that the current TS doesn't actually require that you be able to implement a user-written memcpy; rather, the authors threw up their hands and said "we hope compilers support this, but we can't specify how."
That's also my understanding
> In that sense, byte type is going beyond what C does.
I disagree, but only because I probably define "C" differently than you.
"C", to me, isn't what the spec describes. If you define "C" as what the spec describes, then almost zero C programs are "C". (Source: in the process of making Fil-C, I experimented with various points on the spectrum here and have high confidence that to compile any real C program you need to go far beyond what the spec promises.)
To me, when we say "C", we are really talking about:
- What real C programs expect to happen.
- What real C compilers (like LLVM) make happen.
In that sense, the byte type is a case of LLVM hardening the guarantee that it already makes to real C programs.
So, LLVM having a byte type is a necessary component of LLVM supporting C-as-everyone-practically-uses-it.
Also, I would guess that we wouldn't be talking about the byte type if it weren't for C. Type-safe languages with well-defined semantics have no need to let the user write a byte-copy loop that does the right thing when it copies data of arbitrary type.
(Please correct me if I'm wrong, this is fun)
What would be neat is to expose all the right knobs and levers so that frontend writers can benchmark a number of possibilities and choose the right values.
I can understand this is easier said than done, of course.
The reason to couple it to regalloc is that you only want to remat if it saves you a spill.
Admittedly, this comes up more often in non-CPU backends.
Can you give an example?
Also loads, stores, and function calls, but that's a bit finicky to tune. We usually tell people to update their programs when this is needed.
While this is literally "rematerialization", it's such a different case of remat from what I'm talking about that it should be a different phase. It's optimizing for a different goal.
Also feels very GPU specific. So I'd imagine this being a pass you only add to the pipeline if you know you're targeting a GPU.
> Also loads, stores, and function calls, but that's a bit finicky to tune. We usually tell people to update their programs when this is needed.
This also feels like it's gotta be GPU specific.
No chance that doing this on a CPU would be a speed-up unless it saved you reg pressure.
I love LLVM though. clang-tidy, ASAN, UBSAN, LSAN, MSAN, and TSAN are AMAZING. If you are coding C and C++ and NOT using clang-tidy, you are doing it wrong.
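A minimal sketch of why (assuming a clang with compiler-rt installed; build with clang++ -g -fsanitize=address oob.cpp):

    #include <cstdlib>

    // An off-by-one heap read. ASAN reports a heap-buffer-overflow here at
    // runtime, with stack traces for both the bad access and the allocation.
    int main() {
        int *a = static_cast<int *>(std::malloc(4 * sizeof(int)));
        int x = a[4];  // one element past the end
        std::free(a);
        return x;
    }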
My biggest problem with LLVM right now is that -fbounds-safety is only available in Xcode/AppleClang and not LLVM Clang. MSAN and LSAN are only available in LLVM and not Xcode/AppleClang. Also, Xcode doesn't ship clang-tidy, clang-format, or llvm-symbolizer. It's kind of a mess on macOS right now. I basically rolled my own Darwin LLVM for LSAN and clang-tidy support.
The situation on Linux is even weirder. RHEL doesn't ship libcxx, but Fedora does. No distro has libcxx instrumented for MSAN at the moment, which means rolling your own.
What would be amazing is if some distro would just ship native LLVM with all the things working out of the box. Fedora is really close right now, but I still have to build compiler-rt manually for MSAN support.
Build time wasn’t great, but it was tolerable, so long as you reduced link parallelism to squeeze inside the memory constraints.
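(If I recall correctly, the relevant CMake knob is -DLLVM_PARALLEL_LINK_JOBS=1 when building with Ninja, and linking with lld via -DLLVM_USE_LINKER=lld also cuts peak link memory considerably.)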
Is it still possible to compile LLVM on such a machine, or is 8 GB no longer workable at all?
If you get “credit” for contributing when you review, maybe people (and even employers, though that is perhaps less likely) would find doing reviews to be more valuable.
Not sure what that looks like; maybe whatever shows up in GitHub is already enough.
If you're looking for stability in practice: the ORC LLJIT API is your best bet at the moment (or sticking to MCJIT until it's removed).
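For anyone who hasn't tried it, a minimal LLJIT sketch, modeled on the in-tree HowToUseLLJIT example (details like lookup()'s return type have shifted across releases, so treat this as approximate for your LLVM version):

    #include "llvm/ExecutionEngine/Orc/LLJIT.h"
    #include "llvm/IR/IRBuilder.h"
    #include "llvm/IR/Module.h"
    #include "llvm/Support/Error.h"
    #include "llvm/Support/InitLLVM.h"
    #include "llvm/Support/TargetSelect.h"

    using namespace llvm;
    using namespace llvm::orc;

    static ExitOnError ExitOnErr;

    // Build a module equivalent to: int add1(int x) { return x + 1; }
    static ThreadSafeModule createDemoModule() {
        auto Ctx = std::make_unique<LLVMContext>();
        auto M = std::make_unique<Module>("demo", *Ctx);
        auto *F = Function::Create(
            FunctionType::get(Type::getInt32Ty(*Ctx),
                              {Type::getInt32Ty(*Ctx)}, false),
            Function::ExternalLinkage, "add1", M.get());
        IRBuilder<> B(BasicBlock::Create(*Ctx, "entry", F));
        B.CreateRet(B.CreateAdd(F->getArg(0), B.getInt32(1)));
        return ThreadSafeModule(std::move(M), std::move(Ctx));
    }

    int main(int argc, char *argv[]) {
        InitLLVM X(argc, argv);
        InitializeNativeTarget();
        InitializeNativeTargetAsmPrinter();

        auto J = ExitOnErr(LLJITBuilder().create());   // JIT for the host
        ExitOnErr(J->addIRModule(createDemoModule()));
        auto Addr = ExitOnErr(J->lookup("add1"));      // compiles on demand
        int (*Add1)(int) = Addr.toPtr<int(int)>();
        return Add1(41) == 42 ? 0 : 1;
    }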
I remember that part of the selling point of LLVM in its early days was compilation time being so much faster than GCC's.
LLVM started about 15 years after GCC, and LLVM is already 23 years old. I wonder if something new will pop up again.
Discussion: https://news.ycombinator.com/item?id=45072481
There are also codegen projects that don't use LLVM IR and are faster, like Cranelift: https://github.com/bytecodealliance/wasmtime/tree/main/crane...
This certainly varies across different parts of llvm-project. In flang, there's very much a "long tail": according to "git blame", 80% of its 654K lines are attributed to the 17 contributors (out of 355 total) who each account for 1% or more of them.
LLVM of course has plenty of contributors who only ever landed one change, but the thing that matters for project health is that the group of "top contributors" is fairly large.
(And yes, this does differ by subproject, e.g. lld is an example of a subproject where one contributor is more active than everyone else combined.)
We miss you!
Part of the reason I'm not ready to go all in on Rust is that I'm not willing to externalize that much complexity in the programs I make.
Optimizing compilers are basically impossible to audit, but there are tools like alive2 for checking them.
That would require the LLVM devs to be stupid and/or evil. As that is not the case, your supposition is not true either. They might be willing to accept churn in the service of other goals, but they don't have churn as a goal unto itself.
For starters the tooling would be much slower if it required LLVM.
I think writing a compiler targeting machine code from scratch only really makes sense if you have Google's resources, as Go did. That includes both the money and the talent pool of employees that can be assigned to work on the task full-time; not everyone has Ken Thompson lying around on payroll. To do better than LLVM is a herculean feat, and most languages will never be mainstream enough to justify the undertaking; indeed I think an undertaking of that scale would prevent a language from ever getting far enough along to attract users/contributors if it doesn't already have powerful backing from day 0.
Also, compiled languages want accurate and rich debug info. All of that information would be lost.