I got a good chuckle out of my coworkers, but as I've gotten more experienced, I've come to see a grain of truth in it. The main non-joking objection was that deployments are slow, so reconfiguring via runtime flags was a good amount faster when we needed it. But that made me think: if deployments and builds were super fast, would we still need configuration in the conventional YAML-and-friends sense? It could save a lot of foot guns from misconfiguration, and practically speaking, we essentially never restarted the server to reconfigure it anyway. From the article,
> Unless you have really good reason to, you probably should not do this for normal options - it makes your library much more rigid by requiring that the user of your library know the options at comptime.
I dunno, actually that sounds really great to me.
Why parse markup and generate objects on the fly when there is exactly one UI that will ever be burned into the CNC machine's firmware? Why even bother with an object library that instantiates components at runtime when you know upfront exactly which components will be instantiated and all of the interactions that can occur among them?
At the time, I filed the idea under "C++ wizard has too much fun with metaprogramming," but he was probably on to something.
Another way to think about the idea is "let's invent a new programming language that allows us to express a single UI, and the output of the compiler will be a native program that IS an optimized implementation of that UI."
A small aside about a personal theory of language design: every major new language feature, templates or whatever, gets "tacked on" without really redoing the fundamentals of how the language works. As in: you could remove it and things would still be fine. For example, you can use C without the preprocessor; it's just a bit clunky. Then, later (sometimes much later), a language comes along that really leans into the feature, to the point that it can no longer be removed. It becomes fundamental.
The ultimate metaprogramming capability would be to have the compiler phases exposed to the programmer. That is, the compiler would no longer be a binary black box into which text is fed and binary pops out. Instead, the compiler and its phases would be "just" the standard library.
Rust started down this path, but the designers seemed to shy away from fully committing. Zig gets closer to this idealised vision, but still isn't 100% there.
Ideally, one should be able to control every part of code generation with code, including C# style "source generators", Zig-style comptime, custom optimisation passes or extensions, custom code-gen, etc...
In a system like this, a single GUI framework could be used to either statically or dynamically generate UI elements, with templating code being run either at comptime or runtime depending on attributes similar to passing a value by copy or by reference.
Look at it this way: We're perfectly happy writing code to generate code. We do it all the time! As long as it is HTML or JavaScript and sent over the wire...
The issue with this approach is that the more you can control, the less the compiler can assume. This in turn means that it can check less for you, and tools become harder to write, because code analysis often relies heavily on those assumptions. To give just one example, Zig doesn't (and with its current approach can't) have declaration-checked generics.
> In a system like this, a single GUI framework could be used to either statically or dynamically generate UI elements, with templating code being run either at comptime or runtime
I feel like this is overly optimistic. Some things will always be runtime-only, even some very basic ones like allocating heap memory. You can likely sidestep this issue and still precompute a lot at compile time, but then chances are this way of computing will be less efficient at runtime. In the end you'll likely still end up with different code for comptime and runtime just because of specific optimizations.
That's absolutely true, but there's a workaround, albeit a complicated one. The compiler internals need the same kind of constraints or traits that abstract code such as language interfaces or template parameters can have. These can then be used to constrain the internals in a way that then would allow assumptions to be safely "plumbed through" the various layers. The (big!) challenge here is that these abstractions haven't been well-developed in the industry. Certainly nowhere near as well as the typical runtime "type theory" as seen in modern languages.
> some very basic ones like allocating heap memory.
Well... this is sort-of my point! For example, what's the fundamental difference between allocating memory in some heap[1] structure at runtime and a compiler allocating members in a struct/record/class for optimal packing?
IMHO, not much.
E.g.: Watch this talk by Andrei Alexandrescu titled "std::allocator Is to Allocation what std::vector Is to Vexation": https://www.youtube.com/watch?v=LIb3L4vKZ7U
It really opened my eyes to how one could very elegantly make a very complex and high-performance heap allocator from trivial parts composed at compile-time.
There's no reason that a nearly identical abstraction couldn't also be used to efficiently "bin pack" variables into a struct. E.g.: accounting for alignment, collecting like-sized items into contiguous sections, extra padding for "lock" objects to prevent cache issues, etc...
This is my dream: that a struct might be treated as a sort-of comptime heap without deletions. Or even with deletions, allowing fun stuff like type algebra that supports division, i.e. the SELECT or PROJECT-AWAY operators!
There was some experimental work done in the Jai language to support this kind of thing, allowing layouts such as structure-of-arrays or arrays-of-structures to be defined in code but natively implemented by the compiler as-if it was a built-in capability.
[1] Not really a traditional heap in most language runtimes these days. Typically a combination of various different allocators specialised for small, medium, and large allocations.
PS: The biggest issue I'm aware of with ideas like mine is that tab-complete and IDE assistance becomes very difficult to implement. On the other hand, keeping the full compiler running and directly implementing the LSP can help mitigate this... to a degree. Unsolved problems definitely remain!
There are many fundamental differences! For example at compile time you can't know what the address of some data will be at runtime due to stuff like ASLR, nor can you know the actual `malloc` implementation that will be used since that might be dynamically loaded.
Of course this does not prevent you from trying to fake heap allocations at comptime, but this will have various issues or limitations depending on how you fake them.
Conceptually, an abstract allocator in the style proposed by Andrei can be set up to just return an offset. This offset can then later be interpreted as "from the start of the struct" or "from the start of memory identified by the pointer to 0".
Fundamentally it's the same thing: take a contiguous (or not!) space of bytes and "carve it up" using some simple algorithm. Then, compose the simple algorithms to make complicated allocators. How you use this later is up to you.
I guarantee you that there's code just like this in the guts of any compiler that can reorder struct/record members, such as the Rust compiler. It might be spaghetti code [1], but it could instead look just like Andrei's beautiful component-based code!
I think it ought to be possible for developers to plug in something as complex as a CP-SAT solver if they want to. It might squeeze out 5% performance, which could be worth millions at the scale of a FAANG!
[1] Eww: https://doc.rust-lang.org/beta/nightly-rustc/src/rustc_middl...
The issue is: how do you create such an offset at comptime, such that you can interpret it as "from the start of memory identified by the pointer to 0" at runtime? At runtime you'll most often want the latter (and surely you don't want to branch on whether it's one or the other), but creating such offsets is the part that's very tricky, if not impossible, to do properly at comptime.
> I guarantee you that there's code just like this in the guts of any compiler that can reorder struct/record members
How is that relevant though? Ok, the code might look similar, but that's not the problematic part.
So, a lot of runtime stuff is papering over the fact that your submission to Apple takes too long to resolve.
There are a lot of pitfalls with this approach, but for a subset of problems it is very good.
For the latter, it is clearer: the developer develops the code, the user changes the configuration.
Of course, bigger changes sometimes need some kind of feature flag, which is a configuration option too, but at least the stable state of the code is simpler and not a nest of code + config that never really changes.