It's always surprising to me how absurdly efficient "highly specialized VM/instruction interpreters" are.
For example, two independent research projects into better (faster, more compact) serialization in Rust both ended up with something like a VM/interpreter for serialization instructions, leading to both higher performance and more compact code size while still being capable of supporting feature sets similar to serde's(1).
(In general, monomorphisation and double dispatch (e.g. serde) can bring you very far, but as always the best approach is not either extreme. Neither always monomorphising nor always dynamically dispatching, but a balance that takes advantage of the strengths of both. And specialized mini VMs are, in a certain way, an extra flexible form of dynamic dispatch.)
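To make the idea concrete, here is a minimal sketch (all names hypothetical, not taken from either research project) of what such a serialization "instruction interpreter" can look like: instead of monomorphised serializer code per type, each type compiles down to a small table of opcodes that one shared interpreter loop executes.

```rust
// Hypothetical sketch of a tiny "serialization VM". Each type is described
// by a list of instructions; one shared loop walks a value's raw bytes and
// appends a serialized form to the output buffer.
#[derive(Clone, Copy)]
enum Op {
    // Serialize 4 bytes at the given field offset (e.g. a little-endian u32).
    U32At(usize),
    // Serialize 1 byte at the given field offset.
    U8At(usize),
}

fn serialize(program: &[Op], data: &[u8], out: &mut Vec<u8>) {
    for op in program {
        match *op {
            Op::U32At(off) => out.extend_from_slice(&data[off..off + 4]),
            Op::U8At(off) => out.push(data[off]),
        }
    }
}

fn main() {
    // "Program" for a struct { a: u32, b: u8 } laid out at offsets 0 and 4.
    let program = [Op::U32At(0), Op::U8At(4)];
    let data = [1u8, 0, 0, 0, 7, 0, 0, 0]; // a = 1, b = 7, then padding
    let mut out = Vec::new();
    serialize(&program, &data, &mut out);
    assert_eq!(out, vec![1, 0, 0, 0, 7]);
    println!("{:?}", out);
}
```

This is why code size can shrink on larger projects: you pay once for the shared interpreter loop (the "fixed overhead"), and each additional type only costs a compact instruction table rather than a full monomorphised serializer function.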
---
(1): More compact code size on normal to large projects, though not necessarily on micro projects, as the "fixed overhead" is often slightly larger while the per-type/per-protocol serialization overhead can be smaller.
(1b): They were experimental research projects; I'm not sure if any of them were published to GitHub. None are suited for production use or similar.
You're adding a layer of abstraction and indirection, so how is it possible that a more indirect solution can have better performance?
This seems counterintuitive, so I googled it. Apparently, it boils down to instruction cache efficiency and branch prediction, largely. The best content I could find was this post, as well as some scattered comments from Mike Pall of LuaJIT fame:
https://sillycross.github.io/2022/11/22/2022-11-22/
Interestingly, this is also discussed on a similar blogpost about using Clang's recent-ish [[musttail]] tailcall attribute to improve C++ JSON parsing performance:
https://blog.reverberate.org/2021/04/21/musttail-efficient-i...
It is funny, but (as I've already mentioned[1] a few months ago) for serialization(-adjacent) formats in particular, the advantage of bytecode interpreters has been rediscovered again and again.
The earliest example I know about is Microsoft’s MIDL, which started off generating C code for NDR un/marshalling but very soon (ca. 1995) switched to bytecode programs (which Microsoft for some reason called “format strings”; these days there’s also typelib marshalling and WinRT metadata-driven marshalling, the latter completely undocumented, but both data-driven). Bellard’s nonfree ffasn1 also (seemingly) uses bytecode, unlike the main FOSS implementations of ASN.1. Protocol Buffers started off with codegen (burying Google users in de/serialization code), but UPB uses “table-driven”, i.e. bytecode, parsing[2].
The most interesting chapter in this long history is in my opinion Swift’s bytecode-based value witnesses[3,4]. Swift (uniquely) has support for ABI compatibility with polymorphic value types, so e.g. you can have a field in the middle of your struct whose size and alignment only become known at dynamic linking time. It does this in pretty much the way you expect[5] (and the same way IBM’s SOM did inheritance across ABI boundaries decades ago): each type has a vtable (“value witness”) full of compiler-generated methods like size, alignment, copy, move, etc., which for polymorphic type instances will call the type arguments’ witness methods and compute on the results. Anyways, here too the story is that they started with native codegen, got buried under the generated code, and switched to bytecode instead. (I wonder—are they going to PGO and JIT next, like hyperpb[6] for Protobuf? Also, bytecode-based serde when?)
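As a rough analogy for the value-witness idea (hypothetical Rust, not Swift's actual representation): a per-type record of function pointers for layout operations, which generic code consults at runtime instead of baking the layout in at compile time.

```rust
// Hypothetical sketch of a "value witness table": a per-type vtable of
// layout operations, consulted at runtime by code that does not know the
// concrete type's size or alignment at compile time.
struct ValueWitness {
    size: fn() -> usize,
    align: fn() -> usize,
}

fn witness_for_u64() -> ValueWitness {
    ValueWitness {
        size: || std::mem::size_of::<u64>(),
        align: || std::mem::align_of::<u64>(),
    }
}

// A container with a field whose type is known only at dynamic-linking
// time: its total size is computed from the witness, not by the compiler.
fn container_size(field_witness: &ValueWitness) -> usize {
    let header = 8; // hypothetical fixed header before the opaque field
    header + (field_witness.size)()
}

fn main() {
    let w = witness_for_u64();
    assert_eq!(container_size(&w), 16);
}
```

For polymorphic types, the witness methods themselves call the type arguments' witnesses and compute on the results, which is exactly the part Swift moved from generated native code to bytecode.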
[1] https://news.ycombinator.com/item?id=44665671, I’m too lazy to copy over the links so refer there for the missing references.
[2] https://news.ycombinator.com/item?id=44664592 and parent’s second link.
[3] https://forums.swift.org/t/sr-14273-byte-code-based-value-wi...
[4] Rexin, “Compact value witnesses in Swift”, 2023 LLVM Dev. Mtg., https://www.youtube.com/watch?v=hjgDwdGJIhI
[5] Pestov, McCall, “Implementing Swift generics”, 2017 LLVM Dev. Mtg., https://www.youtube.com/watch?v=ctS8FzqcRug
Tail recursion opens things up for people to write really neat looping facilities using macros.
Rust has been really good at providing ergonomic support for features we're too used to seeing provided as "Experts only" features with correspondingly poor UX.
My fear is that, by adding yet another keyword, it might get lost in the sea of keywords a Rust developer needs to remember. And if recursion is not something you do often, you might not reach for it when it's actually needed. Having this signal in the function signature means people would be exposed to it just by reading the documentation, and would eventually learn that it exists and (hopefully) how to wield it.
What do you folks think?
`become blah(foo, bar)` is the same thing as `blah(foo, bar)`, except that we, the caller, are promising that we have nothing further to do, so when blah returns it can return directly to our caller.
If somebody else calls blah they don't want that behaviour, they might have lots more to do and if they were skipped over that's a disaster.
In some languages it's very obvious when you're going to get TCO anyway, but Rust has what C++ calls RAII: when a function ends, all the local variables get destroyed, and this may be non-trivial work. Presumably destroying a local i32 is trivial, and so is a [u8; 32], but destroying a local String isn't, let alone a HashMap, and who knows how complicated it is to destroy a File or a DBQuery or a Foo...
So in a sense, "all" become does is pull that destruction a little earlier, so it happens before the call, leaving nothing to do afterwards. We weren't using that String any more anyway, so let's just destroy it first. And the HashMap? Sure. And oh... no, actually, if we destroy that Foo before calling blah, which needs the Foo, that messes things up. Rust's borrowck comes in clutch here to help us avoid a terrible mistake: our code was nonsense, so it doesn't build.
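A small sketch of that ordering (stable Rust with an explicit drop, since `become` is nightly-only; with `become`, the compiler enforces this shape for you): locals with non-trivial destructors are dropped before the tail call, and the borrow checker rejects dropping anything the callee still needs.

```rust
// Sketch of drop-before-tail-call ordering. With the nightly `become`
// keyword, the recursive call below could reuse the current stack frame,
// because nothing remains to run after it.
fn countdown(n: u64, scratch: String) -> u64 {
    if n == 0 {
        return scratch.len() as u64;
    }
    let next = format!("{scratch}!");
    // `scratch` is no longer needed: destroy it *before* the call, so the
    // frame has nothing left to do afterwards (the shape TCO requires).
    // Dropping `next` here instead would not compile: the call needs it.
    drop(scratch);
    countdown(n - 1, next) // with `become`, this reuses the frame
}

fn main() {
    // Starting from "x" (length 1), each of the 3 steps appends "!",
    // so the final string has length 4.
    assert_eq!(countdown(3, String::from("x")), 4);
}
```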
Edited: Improve explanation
> Last week, I wrote a tail-call interpreter using the become keyword, which was recently added to nightly Rust (seven months ago is recent, right?).
The "become" keyword allows us to express our meaning: we want a tail call. And, of course, the compiler will optimize that when it can be a tail call, but now it is also authorized to say "Sorry Dave, that's not possible" rather than grow the stack. Most often you wrote something silly: "Oh, the debug logging happens after the call, that's never going to work, I'll shuffle things around."
Not really; a trampoline could emulate them effectively, so the stack won't keep growing, at the cost of a function call for every opcode dispatch. Tail calls just optimize out this dispatch loop (or tail call back into the trampoline, however you want to set it up).
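A minimal trampoline sketch in Rust (hypothetical names): each step returns either a final value or a boxed closure for the next call, and a flat loop drives the whole computation so the native stack never grows.

```rust
// Minimal trampoline: instead of calling the next step directly (which
// would grow the stack without TCO), each step *returns* the next step,
// and a flat loop performs the dispatch.
enum Step {
    Done(u64),
    More(Box<dyn FnOnce() -> Step>),
}

fn trampoline(mut step: Step) -> u64 {
    loop {
        match step {
            Step::Done(v) => return v,
            Step::More(f) => step = f(), // one extra call per dispatch
        }
    }
}

// A deeply "recursive" sum that runs in constant native stack depth.
fn sum_to(n: u64, acc: u64) -> Step {
    if n == 0 {
        Step::Done(acc)
    } else {
        Step::More(Box::new(move || sum_to(n - 1, acc + n)))
    }
}

fn main() {
    // A million levels of logical recursion, flat native stack.
    assert_eq!(trampoline(sum_to(1_000_000, 0)), 500_000_500_000);
}
```

The cost is exactly what the comment describes: one closure allocation and one indirect call per "tail call", which guaranteed tail calls (`become`) eliminate.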
> Tail calls can be implemented without adding a new stack frame to the call stack. Most of the frame of the current procedure is no longer needed, and can be replaced by the frame of the tail call, modified as appropriate (similar to overlay for processes, but for function calls). The program can then jump to the called subroutine. Producing such code instead of a standard call sequence is called tail-call elimination or tail-call optimization. (https://en.wikipedia.org/wiki/Tail_call)