200 points | by mfiguiere 6 days ago | 11 comments
  • Voultapher5 days ago
    Love to see negative results published, so so important.

    Please let's all move towards a research procedure that enforces submission of the hypothesis before any research commences, and mandates publication regardless of results.

    • adrian_b4 days ago
      When I was a young student, hearing all the marketing talk from various companies about the valuable intellectual property supposedly incorporated in their products and the valuable trade secrets supposedly guarded from their competitors, I thought that when I started working at such a company I would learn a lot of useful things, far beyond what I was learning as a student.

      However, after working at many companies, big and small, I was disappointed to find out that my expectations had been naive. In no such company have I seen any useful secret. There was only one case where I thought at first that I had learned something not widely known, but then, through a search of the older literature, I found that fact published in an old research paper.

      The only really useful information that I found at every such workplace in a successful company was the know-how about a long list of engineering solutions that I might think of when confronted with a new problem, but which the experienced staff knew to be dead ends: they had been tried, and for various reasons were not acceptable solutions.

      The know-how about such solutions that do not work, and especially why they do not work, was much more valuable than what was officially considered intellectual property, e.g. patents or copyrights.

      • Voultapher2 days ago
        Thanks for sharing this fascinating insight.

        I'd expect that even if we moved towards preregistration widely, this situation would remain to some degree, because universities lack the resources, pressure and time needed to turn a novel idea into a commercial product. As seen with battery research, being good at one thing is not enough; the solution needs to be bad at nearly nothing to compete with li-ion. In my experience some seemingly solvable roadblocks can turn into showstoppers very late, and some showstoppers were not on anyone's radar while conceptualizing the solution.

    • djoldman5 days ago
      Huge upvote from me as well. Think of all the folks out there who have this idea: instead of searching for it, finding nothing, and implementing it themselves, now they can either move on or try to fiddle with this work's output.
    • RNGesus83 5 days ago
      > Love to see negative results published, so so important.

      > Please let's all move towards a research procedure that enforces submission of the hypothesis before any research commences, and mandates publication regardless of results.

      Grounded theory? https://en.m.wikipedia.org/wiki/Grounded_theory

    • api5 days ago
      I've never heard that idea before, and it's so obvious. All science should be done this way.

      It kind of does happen in areas of science that are capital intensive like space, high energy physics, etc., because people hear about what is to be done before it is done, but it's not formalized. It should be, and it should be done with everything.

      • Voultapher5 days ago
        If we are talking reforms to science procedure, I'd also love to see 30% or so of the research funds locked away, to then be given to another team, ideally at another university, that gets access only to the original team's publication and has the goal of reproducing the study. The vast majority of papers released don't contain enough information to actually repeat their work.
  • pizlonator6 days ago
    I think the missing piece here is that JavaScriptCore (JSC) and other such systems don't just use inline caching to speed up dynamic accesses; they use them as profiling feedback.

    So, anytime you have an IC in interpreter, baseline, or lightly optimized code, then that IC is monitored to see how polymorphic it gets, and that data is fed back into the optimization pipeline.

    Just having an IC as a dead end, where you don't use it for profiling, is way less profitable than having an IC that feeds into profiling.
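
    Concretely, an IC site that doubles as a profiling site can be sketched roughly like this (a minimal C++ sketch of the general idea with made-up names, not JSC's actual data structures):

        #include <cstdint>

        struct Shape;                                    // hidden class / structure id
        struct Object { Shape* shape; uint64_t slots[8]; };

        // One inline-cache site. Besides caching a shape -> slot mapping, it records
        // what it has seen, so the optimizer can later ask "how polymorphic is this
        // site, and which case does it actually take?"
        struct ICSite {
          Shape*   cachedShape = nullptr;
          uint32_t cachedSlot  = 0;
          uint64_t hits        = 0;
          uint64_t misses      = 0;                      // guard failures: a polymorphism signal
        };

        uint64_t slowGetAndCache(ICSite&, Object*);      // full lookup; may re-cache

        uint64_t getProp(ICSite& ic, Object* obj) {
          if (obj->shape == ic.cachedShape) {            // the guard
            ic.hits++;                                   // profiling feedback comes for free
            return obj->slots[ic.cachedSlot];            // fast path: a single load
          }
          ic.misses++;
          return slowGetAndCache(ic, obj);
        }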

    • kannanvijayan6 days ago
      Well, on dynamic languages the ICs do give a nice order-of-magnitude speed-up by themselves, since the guard eliminates a whole hashtable (or linear) lookup, rather than just (as in this paper's case) a single memory indirection.

      But yeah - on spidermonkey we found that orienting our ICs towards being stable and easy to work with, as opposed to just being fast, ended up leading to a much better design.

      This is a nice result though. Negative, but good that they published it.

      A good next step would be some QEMU-style transformation: pull out basic blocks; profile them for hotness, for incoming arguments at function starts, and for dynamic dispatch targets; then use that to re-compile the whole thing with a method JIT, in particular inlining across call paths with GVN and DCE applied.

      I kind of expect the results to be very positive, just based on intuition.. but it'd be cool to see how it actually turned out.

      • bjoli6 days ago
        A minor nitpick: ICs don't give that much benefit in monomorphic languages like scheme.
        • kannanvijayan6 days ago
          Apologies if this response seems aggressive - this is just a topic I'm very passionate about :)

          I think technically in languages like scheme, the opportunity would be to optimize other sorts of dispatches. The classic dispatch mechanism in scheme is the "assoc" style list-of-pairs lookup.

          In this case, the "monomorphization" would be extracting runtime information on the common lookups that are taken. This is doable in a language like scheme, but it requires identifying parts of data structures that are less likely to change over time - where it makes sense to lift them up into hidden types and effectively make them "static".

          Imagine if you could designate a particular `(list (cons key value) ...)` value as "optimizable" - maybe even with a macro/function call: `(optimizable ((a 1) (b 2) ...))`

          This would build a hidden shape for the association's "backbone" and give you back a shaped assoc list, and then you would be able to optimize all uses of `(assoc ...)` on lists of that kind in the same way you optimize shaped objects.

          A plumbing-exposed version of this would just let you do `(define my-shape (make-shape '(prop1 prop2 ...)))` and later `(my-shape '(1 2 ...))` to build the shape-optimized association list.

          It's kind of neat when you realize that almost everything the runtime type-inference regime in a JIT compiler does.. is enable eliding lookups across data structures where we can assume that some part of that data structure is "more static" than other parts.

          In JS that data structure is a linked-list-of-hashtables, where the hashtable keys and the linked list backbone are expected to be stable.

          But the general idea applies to literally any structure you'd want to do lookups across. If you can extract a 'conserved shape', you can apply this optimization.

        • sitkack6 days ago
          Couldn't PICs and monomorphization be seen as duals? They are both solving the problem of how to make polymorphic code have fewer branches.
          • pizlonator6 days ago
            PICs are the core mechanism of monomorphization in the VMs that do it
            • sitkack6 days ago
              I was thinking of static monomorphization as in Rust.
              • pizlonator6 days ago
                But then there isn't a duality.

                If your language is static enough, then static devirt is profitable enough that you can stop there.

                If your language is dynamic enough, then PICs are the main driver of devirt. (Though all PIC-based systems couple that with static analysis and that static analysis is powerful enough that it can sometimes devirt without the PICs' help.)

                • sitkack6 days ago
                  I meant "dual" in the analogous sense, not as strict mathematical duals where one could replace the other. That both are solving devirt from opposite ends. I read @bjoli's comment with the analogous connotation.

                  Your last sentence - would that be if Rust used a PIC to optimize calls to dyn Traits?

    • titzer6 days ago
      Indeed, this was literally the conclusion of the first paper that introduced polymorphic inline caches.

      I'll add that the real benefit of ICs isn't just that compiled code is specialized to the seen types, but the fact that deoptimization guards are inserted, which split diamonds in the original general cases so that multiple downstream checks become redundant. So specialization is not just a local simplification but a global simplification to all dominated code in the context of the compilation unit.
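
      To make the "splitting the diamond" point concrete, here's a schematic C++-flavored sketch (hypothetical names, not any VM's real code):

          struct Shape;
          struct Object { Shape* shape; int x, y; };

          extern Shape* const S;                    // the shape the profiler saw
          [[noreturn]] void deoptimize(Object*);    // side exit; never rejoins this code
          int slowLookupX(Object*);
          int slowLookupY(Object*);

          // Diamond form: each access guards and rejoins, so nothing is known downstream
          // and the second access has to re-check.
          int sumGeneral(Object* o) {
            int a = (o->shape == S) ? o->x : slowLookupX(o);
            int b = (o->shape == S) ? o->y : slowLookupY(o);
            return a + b;
          }

          // Speculative form: the guard exits instead of rejoining, so "o->shape == S"
          // dominates everything below and the second check simply disappears.
          int sumSpeculative(Object* o) {
            if (o->shape != S) deoptimize(o);
            int a = o->x;
            int b = o->y;                           // redundant check eliminated
            return a + b;
          }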

    • mintplant6 days ago
      SpiderMonkey actually ditched most of the profiling stuff in favor of transpiling the ICs generated at runtime into the IR used by the optimizing compiler, inlining them into the functions when they're used, and then sending the whole thing through the usual optimization pipeline. The technique is surprisingly effective.

      I don't know what the best reference to link for this would be, but look up "Warp" or "WarpMonkey" if you're interested.

      • kannanvijayan6 days ago
        WarpMonkey doesn't get rid of the profiling stuff - the profiling is inherent in ICs - we keep hitcounts and other information for various paths taken through code (including ICs) and use that to guide compilation later.

        Warp's uniqueness is in how it implements the ICs. The design goal when we built the baseline JIT in SpiderMonkey was to split the code and data components of ICs. At the time, we were looking at V8 ICs, which were basically compiled code blocks with the relevant parameter data (e.g. a pointer to the hidden type to compare against) baked into the code.

        We wanted to segregate the data from the code - e.g. so that all ShapedGetProp ICs can have a data stub with a pointer to their own shape, but share a pointer to the code. Effectively your ICs end up looking like small linked lists of C++ pure virtual objects (without the vtable indirection and just a single code pointer hanging off of the stub).
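
        In rough C++ terms (a sketch of the idea with invented field names, not SpiderMonkey's actual layout), a stub in that design looks something like:

            #include <cstdint>

            struct Shape;

            // Per-site *data* plus a single pointer to *shared* jitted code. Two
            // ShapedGetProp stubs guarding different shapes still point at the same
            // stubCode; only the shape/slot fields differ.
            struct ICStub {
              const void* stubCode;    // shared between stubs that emit the same logic
              ICStub*     next;        // small linked list of cases at one site
              Shape*      shape;       // per-stub parameter: the shape to guard on
              uint32_t    slotOffset;  // per-stub parameter: where the property lives
            };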

        Originally the "shared code" was emitted by a bunch of statically defined methods that emitted a fixed bit of assembly (one for each kind of stub). That became unweildy as we added more stubs, so CacheIR was designed. CacheIR was a simple bytecode language that the stubs could express their logic in, which would get compiled down to machine code. The CacheIR bytecode would be a key to the compiled stubcode.

        That let stubs generate arbitrary CacheIR for their logic, but still share code between stubs that emitted the same logic.

        That led to the idea of Warp, where we noticed that one could build the input for an optimized method-jit compiler just by combining the profiling info that stubs produced, and the CacheIR bytecode for those stubs.

        Normally you'd start from bytecode, build an SSA, then do a pass where you apply type information.

        With Warp, the design simplifies into stitching together a bunch of CacheIR chunks which already embed the optimization information you care about, and then compiling that.

        Ultimately it does the same thing as the other JITs, but it goes about it in a really nice and clean way. It kind of expresses some of the ideas that Maxime Chevalier-Boisvert was exploring in their work on basic block versioning.

        • mintplant6 days ago
          Thanks for the more complete explanation!

          > Normally you'd start from bytecode, build an SSA, then do a pass where you apply type information.

          > With Warp, the design simplifies into stitching together a bunch of CacheIR chunks which already embed the optimization information you care about, and then compiling that.

          This is what I meant by ditching most of the profiling stuff; I suppose I should have said "type inference stuff" to be more precise.

          > Originally the "shared code" was emitted by a bunch of statically defined methods that emitted a fixed bit of assembly (one for each kind of stub). That became unweildy as we added more stubs, so CacheIR was designed.

          I remember all too well :) I worked on the first pass at implementing megamorphic caches into the original stub generators that spit out (macro)assembly directly, before we had CacheIR. So much code duplication...

          • kannanvijayan6 days ago
            Ah, sorry for the misinterpretation!

            Also, we may have overlapped on the team :)

      • hinkley6 days ago
        My understanding is that branch prediction got better in the ‘10s and a bunch of techniques that didn’t work before do now.
        • pizlonator6 days ago
          The modern VM technique looks almost exactly like what the original PIC papers talked about in the 90s. There are some details that are different, but I'm not sure that the details come down to exploiting changes in branch prediction efficiency. I think the things that changed come mostly down to the fact that the original PIC paper was a first stab by a small team whereas modern VMs involve decades of engineering by larger teams (so everything that could get more complex as a consequence of tuning did get more complex).

          So, while it's true that microarches changed in a lot of ways, the overall implications for how you build VMs are not so big.

          • hinkley6 days ago
            Are you still using a threaded interpreter main loop? That didn't really come around until the mid 90's and I've been hearing for about ten years now that it's not a clear win anymore due to predictors being able to read through two levels of indirection.
            • pizlonator6 days ago
              The last time I ran the experiment of having a single jump, it was slower than jump-per-opcode-handler.

              It's true that predictors are able to see through multiple levels, but a threaded interpreter gives them plus one level, and that ends up mattering as much as it always did.
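
              For reference, the two dispatch styles look roughly like this (a toy C++ sketch; the computed goto is a GCC/Clang extension):

                  #include <cstdint>
                  #include <cstdio>

                  // Toy bytecode: 0 = INC, 1 = DEC, 2 = HALT.

                  // Switch dispatch: one shared indirect branch at the top of the loop.
                  int runSwitch(const uint8_t* pc) {
                    int acc = 0;
                    for (;;) {
                      switch (*pc++) {
                        case 0: acc++; break;
                        case 1: acc--; break;
                        case 2: return acc;
                      }
                    }
                  }

                  // Threaded dispatch: every handler ends with its own indirect jump, so
                  // the predictor gets a separate prediction site per opcode handler.
                  int runThreaded(const uint8_t* pc) {
                    static void* handlers[] = { &&op_inc, &&op_dec, &&op_halt };
                    int acc = 0;
                    goto *handlers[*pc++];
                  op_inc:  acc++; goto *handlers[*pc++];
                  op_dec:  acc--; goto *handlers[*pc++];
                  op_halt: return acc;
                  }

                  int main() {
                    const uint8_t prog[] = { 0, 0, 1, 0, 2 };   // inc, inc, dec, inc, halt
                    std::printf("%d %d\n", runSwitch(prog), runThreaded(prog));   // prints "2 2"
                  }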

            • titzer5 days ago
              Threaded dispatch is absolutely worth it. Wizard's interpreter gets anywhere from 10-35% performance improvement from threaded dispatch.
        • gopalv6 days ago
          > that branch prediction got better in the ‘10s and a bunch of techniques that didn’t work before do now.

          They got better than they had any right to be, but then we found out that Spectre & Meltdown were vulnerabilities rather than optimizations.

          For example, a switch-based interpreter was as fast as a CGOTO one for a brief period between 2012 and 2018, but suddenly got slower again as the CPUs could no longer rely on branch prediction to do prefetching.

          • titzer5 days ago
            While better predictors allow the speculation window to be larger on average, the real culprit is that large speculation window. Even if the branch predictor weren't very smart, it would still do well on a program with stable, predictable branches, thus allowing a large speculation window to open up. The vulnerability is that some of those branches guard really important things, like not going out of bounds of an array. So a Spectre attack, which works by exploiting a mispredicted branch, is a constructive attack where the gadget is tuned for the branch predictor anyway. The other part of an attack, the windowing gadget, just relies on making a really slow input into a branch. Neither of them would be particularly harder with a dumb predictor.
      • IainIreland6 days ago
        We talk about this a bit in our CacheIR paper. Search for "IonBuilder".

        https://www.mgaudet.ca/s/mplr23main-preprint.pdf

      • pizlonator6 days ago
        It sounds like you're describing something similar to what the other JS VMs do
        • IainIreland6 days ago
          The main thing we're doing differently in SM is that all of our ICs are generated using a simple linear IR (CacheIR), instead of generating machine code directly. For example, a simple monomorphic property access (obj.prop) would be GuardIsObject / GuardShape / LoadSlot. We can then lower that IR directly to MIR for the optimizing compiler.

          It gives us a lot of flexibility in choosing what to guard, without having to worry as much about getting out of sync between the baseline ICs and the optimizer's frontend. To a first approximation, our CacheIR generators are the single source of truth for speculative optimization in SpiderMonkey, and the rest of the engine just mechanically follows their lead.

          There are also some cool tricks you can do when your ICs have associated IR. For example, when calling a method on a superclass, with receivers of a variety of different subclasses, you often end up with a set of ICs that all 1. Guard the different shapes of the receiver objects, 2. Guard the shared shape of the holder object, then 3. Do the call. When we detect that, we can mechanically walk the IR, collect the different receiver shapes, and generate a single stub-folded IC that instead guards against a list of shapes. The cool thing is that stub folding doesn't care whether it's looking at a call IC, or a GetProp IC, or anything else: so long as the only thing that differs is a single GuardShape, you can make the transformation.

          • pizlonator5 days ago
            > The main thing we're doing differently in SM is that all of our ICs are generated using a simple linear IR (CacheIR)

            JSC calls this PolymorphicAccess. It’s a mini IR with a JIT that tries to emit optimal code based on this IR. Register allocation and everything, just for a very restricted IR.

            It’s been there since I don’t remember when. I wrote it ages ago and then it has evolved into a beast.

            • IainIreland5 days ago
              Taking a quick look at the JSC code, the main difference is that CacheIR is more pervasive and load-bearing. Even monomorphic cases go through CacheIR.

              The main justification for CacheIR isn't that it enables us to do optimizations that can't be done in other ways. It's just a convenient unifying framework.

        • mintplant6 days ago
          This is unique to SpiderMonkey, as far as I'm aware.
    • hinkley6 days ago
      One of the last pieces of really good advice I got before I gave up on writing a programming language myself is that if you instrument the paths that are already expected to be slow, you can get most of the value of instrumentation with a fraction of the cost per call. Because people avoid making the slow calls, and if they don’t the app was going to be slower anyway so why not an extra couple percent? Versus the fast path where the instrumentation may be a quarter or more of runtime.
    • sitkack6 days ago
      The answer is always more feedback. I am excited about DNN powered static profilers. The training data will come from the JIT saving the results of their experiments.
      • mike_hearn4 days ago
        Ask and ye shall receive:

        https://www.sciencedirect.com/science/article/abs/pii/S01641...

        It's XGBoost rather than DNN powered, but that might make sense from a runtime throughput perspective.

      • pizlonator6 days ago
        That's an exciting direction!
        • sitkack6 days ago
          Profile Guided Optimization without Profiles: A Machine Learning Approach

          https://www.semanticscholar.org/paper/Profile-Guided-Optimiz...

          • pizlonator6 days ago
            Very cool!

            I've been thinking about what it would look like for something like this to be done for the profiling that you get from ICs, not the profiling you get from branch weights or basic block counts.

            They're quite different. Two big differences:

            - My best estimate is that speculating on type state (i.e. what you get from ICs) is a value bet only if you're right about 99.9% of the time (or even 99.999% - depends on your compiler/runtime architecture). I think you can get profit from branch weights if they are right less than 99.9% of the time.

            - Speculating on type state means having semantically rich profiling information. It's not just a bunch of numbers. You need the profiler to describe a type to you, like: "I expect this access to see objects with fields x, y, z (in that order) and it has a prototype that has fields a, b, c, which then has a prototype with fields e, f, g".

            • andyayers6 days ago
              For the .NET JIT, at least, speculation on types seems beneficial even if we're only right maybe 30% of the time.

              See eg https://github.com/dotnet/runtime/blob/main/docs/design/core...

              (where this is presented as a puzzle)....

              • pizlonator6 days ago
                Guarded devirtualization is different from the speculation that I'm talking about.

                To me, speculation is where the fail path exits the optimized code.

                To handle JS's dynamism, guarding is usually not worth it (though JSC has the ability to do that, if the profiling says that the fail path is probable). I believe that most of HotSpot's perf comes from speculation rather than guarded devirt.

                • titzer5 days ago
                  > To me, speculation is where the fail path exits the optimized code.

                  V8 is now doing profile-based guarded inlining for Wasm indirect calls. The guards don't deopt, so it's a form of biasing where the fail path does indeed go through the full indirect call. That means the fail path rejoins, and ultimately, downstream, you don't learn anything, e.g. that there were no aliasing side effects, or anything about the return type of the inlined code.

                  You can get some of the effect of speculation with tail duplication after biasing, but in order to get the full effect you'd have to tail-duplicate all the way to the end of a function, or even unroll another iteration of the loop. It's possible to do this if you're willing to spend a lot of code space by duplicating a lot of basic blocks.

                  But the expensive thing about speculation is the deopt path, which is a really expensive OSR transfer and usually throws away optimized code, too. So clearly biasing is a different tradeoff, and I wouldn't be surprised if biasing plus a little bit of tail duplication gets most of the benefit of deoptimization.

                  • sitkack5 days ago
                    Would you mind deep linking to the V8 code that does this?
            • sitkack6 days ago
              Which JIT would be the easiest to modify to log this information? A time-series LLM should be able to analyze it and give predictions.

              Looks like PyPy is the most extensible.

              https://rpython.readthedocs.io/en/latest/logging.html

              And the JIT is generated from RPython, so it is fairly open to extension.

              • pizlonator6 days ago
                I know exactly how I would do that to JavaScriptCore, but that’s maybe mostly due to the fact that I designed most of the bits you’d have to instrument.

                Not sure if it’s the easiest overall.

                I’m easy to look up if you want to pick my brain about JSC

                • sitkack6 days ago
                  What a generous offer. I'll spend some time reading your papers first. Thank you.
  • c-smile5 days ago
    Slightly orthogonal...

    In my Sciter, which uses QuickJS (no JIT), instead of a JIT I've added a C compiler. That means we can have not just JS modules but C modules too:

       import * as cmod from "./cmodule.c"
    
    Such a C module will be compiled on the fly into native code and executed. The idea is simple: each language is good at specific tasks. JS is flexible and C is performant - just use the tool that is most optimal for the task.

    C modules play two major roles: FFI and number-crunching code.

    Sciter uses the TCC compiler and runtime.

    In total, the QuickJS + TCC binary bundle is 500k + 220k = 720k.

    For comparison: V8 is about 40 MB.

    https://sciter.com/c-modules-in-sciter/ https://sciter.com/here-we-go/

    • vanderZwan5 days ago
      Interesting project! After clicking around on the website:

      > In almost 10 years, Sciter UI engine has become the secret weapon of success for some of the most prominent antivirus products on the market: Norton Antivirus and Internet Security, Comodo Internet Security, ESET Antivirus, BitDefender Antivirus, and others.

      What an intriguingly specific niche of customer! How come all these different anti-virus companies decided to use your platform?

      • c-smile5 days ago
        > anti-virus companies decided to use your platform?

        One of the reasons: an AV application should look modern, to give the impression that the app is adequate to modern threats. So while the app backend is relatively stable, its UI must be easily tweakable. CSS/HTML is good for that.

        Check this: https://sciter.com/wp-content/uploads/2018/06/n360.png

        • mathverse a day ago
          I actually really love it. Typically AV products' UIs feel snappy and lightweight, and it is the backend engine that does most of the work and becomes the horrendous bottleneck. Which I think is an interesting phenomenon compared to modern desktop applications, where typically the backend code does very little and the frontend is the bloated part (Electron).

          It's a bit sad that there is not a lot of talk and re-usable components from these companies for Sciter that can help us create snappy apps!

    • pjmlp5 days ago
      Even though I am not a big C fan, the idea is rather cool; it is a bit like having C++ on .NET via C++/CLI.
  • tonnydourado6 days ago
    Tangentially, fuck yeah, negative results, just as good as positive ones
    • rhelz6 days ago
      Amen. This paper is worth more than all of the fraudulent, unreproducible papers we are inundated with, put together and squared.
  • VeejayRampay6 days ago
    The people who came up with this are obviously brilliant, but being French myself, I really wonder why no one is proof-reading the English; this gives an overall bad impression of the work imho
    • rhelz6 days ago
      Being a native English speaker I absolutely love reading and listening to speakers of English as a second language. Speaking is actually a subspecies of singing, and it's always cool to hear the same old lyrics remixed to a new melody and a new beat.

      English has no 'correct' way to be written or spoken, nor does it need one, nor would it benefit from one, therefore, nor should it have one.

      Speakers of English as a second language: you are what makes English a great language.

      • davidgay6 days ago
        > English has no 'correct' way to be written or spoken, nor does it need one, nor would it benefit from one, therefore, nor should it have one.

        There may be no 'correct' way, but there are plenty of 'incomprehensible' ways. I once encountered a research paper that had clearly [0] been translated word-for-word from French into English and made no sense until I translated it word-for-word back to French...

        [0]: actually it was only clear after I realised I should attempt the reverse translation ;)

        • rhelz6 days ago
          Sure, but frankly, I've heard plenty of people speaking the most flawless King's English who didn't make any sense at all.

          re: translated math papers: haha we've all been there. Once I had to read a bunch of 70's-era papers from Russian Mathematicians. The translators, bless their hearts, I'm sure knew everything there was to know about Dickens and Dostoevsky, but it was clear they had no clue what the math was all about :-)

          Oh well, Math is the universal language, right? chuckle

      • tredre3 6 days ago
        That's a beautiful way of seeing things! Unfortunately, as you're well aware I'm sure, most people do not share your idyllic view of polyglots and, for better or worse, they will assume that bad english = bad quality work. And bad doesn't have to mean mistakes. Just an unusual wording is enough to throw the average person off, in my experience.
        • rhelz6 days ago
          I'm not as worried about those who have ears, but don't hear, as I am about the effect LLM's will have on English.

          Grammarly was bad enough. One of my oldest friends is from Transylvania, and he could tell such great stories in his eastern-european accent and cadence. When he collected those stories into a book, he ran everything through Grammarly, and the book reads like a soulless newscaster ;-(

          When people start en masse to run their prose through LLM's to "correct" it, English will lose one of its main arteries.

      • vanderZwan5 days ago
        > Speaking is actually a subspecies of singing, and it's always cool to hear the same old lyrics remixed to a new melody and a new beat.

        What a lovely take on this topic! :)

        (does this imply you're a fellow believer in the hypothesis that singing evolved before language?)

        • rhelz4 days ago
          Ha, I don't know anything at all about how language evolved. But, when you listen to somebody speaking--if you can bracket the meaning (which tends to soak up all our conscious attention)--you can hear the rhythm and you can hear the melodies. You can hear the music.

          In order to understand somebody who speaks English in a different enough dialect, you have to really listen to the rhythm and melody--in order to puzzle out the meanings. The meanings are not hitting you in the face, they are more coy, and you have to seek them out while listening to songs you've never heard before!

          Same goes with speaking with somebody who speaks English as a second language. You can hear the music in a way which is hard to do when listening to native speakers. Not impossible--once you realize what is happening, you can learn to pay attention to it.

          But think about all the different ways you've heard English spoken...French accents, Nigerian accents, German accents, Russian accents, north Indian and south Indian accents, Mexican accents,.....It's like tuning into a radio station playing the music of the world.

          And unless they all were taking the time to learn English, we would not be hearing their music. And we would not be able to avail ourselves of an inexhaustible supply of new idioms, new ways of emphasizing, new ways of conveying subtle emotional cues...

    • indolering6 days ago
      It's a preprint.
  • tsunego6 days ago
    chasing inline cache micro-optimizations with dynamic binary modification is a dead end. modern CPUs are laughing at our outdated compiler tricks. maybe it's time to accept that clever hacks won’t outrun silicon.
    • saagarjha5 days ago
      JITs typically are too broken for compiler tricks so I don't think it's time to accept that just yet.
    • andrekandre6 days ago
      what is the better approach?
      • Sparkyte6 days ago
        You don't; there are equal trade-offs. A JIT might use more memory because of what it does at runtime, but that is also the exact reason it is faster to start. A good trade-off is just using the type of language best suited for the workload.
  • ErikCorry4 days ago
    It's good that they post negative results, but it's hard to know exactly why their attempt failed, and it's tempting for me to make guesses without doing any measurements, so let me fall for that temptation:

    They are patching inline-cache sites in an AOT binary and not seeing improvements.

    Only 17% of the inline-cache sites could be optimized to what they call O2 level (listing 7). Most could only be optimized to O1 level (listing 6). The only difference from the baseline (listing 5) to O1 is that they replaced:

    mov 0x101c(%rip), %rax # load the offset

    with

    mov 0x3, %rax # load the offset

    I'm not very surprised that this did not help much. The old load is probably hoisted up and loaded into a renamed register very early, and it won't miss in the cache.

    Basically they already have a pretty nice inline cache system at least for the monomorphic case, and messing with the exact instructions used to implement it doesn't help much. A JIT is able to do so much more, eg polymorphic cases, inlining of simple methods, and eliminating repeated checks of the same hidden class. Not to mention detecting at runtime that some unknown object is almost always an integer or a float and JITting code specialized for that.

    People new to virtual machines often focus on the compiler, whereas the stuff that moves the needle is often around the runtime. How tagged and typed data is represented, the GC implementation, and the object layout. Eg this paper explores an interesting new tagging technique and makes a huge difference to performance (there's some author overlap): https://www.researchgate.net/figure/The-three-representation...

    Incidentally the assembly syntax in the "Attempt to catch up" article is a bit confusing. It looks like the IC addresses are very close to the code, like almost on the same page. Stack overflow explains it:

    GAS syntax for RIP-relative addressing looks like symbol + current_address (RIP), but it actually means symbol with respect to RIP.

    There's an inconsistency with numeric literals:

    [rip + 10] or AT&T 10(%rip) means 10 bytes past the end of this instruction

    [rip + a] or AT&T a(%rip) means to calculate a rel32 displacement to reach a, not RIP + symbol value. (The GAS manual documents this special interpretation)

  • mannyv6 days ago
    [flagged]
    • mannyv6 days ago
      I wonder if you could use clang/llvm to do a super-JIT by having it recompile its IR as the program runs, taking advantage of profiling to optimize the hot paths.
      • SkiFire13 6 days ago
        Profile-guided optimizations are already a thing and don't require JITting your program.
        • mannyv5 days ago
          Profile guided optimization is a static operation that's done after profiling the running app - unless the state of the art has changed in the last few years.
  • ajross6 days ago
    This seems poorly grounded. In fact almost three decades after the release of the Java HotSpot runtime we're still waiting for even one system to produce the promised advantages. I guess consensus is that V8 has come closest?

    But the reality is that hand-optimized AoT builds remain the gold standard for performance work.

    • noelwelsh6 days ago
      The benchmarks I have seen show Hotspot is ahead of V8. E.g. https://stefan-marr.de/papers/oopsla-larose-et-al-ast-vs-byt...

      What makes this very complicated is that 1) language design plays a big part in performance and 2) CPUs change as well and this anecdotally seems to have more impact on interpreter than compiler performance.

      With regards to 1), consider optimizing JavaScript. It doesn't have machine integers, so you have to do a bunch of analysis to figure out when something is being used as an integer and then you can make that code fast. There are many other cases. Python is even worse in this regard. In comparison, AOT-compiled languages are usually designed to be fast, so they make tradeoffs that favour performance at the cost of some level of abstraction / expressivity. The JVM is somewhere in the middle, and so is its performance.
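
      As a rough illustration of the cost (a C++ sketch of one common tagging scheme with made-up names, not any particular engine's representation, and with overflow/range checks elided):

          #include <cstdint>

          // Low bit set = small integer stored inline; low bit clear = heap pointer.
          using Value = uintptr_t;

          inline bool    isSmallInt(Value v) { return (v & 1) != 0; }
          inline int64_t untag(Value v)      { return static_cast<int64_t>(v) >> 1; }
          inline Value   tag(int64_t i)      { return (static_cast<uintptr_t>(i) << 1) | 1; }

          Value addSlow(Value a, Value b);   // doubles, strings, objects with valueOf, ...

          // Even "a + b" on two numbers needs guards before the engine can use a machine
          // add; an AOT-compiled integer add is a single instruction.
          Value add(Value a, Value b) {
            if (isSmallInt(a) && isSmallInt(b))
              return tag(untag(a) + untag(b));   // (overflow check omitted for brevity)
            return addSlow(a, b);
          }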

      With regards to 2) this paper is an example, as is https://inria.hal.science/hal-01100647/file/InterpIBr-hal.pd...

      • MaxBarraclough6 days ago
        > you have to do a bunch of analysis to figure out when something is being used as an integer and then you can make that code fast

        It doesn't get much attention now that WASM exists, but asm.js essentially solves this, so a more head-to-head comparison ought to be possible. (V8 has optimisations specific to asm.js.)

        https://en.wikipedia.org/wiki/Asm.js

        • IainIreland6 days ago
          asm.js solves this in the specific case where somebody has compiled their C/C++ code to target asm.js. It doesn't solve it for arbitrary JS code.

          asm.js is more like a weird frontend to wasm than a dialect of JS.

          • lern_too_spel6 days ago
            No, if you just use the standard JavaScript cast to integer incantation, |0, v8 will optimize it. asm.js is valid JavaScript.
          • MaxBarraclough5 days ago
            Sure, but that was essentially my point. If we're trying to compare HotSpot and V8 for similar input code, Java and asm.js seem closer than Java and full-blown JavaScript with its dynamic typing.
      • ajross6 days ago
        With all respect that sounds like excuse-making. I mean, yeah, Javascript and JVM and .NET are slower runtimes than C or Rust[1]. Nonetheless that's the world we live in, and if you have a performance-sensitive problem to solve you pick up rustc or g++ and not a managed runtime. If that's wrong, someone's got to actually show that it's wrong.

        [1] Maybe Go or Swift would be more apples-to-apples. But even then are there clear benchmarks showing Kotlin or C# beating similar AoT code? If anything the general sense of the community is that Go is faster than Java.

        • noelwelsh6 days ago
          Excuses for what? I'm not the elected representative for JIT compiled languages, sworn to defend them. There are technical reasons they tend to be slower. I was sketching some of them.
          • mrkeen5 days ago
            I think the above comments are because JIT gets so much positive press that someone wandering in from outside could be forgiven for thinking that JIT isn't coming 2nd in a two-man race with AOT.

            I've been around long enough to hear that Java and JIT are gonna overtake C++ any day now.

            The title on this article doesn't help.

        • wiseowise6 days ago
          https://devblogs.microsoft.com/oldnewthing/20060731-15/?p=30...

          https://blog.codinghorror.com/on-managed-code-performance-ag...

          And that was 2005. Modern .NET is much, much faster.

          > If anything the general sense of the community is that Go is faster than Java.

          Faster where?

        • pca006132 6 days ago
          When things are performance-sensitive, you want things to be tunable and predictable. Good luck playing with the JIT if you rely on that for performance...
          • pjmlp6 days ago
            Good luck with AOT as well, unless you hardcode the target hardware, like game consoles.
    • titzer6 days ago
      > But the reality is that hand-optimized AoT builds remain the gold standard for performance work.

      It's considerably more complicated than that. After working in this area for 25 years, I have vacillated between extremes over decades-long arcs. The reality is much more nuanced than a four sentence HN comment. Profile and measure and stare at machine code. If you don't do that daily, it's hand waving and having hunches.

      • cogman106 days ago
        I'd also point out that it's an ever-shifting landscape. What was slow yesterday might not be today.

        In my experience, while there are some negatives of the runtime selected, the vast majority of performance is won or lost at the algorithm level. It really doesn't matter that rust can be faster than ruby if you choose an O(n^3) algorithm. Rust will run the O(n^3) algorithm faster than ruby, for sure, but ruby will beat the pants off of rust if someone converts it into an O(n) algorithm.

        It only starts mattering if you already have an O(n) algorithm. However, in my experience, a LOT of programmers are happy writing an n^3 and moving on to the next task without considering what this will do.

            for (i : foo) { 
              for (j : foo) { 
                for (k : foo) { 
                  bar(i, j, k)
                }
              }
            }
        • neonsunset6 days ago
          You may be underestimating the degree of difference in performance between Ruby and Rust.

          Here's comparison of Ruby with JS, and Rust is of course faster still: https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

          If the code runs 100 times faster, it might just offset even a highly inefficient implementation.

          > a LOT of programmers are happy writing a n^3

          I have the same experience.

          Unfortunately, and this is an issue I keep fighting with in some .NET communities, languages like C, C++ and Rust tend to select for engineers who are more likely to care about writing reasonably efficient implementations.

          At the same time, higher-level languages sometimes can almost encourage the blindness to the real world model of computation, the execution implications be damned. In such languages you will encounter way more people who will write O(n^3) algorithm and will fight you tooth and nail to keep it that way because they have zero understanding of the fundamentals, wasting the heroic effort by the runtime/compiler to keep it running acceptably well.

          • titzer6 days ago
            > At the same time, higher-level languages sometimes can almost encourage the blindness to the real world model of computation, the execution implications be damned. In such languages you will encounter way more people who will write O(n^3) algorithm and will fight you tooth and nail to keep it that way because they have zero understanding of the fundamentals, wasting the heroic effort by the runtime/compiler to keep it running acceptably well.

            I would say this tracks. I spent some time doing research on JVMs and largely found that, for example, the Java community largely values building OO abstractions around program logic and structuring things in ways that generally require more runtime logic and safety checks. For example, Java generics are erased and replaced with casts in the bytecode. Those checks the JVM has to blindly perform in the interpreter and any lower compiler tiers that don't inline. Only when you get to opt tiers does the compiler start to inline enough to see enough context to be able to statically eliminate these checks.

            Of course Java hides these checks because they should never fail, so it's easy to forget they are there. As API designers and budding library writers, Java programmers learn to use these abstractions, like the nicety of generics, in order to make things more general and usable. That's the higher priority, and when the decision comes down to performance versus reuse, programmers choose reuse all the time.

            • cogman106 days ago
              > that generally require more runtime logic and safety checks.

              These safety checks and runtime logic are a constant factor in the performance of a given java application.

              Further, they are mostly miniscule compared to other things you are paying for by using java. The class check requires loading the object from main memory/cpu cache but the actual check is a single cycle cmp check. Considering the fact that that object will then be immediately used by the following code (hence warm in cache) the price really isn't comparable to the already existing overhead of reaching down into ram to fetch it.

              I won't say there aren't algorithms that will suffer, particularly if you are doing really heavy data crunching that extra check can be somewhat murder. However, in the very grand scheme of things, it's nothing compared to all the memory loading that goes on in a typical java application.

              That is to say, the extra class cast on an `ArrayList<Point>` is nothing compared to the cost of the memory lookups when you do

                  int sum = 0;
                  for (var point : points) {
                    sum += point.x + point.y + point.z;
                  }
              • neonsunset6 days ago
                > The class check requires loading the object from main memory/cpu cache but the actual check is a single cycle cmp check.

                Only a guard or, possibly, a final class type-check (at least it's the case for sealed classes or exact type comparisons in .NET). For anything else this will be more involved due to inheritance.

                Obviously for any length above ~3 this won't dominate but JVM type system defaults don't make all this any easier.

                • wbl5 days ago
                  I'm not an expert but I think that the compiler requires the exact class on insertion so at use it's just a check.
          • cogman106 days ago
            > If the code runs 100 times faster, it might just offset even highly inefficient implementation.

            That's the danger of algorithmic complexity. 100 is a constant factor. As n grows, the effects of that constant factor are overwhelmed by the algorithmic inefficiency. For something like an n^3, it really doesn't take long before the algorithm dominates the performance over any language considerations.

            To put it in perspective, if the rust n^3 algorithm is 100x faster than the ruby O(n) algorithm at n=10, the constant factors differ by a factor of 10^4, so it only takes until around n=100 before ruby ends up faster than rust.

            For the most part, the runtime complexity of languages is a relatively fixed factor. That's why algorithmic complexity ends up being extremely important, more so than the language choice.

            I used to not think this way, but the more I've dealt with performance tuning the more I've come to realize the wisdom of Big Oh in day to day programming. Too many devs will justify an O(n^2) algorithm as being "simple" even though the O(n) algorithm is often just adding a new hashtable to the mix.
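
            For example (a generic C++ sketch, not from any particular codebase), "does this list contain a duplicate?":

                #include <cstddef>
                #include <cstdint>
                #include <unordered_set>
                #include <vector>

                // O(n^2): for each element, scan the rest of the vector.
                bool hasDuplicateQuadratic(const std::vector<int64_t>& xs) {
                  for (std::size_t i = 0; i < xs.size(); i++)
                    for (std::size_t j = i + 1; j < xs.size(); j++)
                      if (xs[i] == xs[j]) return true;
                  return false;
                }

                // O(n): the same question, with "a new hashtable added to the mix".
                bool hasDuplicateLinear(const std::vector<int64_t>& xs) {
                  std::unordered_set<int64_t> seen;
                  for (int64_t x : xs)
                    if (!seen.insert(x).second) return true;   // insert() reports if it was already there
                  return false;
                }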

          • dominicrose5 days ago
            I've found this website provides different results: https://programming-language-benchmarks.vercel.app/typescrip...

            It also shows different Ruby implementations. I've tried truffleruby myself and it's blazing fast on long-running CPU-intensive tasks.

            • neonsunset5 days ago
              The tests on this website run for very little time indeed. They use input values that e.g. the original Benchmarks Game suggests for validation, before running for a longer time to get actual performance (another case in point - surely you want to run a web server longer than a couple hundred milliseconds). In my experience the data there does not always replicate to what you get in real world scenarios. It's an unfortunate tradeoff, because the benchmark runs, when you want to support so many languages, will take a very long time, but in my opinion it's better to have numbers that are useful for making informed decisions over pure quantity.

              If you have something specific in mind, it can be more interesting to build and measure the exact scenario you’d like to know about (standard caveats to benchmarking properly apply), which is quite easier if you have, say, just two languages.

    • pjmlp6 days ago
      JVM implementations, especially those with a PGO feedback loop across runs, do quite well.

      Likewise modern Android runs reasonably well with its mix of JIT, AOT with JIT PGO metadata, and baseline profiles shared across devices via the Play Store.

      The gold standard for anyone who actually cares about ultimate performance is hand-written assembly, naturally guided by a profiler capable of measuring everything the CPU is doing, like VTune.

    • neonsunset6 days ago
      If you pit virtual-call-heavy code written in C++ against C#, C# will come out on top every single time, especially if you consume dynamically-linked dependencies or if you can't afford to wait until the heat death of the universe when all the LTO plugins finish their job.

      Or if you use a SIMD-heavy path and your binary is built against, say, x86-64-v2/v3 and the target supports AVX512, .NET will happily use the entirety of AVX512 thanks to the JIT, even when still using 256-bit-wide operations (i.e. a bespoke path that uses Vector256) with AVX512VL. This tends to surpass what you can get out of runtime dispatch under LLVM.

      re: Java challenges - those stem from the JVM bytecode being a very difficult optimization target: every call is virtual by default with a complex dispatch strategy, everything is a heap-allocated object by default save for very few primitives, and generics lose type information and are never monomorphized. PGO through tiered compilation, and the resulting guarded devirtualization and object escape analysis, is what reclaims performance in Java and makes it acceptable. C and C++ with templates are a massively easier optimization target for GCC, and GCC does not operate under strict time constraints either. Therefore we have the results that we do.

      Also interesting data points here if you'd like to look at AOT capabilities of higher-level languages:

      https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

    • IshKebab6 days ago
      I agree, the "JITs can be faster because X Y Z" arguments have never turned into "JITs are actually faster".

      Maybe that's because JIT is almost always used in languages that were slow in the first place, e.g. due to GC.

      Is there a JITing C compiler, or something like that? Would that even make sense?

      • sitkack6 days ago
        Binary Translation could be seen as a generalized JIT for native code.

        Dynamo: A Transparent Dynamic Optimization System https://dl.acm.org/doi/pdf/10.1145/358438.349303

        > We describe the design and implementation of Dynamo, a software dynamic optimization system that is capable of transparently improving the performance of a native instruction stream as it executes on the processor. The input native instruction stream to Dynamo can be dynamically generated (by a JIT for example), or it can come from the execution of a statically compiled native binary. This paper evaluates the Dynamo system in the latter, more challenging situation, in order to emphasize the limits, rather than the potential, of the system. Our experiments demonstrate that even statically optimized native binaries can be accelerated Dynamo, and often by a significant degree. For example, the average performance of -O optimized SpecInt95 benchmark binaries created by the HP product C compiler is improved to a level comparable to their -O4 optimized version running without Dynamo. Dynamo achieves this by focusing its efforts on optimization opportunities that tend to manifest only at runtime, and hence opportunities that might be difficult for a static compiler to exploit. Dynamo's operation is transparent in the sense that it does not depend on any user annotations or binary instrumentation, and does not require multiple runs, or any special compiler, operating system or hardware support. The Dynamo prototype presented here is a realistic implementation running on an HP PA-8000 workstation under the HPUX 10.20 operating system.

        https://www.semanticscholar.org/paper/Dynamo%3A-a-transparen...

      • remexre6 days ago
        Maybe the "allocate as little as possible, use sun.misc.Unsafe a lot, have lots of long-lived global arrays" style of Java programming some high-performance Java programs use would get close to being a good stand-in.
      • o11c6 days ago
        I'm pretty sure the major penalty is the lack of inline objects (thus requiring lots of pointer-chasing), rather than GC. GC will give you unpredictable performance but allocation has a penalty regardless of approach.
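
        Roughly the difference, sketched in C++ (the second version approximates Java's array-of-references layout):

            #include <memory>
            #include <vector>

            struct Point { double x, y, z; };

            // "Inline objects": one contiguous allocation, no pointer chasing.
            double sumInline(const std::vector<Point>& pts) {
              double s = 0;
              for (const Point& p : pts) s += p.x + p.y + p.z;
              return s;
            }

            // Array of references: each element is its own heap allocation,
            // so every iteration chases a pointer (and likely misses cache).
            double sumBoxed(const std::vector<std::unique_ptr<Point>>& pts) {
              double s = 0;
              for (const auto& p : pts) s += p->x + p->y + p->z;
              return s;
            }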

        For purely array-based code, JIT is the only factor and Java can seriously compete with C/C++. It's impossible to be competitive with idiomatic Java code though.

        C# has structs (value classes) if you bother to use them. Java has something allegedly similar with Project Valhalla, but my observation indicates they completely misunderstand the problem and their solution is worthless.

        • cogman106 days ago
          The lack of inline objects is a huge hit that hopefully gets solved soon.

          But I'd posit that one programming pattern enabled by a GC is concurrent programming. Java can happily create a bunch of promises/futures, throw them at a thread pool and let that be crunched without worrying about the lifetimes of stuff sent in or returned from these futures.

          For single threaded stuff, C probably has java beat on memory and runtime. However, for multithreading it's simply easier to crank out correct threaded code in Java than it is in C.

          IMO, this is what has made Go so appealing. Go doesn't produce the fastest binaries on the planet, but it does have nice concurrency primitives and a GC that makes highly parallel processes easy.

          • o11c6 days ago
            I am extremely skeptical of any "concurrency made easy" claims. Rust has probably the best claim in that area but it's still pretty limited, and comes at the cost of making it hard to write normal code.
            • cogman106 days ago
              I wouldn't (and didn't) say "easy", just "easier". The thing that makes rust concurrency so gnarly to work with is the lifetime battles you have to fight in order to make it work. That's still better than C/C++, because you aren't dealing with accidental memory corruption when the wrong thread frees memory at the wrong time.

              For languages like rust/C/C++, thread-safe data structures are VERY hard to pull off. That's because tracking the lifetime of the things held by those data structures introduces all sorts of heartburn.

              What GCed languages buy you is not needing to track those lifetimes. Yes, you can still have data races and shared memory mutation problems, but you can also write thread safe data structures like caches without the herculean efforts needed to communicate with users of the cache who owns what when and when that thing dies.

              The best that Rust and C++ can do to solve these problems is ARC and a LOT of copying.

        • cempaka6 days ago
          > Java has something allegedly similar with Project Valhalla, but my observation indicates they completely misunderstand the problem and their solution is worthless.

          Hahah spicy take, I'd be interested to hear more. It definitely might not bode well that they opened the "Generics Reification" talk at JVMLS 2024 with "we have no answers, only problems."

          • o11c6 days ago
            I'm not going to investigate it again, there was probably more than this. But from what I recall:

            * The compiler isn't actually guaranteed to store them by value at all. Basically, they're written to be an "optional extension" rather than a first-class feature in their own right.

            * Everything is forced to be immutable, so you can't actually write most of the code that would take advantage of value types in the first place. Hot take: functional programming is mainly a bad workaround for languages that don't support value types in the first place.

            • cempaka6 days ago
              The immutable thing is actually being sold as a strength, i.e. "you write your nice clean immutable code, and if you've tagged it as a value type or flattenable, the compiler will figure out it doesn't need a new allocation and will update the existing value inline." I think they see it as in keeping with the Java culture of "you get very good performance for straightforward code" but I definitely agree there's a hazard of introducing an unnecessary impedance mismatch.
              • neonsunset6 days ago
                It will be a lot of work for the compiler to unspill modifications on any non-trivial data structure and reduce register pressure, especially since it's Java's first foray into structs :)

                (I suppose if the list of things you can do with structs is very short, this will be nowhere near as useful but will also reduce the amount of compiler changes)

                • pjmlp5 days ago
                  The whole point is to introduce value types without a .NET Framework vs .NET Core schism.

                  Random jars taken out of Maven Central should be able to continue to execute in a Valhalla-enabled JVM, without changes to their original semantics, while at the same time being able to somewhat take advantage of the Valhalla world.

                  Naturally there is always the issue of APIs that no longer exist like Thread.stop(), but that is orthogonal to the idea to have binary libraries keep working in a new value aware world.

                  There are tons of compiler changes; the engineering challenge is keeping semantic changes minimal and preserving the bytecode ABI as much as possible.

        • neonsunset6 days ago
          To be fair, .NET has way more than just structs. But yes, they are a starting point.
      • azakai6 days ago
        > Is there a JITing C compiler, or something like that?

        Yes, for example, compiling C to JavaScript (or asm.js, etc. [0]) leads to the C code being JITed.

        And yes, there are definitely benchmarks where this is actually faster. Any place where a typical C compiler can't tell that inlining would pay off is such an opportunity, because the JIT compiler sees the runtime behavior. The speedup can be very large. However, in practice, most codebases get inlined well by clang/gcc/etc., leaving few such opportunities.

        [0] This may also happen when compiling C to WebAssembly, but it depends on whether the wasm runtime does JIT optimizations - many do not and instead focus on static optimizations, for simplicity.
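
        A hedged illustration of the kind of opportunity meant here, in Java rather than C compiled to JS: the call target below is only known at runtime, so an AOT compiler has to emit an indirect call, while a JIT that observes a single receiver can inline it behind a deoptimization guard.

          import java.util.Arrays;
          import java.util.function.DoubleUnaryOperator;

          class HotLoop {
              // The operator is chosen from input at runtime, so an AOT compiler
              // cannot know which implementation to inline at the call site below.
              static double sum(double[] xs, DoubleUnaryOperator op) {
                  double acc = 0;
                  for (double x : xs) {
                      acc += op.applyAsDouble(x); // a profiling JIT sees one target and inlines it
                  }
                  return acc;
              }

              public static void main(String[] args) {
                  double[] xs = new double[1_000_000];
                  Arrays.fill(xs, 2.0);
                  DoubleUnaryOperator op = args.length > 0 ? Math::sqrt : x -> x * x;
                  System.out.println(sum(xs, op));
              }
          }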

      • pjmlp6 days ago
        C++/CLI is one example. It is C++, not C, but the example holds.
        • do_not_redeem6 days ago
          Now the money question: can anyone come up with a benchmark where, due to the JIT, C++/CLI runs faster than normal C++ compiled for the same CPU?
          • bjoli6 days ago
            Writing a program where a JIT version is faster than the AOT version is just an exercise in knowing the limitations of AOT.

            People have been doing runtime code generation for a very long time for exactly this reason.

            A general implementation faster than, say, g++ is a completely different beast.

        • zabzonk6 days ago
          It is not C++ (or C) but a Microsoft-invented language - which is OK, but don't confuse it with C++ any more than MS already has.
          • pjmlp6 days ago
            I love how folks worship GCC and clang compiler extensions as C and C++ (or UNIX compiler vendors in general, including embedded RTOS toolchains), but when Microsoft does it, for whatever reason it doesn't count.

            A double standard.

            • zabzonk6 days ago
              I certainly don't "worship" any compiler, and am pretty quick to point out non-standard extensions in people's code. But C++/CLI goes far, far beyond extensions, and becomes a completely different language to C++, both syntactically and semantically.
              • pjmlp6 days ago
                Just like the Linux kernel can only be compiled with GCC, or with compilers that implement the same language extensions, which aren't C at all (they are not part of C23, ISO/IEC 9899:2024), including compiler switches that change C semantics, like strict provenance.

                If you want to further discuss what is what, let's see how up to date your ISO knowledge is versus the plethora of extensions across C and C++ compilers.

    • ForTheKidz6 days ago
      > I guess consensus is that V8 has come closest?

      V8 better than the JVM? Insanity; maybe it can come within an order of magnitude of it in terms of performance.

      • edflsafoiewq6 days ago
        Comes closest to realizing the concept of a JIT that is better than AOT.
        • ForTheKidz6 days ago
          I think that's completely silly framing; you can AOT compile any code better—or at least, just as well—if you already know how you want it to perform at runtime. Any efficiency gain would necessarily need to be in the context of total productivity.
          • ajross6 days ago
            > I think that's completely silly framing

            It's literally the framing of the linked article though, which takes as a prior that JIT compilers are already ahead of AoT toolchains. And... they aren't!

            • noelwelsh6 days ago
              They are comparing Javascript JIT to Javascript AOT, to avoid the issue of language design.

              "The fastest contemporary JavaScript implementations use JIT compilers [27]. ... However, JIT compilers may not be desirable or simply not available in some contexts, for instance if programs are to be executed on platforms with too limited resources or if the architecture forbids dynamic code generation. Ahead of time (AoT) compilers offer a response to these situations.

              Hopc [25] is an AoT JavaScript-to-C compiler. Its performance is often in the same range as that of the fastest JIT compilers but its impossibility to adapt the code executed at runtime seems a handicap for some patterns and benchmarks [27]."

              In the context of JS it's reasonable to think that JIT may have an advantage, as the language is difficult to statically analyse.

            • ForTheKidz6 days ago
              > This gives them an advantage when compared to Ahead-of-Time (AoT) compilers that must choose the code to generate once for all.

              I assumed they were talking about the general case, which is nearly useless to discuss. I just kind of filtered it out as internecine bickering amongst academics. The actual data are still interesting tho.

    • pizlonator6 days ago
      > Java HotSpot runtime we're still waiting for even one system to produce the promised advantages.

      What promised advantages are you waiting on?

      There are lots of systems that have architectures that are similar to HotSpot, or that surpass it in some way. V8 is just one.

      • CamouflagedKiwi6 days ago
        There were many, many statements made that JIT compilers could be faster than AOT compilers because they have more information to use at runtime - originally this was mostly aimed at Java/HotSpot, which has not, in practice, significantly displaced languages like C or C++ (or these days Rust) from high-performance work.
        • pizlonator6 days ago
          Yeah those statements were overly optimistic and I don’t think they’re representative of what most people in the JIT field think. It’s also not what I as a JIT engineer would have promised you.

          The actual promise is just: JITs make dynamic languages faster and they are better at doing that than AOTs. I think lots of systems have delivered on that promise.

          • titzer6 days ago
            I concur here. 20 years ago I was a JIT cheerleader, and in the intervening time I've realized that you're only going to get the super-optimized hot inner loop perfect after the JIT and runtime have chugged through a ton of other slop that tends to make programs bloated and slow. And the Java ecosystem in particular has a tendency to build a ton of ceremony and abstractions that the runtime system has to boil away, but it can only really manage to do so with deep inlining and a lot of optimizations, many of which are speculative.

            > JITs make dynamic languages faster and they are better at doing that than AOTs

            Indeed.

            • theLiminator6 days ago
              Yeah, I'm curious how well JIT works on languages with less dynamism. Perhaps a combination of AOT + JIT on a strong, statically typed language might provide the best of both worlds. Though I suppose PGO kinda does that.
              • titzer5 days ago
                I think about this a bit in the context of Virgil. Virgil's compiler is a whole-program optimizing compiler that does a lot of devirtualization and constant folding. At the higher optimization levels it does a bit of inlining, but I haven't found the huge 10X speedups that you get in, e.g., Java; more like 10-40% performance improvements from inlining.

                I think Virgil could benefit a little from runtime information. For example, it could make better inlining and register allocation decisions, as well as code layout. I have a feeling that Virgil code would benefit a little from guarded inlining, but I don't think full-on speculation would help. In general, a lot of polymorphism can melt away if you can look at the whole program. Couple that also with Virgil's compiler doing monomorphization, which means that using parametric polymorphism costs only code space, and I think the gap is pretty small. I'd expect you could maybe get another 10-20% from these things all together--that's a lot of work to get a small amount.

          • CamouflagedKiwi6 days ago
            Yup, agreed, in the case of dynamic languages it's much clearer and the evidence is a lot more favourable.

            The linked article doesn't help here because the abstract only mentions Javascript in the context of their work to prove their concept, but the body of the paper is clearer that it is discussing JIT vs AOT in the context of Javascript specifically.

            • pizlonator6 days ago
              I think their findings are applicable to lots of languages where the fastest known implementation is JIT based.

              Not all “JIT dominant” languages rely on ICs as part of the JIT’s performance story, but enough of them do that it’s worth studying.

              And JS happens to be the language where ICs have been taken the furthest, in terms of just how many different ways have been investigated and how many person years went into tuning them. So in some sense they’re picking the hardest fight. I think that’s a good thing.

          • vips7L6 days ago
            HotSpot definitely has delivered on that too. It's a super dynamic runtime with reflection and randomly loaded jars even if Java the language is terse.
        • mike_hearn4 days ago
          It has in a bunch of places. C# is widely used in video games, and Java is widely used in financial trading including HFT scenarios where every millisecond matters. And obviously in Android it's used to write large parts of the OS.

          There are places where it hasn't, but that's more due to missing features than JIT vs AOT. Java only got SIMD support recently and it's still incubating, partly because it's blocked on Valhalla value types.

          PGO can make a big difference to C++ codebases, and since a JIT is basically PGO with better deployment/developer ergonomics, it could probably work for C++ too. It's just that the most performance-sensitive C++ codebases like Chrome prefer to take the build-system complexity hit and get the benefits of PGO without the costs, and most C++ codebases just go without.
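
          For reference, a minimal sketch of the SIMD support mentioned above, using the incubating jdk.incubator.vector module (compile and run with --add-modules jdk.incubator.vector):

            import jdk.incubator.vector.FloatVector;
            import jdk.incubator.vector.VectorSpecies;

            class Axpy {
                private static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

                // y[i] += a * x[i], vectorized with the Vector API; the tail loop
                // handles the elements that don't fill a whole vector register.
                static void axpy(float a, float[] x, float[] y) {
                    int i = 0;
                    int bound = SPECIES.loopBound(x.length);
                    FloatVector va = FloatVector.broadcast(SPECIES, a);
                    for (; i < bound; i += SPECIES.length()) {
                        FloatVector vx = FloatVector.fromArray(SPECIES, x, i);
                        FloatVector vy = FloatVector.fromArray(SPECIES, y, i);
                        vx.fma(va, vy).intoArray(y, i);
                    }
                    for (; i < x.length; i++) {
                        y[i] += a * x[i];
                    }
                }
            }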

        • pjmlp6 days ago
          I guess distributed systems and OS GUI frameworks aren't it then.
    • paulddraper6 days ago
      > we're still waiting for even one system to produce the promised advantages

      To be clear, successful JITs do runtime profiling + optimization, to significant benefit.

      But on net, JIT languages are slower.

      It is a valid question to ask whether AOT binaries can selectively use runtime optimizations, making them even faster.

    • twoodfin6 days ago
      Hand-optimized AoT builds with solid profile-based feedback, right?
  • devit6 days ago
    The paper seems to start with the bizarre assumption that AOT compilers need to "catch up" with JIT compilers and in particular that they benefit from inline caches for member lookup.

    But the fact is that AOT compilers are usually for well-designed languages that don't need those inline caches because the designers properly specified a type system that would guarantee a field is always stored at the same offset.

    They might benefit from a similar mechanism to predict branches and indirect branches (i.e. virtual/dynamic dispatch), but they already have compile-time profile-guided optimization and CPU branch predictors at runtime.

    Furthermore, for branches that always go in one direction except for seldom changes, there are also frameworks like the Linux kernel "alternatives" and "static key" mechanisms.

    So the opportunity for making things better with self-modifying code is limited to code where all those mechanisms don't work well, and the overhead of the runtime profiling is worth it.

    Which is probably very rare and not worth bringing in a JIT compiler for.
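
    To make the contrast concrete, here's an illustrative sketch (not from the paper): in a statically typed AOT language a field access is a load at a fixed offset, while the dynamic-language case that inline caches speed up looks more like a per-property map lookup.

      import java.util.HashMap;
      import java.util.Map;

      class FieldVsLookup {
          // Statically typed: the compiler knows where 'x' lives, so this
          // compiles to a single load at a fixed offset, no cache needed.
          static final class Point { double x, y; }

          static double staticAccess(Point p) { return p.x; }

          // Roughly what a dynamic object model does without hidden classes
          // or inline caches: every property access is a lookup by name.
          static double dynamicAccess(Map<String, Object> obj) {
              return (Double) obj.get("x");
          }

          public static void main(String[] args) {
              Point p = new Point();
              p.x = 1.5;
              Map<String, Object> obj = new HashMap<>();
              obj.put("x", 1.5);
              System.out.println(staticAccess(p) + dynamicAccess(obj));
          }
      }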

    • pizlonator6 days ago
      AOTs are behind JITs for dynamic languages. It’s super interesting to study how to make AOTs catch up in that space, so I’m glad that these folks made an effort and reported the results!
      • Sparkyte6 days ago
        The trade-offs between them are meaningful. Also, Rust ain't bad for an AOT.