The canonical definition of JIT is "compilation during execution of a program". Usually a program is interpreted first and then switches to compiled code in the middle of execution. This is not what this article does.
What this article does is sometimes called on-the-fly AOT, or just on-the-fly compilation. I'd prefer not to overload the term "JIT".
I agree what they have isn't JIT compilation, but not for that reason. Tiered execution was never a central part of JIT compilation either; it's a comparatively recent invention.
The reason what they describe isn't JIT compilation is IMO fairly boring: it's not compiling the input program in any meaningful way, but simply writing hard-coded logic into executable memory that it already knows the program intended to perform. Sure there's a small degree of freedom based on the particular arithmetic operations being mentioned, but that's... very little. When your compiler already knows the high-level source code logic before it's even read the source code, it's... not a compiler. It's just a dynamic code emitter.
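To make the distinction concrete, here is a minimal sketch of that kind of dynamic code emitter in Python (my own illustration, assuming Linux/x86-64 and a system that still permits RWX mappings): the machine code is hard-coded up front, and nothing is translated from any input.

    import ctypes, mmap

    # Hard-coded x86-64 SysV machine code for: int f(int x) { return x + 1; }
    code = bytes([
        0x8D, 0x47, 0x01,  # lea eax, [rdi+1]
        0xC3,              # ret
    ])

    # Map a page that is writable and executable, then copy the bytes in.
    # (Hardened systems forbid RWX; a real emitter would map RW and
    # mprotect to RX before calling.)
    buf = mmap.mmap(-1, mmap.PAGESIZE,
                    prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
    buf.write(code)

    # Jump into the page through a ctypes function pointer.
    f = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int)(
        ctypes.addressof(ctypes.c_char.from_buffer(buf)))
    print(f(41))  # 42

No source program was read or analyzed here, which is the point: it's code emission, not compilation.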
As to the actual difference between JIT and AOT... it may just come down to accounting. That is, whether you can always exclude the compilation time/cost from the overall program execution time/cost or not. If so, you're compiling ahead of (execution) time. If not, you're compiling during execution time.
Well, this includes what I refer to as "on-the-fly" AOT, like SBCL, CCL, Chez Scheme... Even ECL can be configured to work this way. As I mentioned in another comment, people in those circles do not refer to these as "JIT" at all, instead saying "I wish my implementation was JIT instead of on-the-fly AOT"!
The program reads the logic from stdin and translates it into machine instructions. I can agree that there is not a lot of freedom in what can be done, but I think that just means the source language is not Turing-complete. I don't believe a compiler needs to deal with a Turing-complete language to claim the title "JIT compiler".
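As a sketch of what such a translation can look like (my own toy, not the article's actual code; division is omitted since idiv needs extra setup): each stdin token such as '+3' or '*2' maps to one x86-64 instruction operating on eax.

    import struct

    # Toy translator: '+3', '-2', '*5', ... each become one instruction;
    # the running value lives in eax, the input arrives in edi (SysV ABI).
    def compile_ops(source):
        out = bytearray(b'\x89\xf8')               # mov eax, edi
        for tok in source.split():
            imm = struct.pack('<i', int(tok[1:]))  # imm32, little-endian
            if tok[0] == '+':
                out += b'\x05' + imm               # add eax, imm32
            elif tok[0] == '-':
                out += b'\x2d' + imm               # sub eax, imm32
            elif tok[0] == '*':
                out += b'\x69\xc0' + imm           # imul eax, eax, imm32
            else:
                raise ValueError(f'unsupported op {tok!r}')
        out += b'\xc3'                             # ret
        return bytes(out)

There is a real (if tiny) source language and a real translation step here; whether that is enough to call it a compiler is exactly the disagreement upthread.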
"Not Turing-complete" is quite the understatement.
A "compiler" is software that translates computer code from one programming language into another language. Not just any software that reads input and produces output.
The input language here is... not even a programming language to begin with. Literally all it can express is linear functions. My fixed-function calculator is more powerful than that! If this is a programming language then I guess everyone who ever typed on a calculator is a programmer too.
These terms are not related to the complexity of the problem. The first compilers could only translate formulas, hence FORTRAN ("formula translation").
Of course, there are edge cases like embedding libtcc, but I think it's a reasonable definition.
Like the guesses above, I can understand AOT compilation being difficult in conjunction with certain use cases; however, I cannot think of a language that, by its very definition, would be less amenable to AOT compilation.
AOT situations where a lot of context is missing:
• Loosely typed languages. Code can be very general, much more general than how it is actually used in any given situation, but without knowing the full situation, all that generality must be compiled (see the sketch after this list).
• Incremental AOT compilation. If modules have been compiled separately, useful context wasn't available during optimization.
• Code whose structure is very sensitive to data statistics or other conditions known only at runtime. This is the prime advantage of JIT over AOT, unless the AOT compiler works in conjunction with representative data and a profiler.
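A tiny illustration of the first point (my example, not from the parent): in a loosely typed language even a one-liner is radically generic, and an AOT compiler must keep all of that generality alive, while a JIT can specialize for the types it actually observes.

    def add(a, b):
        # In Python this must work for ints, floats, strings, lists, and
        # any user class defining __add__/__radd__; an AOT compiler has to
        # emit or dispatch to code for every case.
        return a + b

    add(1, 2)      # a JIT that only ever sees ints at a call site can
    add('a', 'b')  # emit a machine add and deoptimize if that changes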
Those are all cases where JIT has advantages.
A language where JIT is optimal is, by definition, less amenable to AOT compilation.
EDIT: at least GHC seems to be a traditional AOT compiler.
Discounting books, many other well-written articles on JIT have been shared on HN over the years [0][1][2]; the one I particularly liked, as it introduces the trinity (compiler, interpreter, JIT) in a concise way: https://nickdesaulniers.github.io/blog/2015/05/25/interprete... / https://archive.vn/HaFlQ (2015).
[0] How to JIT - an introduction, https://eli.thegreenplace.net/2013/11/05/how-to-jit-an-intro... (2013).
[1] Bytecode compilers and interpreters, https://bernsteinbear.com/blog/bytecode-interpreters/ (2019).
[2] Let's Build a Simple Interpreter, https://ruslanspivak.com/lsbasi-part1/ (2015).
But I believe sysconf(_SC_PAGESIZE) will always be 4KB, because the "may" is at the user's discretion, not the system's. Except on Cosmopolitan, where it will always be 64KB, because of Windows NT for Alpha (yes, seriously).
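For reference, the value can be queried at runtime rather than assumed (Python on a POSIX system):

    import mmap, os

    # Ask the OS for its page size instead of hard-coding 4096.
    print(os.sysconf('SC_PAGESIZE'))  # same value as mmap.PAGESIZE
    print(mmap.PAGESIZE)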
    from functools import reduce

    def recurrence(ops, a0, n):
        # Apply one textual op such as '+5' or '*3' to the number x.
        def transform(x, op):
            return eval(repr(x) + op)
        ops = ops.split()
        # m: net multiplicative factor of one pass (slope of the affine map).
        m = reduce(transform, [op for op in ops if op[0] in ('*', '/')], 1)
        # b: one pass applied to 0, i.e. the constant term of the affine map.
        b = reduce(transform, ops, 0)
        for k in range(n + 1):
            # Closed form of a_{k+1} = m*a_k + b; the m == 1 branch avoids
            # dividing by zero when there are no '*' or '/' ops.
            geom = k if m == 1 else (m ** k - 1) / (m - 1)
            print(f'Term {k}:', a0 * m ** k + b * geom)
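For example, recurrence('*2 +1', 1, 4) yields m = 2 and b = 1 and prints the terms 1, 3, 7, 15, 31 of a_{k+1} = 2*a_k + 1 (as floats, since the closed form uses true division).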
> This is really only interesting if a particular (potentially really large) term of the sequence is desired, as opposed to all terms up to a point. The key observation is that any sequence of the given set of operations reduces to a linear recurrence which has the given solution.
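For completeness, the closed form drops out of unrolling one pass a_{k+1} = m*a_k + b:

    a_k = m^k * a_0 + b * (m^(k-1) + ... + m + 1)
        = m^k * a_0 + b * (m^k - 1) / (m - 1)    (for m != 1)

with the geometric sum collapsing to k*b when m = 1, which is what the code above computes.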