I have read and heard a lot about how JIT compilers can make optimizations that are impossible for native-code compilers, and that these optimizations can give huge performance boosts.
So I was wondering: what are the most important optimizations that, say, the .NET Framework or the JVM perform that a native compiler cannot? And how do these give huge performance boosts?
I don't know whether I've phrased this question properly; I guess I may have a lot of explaining to do in the comments.
A JIT compiler can be faster because the machine code is generated on the exact machine it will execute on. This means that the JIT has the best possible information available to it to emit optimized code.
Just-in-time compilation is a method for improving the performance of interpreted programs. During execution the program may be compiled into native code to improve its performance. It is also known as dynamic compilation. Dynamic compilation has some advantages over static compilation.
A JIT compiler only looks at the bytecode once¹, and compiles it to native code which can then be understood directly by the computer - no further translation required. The translation takes time, so if you can do it just the once, it's more efficient.
Compilers from bytecode to machine code are easier to write, because the portable bytecode compiler has already done much of the work. JIT code generally offers far better performance than interpreters.
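To make the "translate once, then run native code" point concrete, here is a rough, illustrative Java sketch: the first call to a method runs cold (interpreted), and after enough calls the JVM typically compiles it to native code, so later calls run faster. The method name and sizes here are made up for the demo, and the timings are only illustrative; they vary by machine and VM flags.

```java
// Rough sketch: time the same loop before and after JIT warmup.
// Timings are illustrative only; results vary by machine and VM settings.
public class WarmupDemo {
    static long sum(int n) {
        long s = 0;
        for (int i = 0; i < n; i++) s += i;
        return s;
    }

    public static void main(String[] args) {
        long t1 = System.nanoTime();
        sum(100_000);                               // first call: cold, likely interpreted
        long cold = System.nanoTime() - t1;

        for (int i = 0; i < 1_000; i++) sum(100_000); // give the JIT a chance to compile it

        long t2 = System.nanoTime();
        long result = sum(100_000);                 // now likely running as native code
        long warm = System.nanoTime() - t2;

        System.out.println("cold: " + cold + " ns, warm: " + warm + " ns");
        System.out.println("sum = " + result);      // sum = 4999950000
    }
}
```

On a typical HotSpot JVM the warm timing ends up much smaller than the cold one, though nothing guarantees exactly when compilation happens.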
I can give an example of one optimization. Suppose you have a function somewhere. (Think of this as C-like pseudocode.)
void function(MyClass x)
{
    x.doSomething();
    for (obj in x.getWidgets())
        obj.doSomethingElse();
}
This is suitably vague. Suppose, however, that MyConcreteClass is the only concrete class in your entire image that inherits from MyClass. In that case, the JIT can inline doSomething and getWidgets. If it knows the type returned from getWidgets, then maybe it can inline doSomethingElse as well.
Assuming here that MyClass is not a final/sealed class, an ahead-of-time compiler cannot inline these methods (it wouldn't know which implementations to inline); for all the compiler knows, there are a hundred different implementations of MyClass.
However, a JIT can optimize for the current state of the image. It can install a check at the beginning of each call to function that makes sure x is a MyConcreteClass, and then run the inlined version. If you dynamically load a module with another concrete class inheriting from MyClass, then the check will fail and the JIT will recompile the function to be generic.
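A runnable Java sketch of that guard may help. This is not what a real JIT emits (a real JIT inserts the check into generated machine code and deoptimizes on failure); it only spells out the logic in source form, using the class names from the example above (the Widget class and the printed messages are invented for the demo).

```java
import java.util.List;

class Widget {
    void doSomethingElse() { System.out.println("widget work"); }
}

class MyClass {
    void doSomething() { System.out.println("generic"); }
    List<Widget> getWidgets() { return List.of(new Widget()); }
}

class MyConcreteClass extends MyClass {
    @Override void doSomething() { System.out.println("concrete"); }
}

public class GuardedInline {
    static void function(MyClass x) {
        // The guard a JIT conceptually installs before the speculatively
        // inlined version of the function:
        if (x.getClass() == MyConcreteClass.class) {
            // Fast path: with the type known, the JIT can treat these as
            // direct calls and inline the bodies of doSomething(),
            // getWidgets(), and possibly doSomethingElse().
            x.doSomething();
            for (Widget obj : x.getWidgets())
                obj.doSomethingElse();
        } else {
            // Guard failed: fall back to ordinary virtual dispatch.
            // (A real JIT would deoptimize and recompile here.)
            x.doSomething();
            for (Widget obj : x.getWidgets())
                obj.doSomethingElse();
        }
    }

    public static void main(String[] args) {
        function(new MyConcreteClass()); // prints "concrete" then "widget work"
    }
}
```

Note that both branches do the same thing at the source level; the difference only matters in the generated machine code, where the fast path has no virtual dispatch at all.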
These are the only kinds of optimizations available to JIT compilers that aren't available to ahead-of-time compilers: optimizations that exploit information about the dynamic state of the program, and that recompile the program when that state changes.
Note that some ahead-of-time compilers are capable of doing tricks typically ascribed to JIT compilers, for example interprocedural optimization (or global optimization) and profile-driven optimization. GCC and Clang can use both of those tricks, but most people leave them off since they require extra (human) work to turn on. JIT compilers can leave those options enabled without bothering end users.
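For reference, here is roughly what turning those tricks on looks like with GCC; the source file, program name, and training input are hypothetical, and real projects usually wire these flags into their build system.

```shell
# Link-time (whole-program) optimization plus profile-guided optimization
# with GCC. File names here are made up for illustration.
gcc -O2 -flto -fprofile-generate app.c -o app   # build an instrumented binary
./app < training-input.txt                      # run it to collect a profile
gcc -O2 -flto -fprofile-use app.c -o app        # rebuild using that profile
```

The extra (human) work is exactly this: building twice and finding a representative training run, which is why many projects skip it.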
Huge performance boost: I haven't heard of any huge performance boost in general from JIT compilers. C and C++ programs are still fast without JIT. And many people still prefer Fortran for numerical work (with good reason).
Footnote: I'm not sure about your terminology. Aren't most JITs also native code compilers? The other types of compiler besides JIT I would call "ahead of time" or AOT, or perhaps "static". (And then there's the incredibly fuzzy line between "compiled" and "interpreted".)