Compiled code such as C consumes little memory. Interpreted code such as Python consumes more memory, which is understandable.
With JIT, a program is (selectively) compiled into machine code at run time. So shouldn't the memory consumption of a JIT'ed program be somewhere between that of a compiled and an interpreted program? Instead, a JIT'ed program (such as one run under PyPy) consumes several times more memory than the equivalent interpreted program (such as one run under CPython). Why?
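One way to see the difference for yourself is to run the same script under CPython and under PyPy and compare peak memory. Below is a minimal sketch using the standard `resource` module (Unix only); the loop body is just an arbitrary workload a tracing JIT would consider hot.

```python
# Rough way to observe an interpreter's own memory footprint.
# Run this same file under CPython and under PyPy and compare the output.
import resource

def work():
    # A simple hot loop that a tracing JIT would compile to machine code.
    total = 0
    for i in range(10_000_000):
        total += i
    return total

result = work()
# ru_maxrss is reported in kilobytes on Linux (bytes on macOS).
peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print(f"result={result} peak_rss={peak}")
```

Under PyPy, the reported peak typically comes out noticeably higher, reflecting the JIT machinery discussed in the answers below.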
JIT-compiled code runs directly on the hardware, whereas interpreted code must be repeatedly re-dispatched by the interpreter. Once a piece of code has been compiled, the interpreter no longer has to reprocess its bytecode on every execution.
A JIT compiler can be faster because the machine code is being generated on the exact machine that it will also execute on. This means that the JIT has the best possible information available to it to emit optimized code.
JIT code generally offers far better performance than interpreters. In addition, it can in some cases offer better performance than static compilation, as many optimizations are only feasible at run-time: The compilation can be optimized to the targeted CPU and the operating system model where the application runs.
A (static) compiler translates the whole program to executable code ahead of time. A JIT compiler performs a similar task, but it is invoked internally by the VM (e.g. the JVM) to translate only the hotspots in the bytecode.
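The interpreter-dispatch overhead described above can be made visible with a rough timing comparison: the same sum computed by a Python-level loop (every iteration re-dispatched by the interpreter) versus the built-in `sum()`, which runs as already-compiled C code. Under PyPy the gap shrinks, because the loop itself gets JIT-compiled.

```python
# Rough illustration of interpreter dispatch overhead (timings will
# vary by machine; this is a sketch, not a rigorous benchmark).
import timeit

loop = timeit.timeit(
    "total = 0\nfor i in range(10000): total += i",
    number=200,
)
builtin = timeit.timeit("sum(range(10000))", number=200)
print(f"python loop: {loop:.4f}s  builtin sum: {builtin:.4f}s")
```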
Tracing JIT compilers use quite a bit more memory because they must keep not only the bytecode for the VM but also the directly executable machine code. That is only half the story, however.
Most JITs also keep a lot of metadata about the bytecode (and even the machine code) to decide what needs to be JIT'ed and what can be left alone. Tracing JITs (such as LuaJIT) also create trace snapshots, which are used to fine-tune code at run time, performing optimizations like loop unrolling or branch reordering.
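To get a feel for the first of those costs: the snippet below (plain CPython, not a JIT itself) shows the bytecode a VM already holds in memory for a function. A JIT must keep this *plus* the generated machine code, plus profiling metadata such as call counts and type information.

```python
# Inspect the bytecode a Python VM keeps in memory for one function.
# A JIT stores this bytecode AND the emitted machine code AND metadata.
import dis

def hot_loop(n):
    total = 0
    for i in range(n):
        total += i
    return total

bytecode_size = len(hot_loop.__code__.co_code)
print(f"bytecode: {bytecode_size} bytes")
dis.dis(hot_loop)  # human-readable listing of what the JIT would profile
```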
Some also keep caches of commonly used code segments, or fast lookup buffers, to speed up generation of JIT'ed code (LuaJIT does this via DynAsm; done correctly, as in DynAsm's case, this can actually help reduce memory usage).
Memory usage depends greatly on the JIT engine employed and on the nature of the language it compiles (strongly vs. weakly typed). Some JITs use advanced techniques such as SSA-based register allocation and variable liveness analysis; these optimizations consume memory as well, along with more common ones like hoisting loop-invariant variables.
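The per-function bookkeeping mentioned above can be sketched as a toy in pure Python. Everything here is illustrative (the decorator name, the threshold, the fake "machine code" cache are all made up, not taken from any real JIT), but it shows the shape of the metadata a method JIT carries for every function in addition to its bytecode.

```python
# Toy sketch of JIT bookkeeping: a call counter per function (metadata)
# that triggers "compilation" once the function is hot, after which a
# "machine code" entry is cached (extra memory on top of the bytecode).
HOT_THRESHOLD = 3  # illustrative; real JITs use tuned heuristics

call_counts = {}   # metadata: one hotness counter per function
compiled = {}      # cache standing in for generated machine code

def maybe_jit(fn):
    def wrapper(*args):
        call_counts[fn.__name__] = call_counts.get(fn.__name__, 0) + 1
        if call_counts[fn.__name__] >= HOT_THRESHOLD and fn.__name__ not in compiled:
            # A real JIT would emit native code here; we just record it.
            compiled[fn.__name__] = f"<native code for {fn.__name__}>"
        return fn(*args)
    return wrapper

@maybe_jit
def square(x):
    return x * x

for i in range(5):
    square(i)
print(call_counts, sorted(compiled))
```

Both dictionaries persist for the life of the program, which is one reason a JIT'ed process keeps growing past what a plain interpreter would need.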