Possible Duplicate:
JIT compiler vs offline compilers
So until a few minutes ago I didn't really understand what the difference between a JIT compiler and an interpreter is. Browsing through SO, I found the answer, which brought up the question in the title. As far as I've found, JIT compilers have the benefit of being able to use the specific processor they're running on and can thus produce better-optimized programs. Could somebody please give me a comparison of the pros and cons of each?
A JIT has access to dynamic runtime information, whereas a standard compiler doesn't, so it can make better optimizations such as inlining functions that are used frequently. This is in contrast to a traditional compiler, which compiles all the code to machine language before the program is first run.
If the behavior of the application changes while it is running, the runtime environment can recompile the code. Some of the disadvantages include startup delays and the overhead of compilation during runtime. To limit the overhead, many JIT compilers only compile the code paths that are frequently used.
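To make the inlining point concrete, here is a minimal Java sketch (the class and method names are my own, purely for illustration): square() is a tiny method called from a hot loop, which is exactly the kind of call a JIT such as HotSpot can inline after watching the program run, whereas an ahead-of-time compiler must make that decision without any runtime profile.

    // A tiny program whose hot loop gives a JIT something to optimize.
    // After enough iterations, a JIT will typically compile square() to
    // native code and inline it into the loop.
    public class HotLoop {

        // Small, frequently called method: a classic inlining candidate.
        static long square(long x) {
            return x * x;
        }

        public static void main(String[] args) {
            long sum = 0;
            for (long i = 0; i < 100_000_000L; i++) {
                sum += square(i);        // called often enough to become "hot"
            }
            System.out.println(sum);     // use the result so the loop isn't dead code
        }
    }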
Advantages of just-in-time compilation:
- JIT compilers can reduce memory usage.
- JIT compilers run after a program starts.
- Code optimization can be done while the code is running.
- Page faults can be reduced.
Traditional (ahead-of-time) compilers also have disadvantages:
- The source code must be re-compiled every time the programmer changes the program.
- Source code compiled on one platform will not run on another: the machine code is specific to the processor's architecture.
Difference between a JIT compiler and an interpreter
To keep it simple, let's just say that an interpreter will run the bytecode (the intermediate code/language). When the VM/interpreter decides it is better to do so, the JIT compilation mechanism will translate that same bytecode into native code targeted at the hardware in question, with a focus on the optimizations requested.
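To sketch that decision in code (an illustrative toy in Java, not how any real VM is implemented): the VM keeps an invocation counter for a piece of bytecode, interprets it until the counter crosses a threshold, and from then on runs the compiled version it produced.

    import java.util.function.LongUnaryOperator;

    // Toy model of an interpreter/JIT hand-off. interpret() and jitCompile()
    // are stand-ins for walking bytecode and emitting native code.
    public class ToyVm {
        private static final int COMPILE_THRESHOLD = 10_000;  // hypothetical threshold

        private int invocationCount = 0;
        private LongUnaryOperator compiledCode = null;         // null until "JIT compiled"

        long run(long argument) {
            if (compiledCode != null) {
                return compiledCode.applyAsLong(argument);     // fast path: compiled code
            }
            if (++invocationCount >= COMPILE_THRESHOLD) {
                compiledCode = jitCompile();                   // pay the compilation cost once
            }
            return interpret(argument);                        // slow path: interpretation
        }

        private long interpret(long argument) {
            return argument * argument;                        // pretend this walks bytecode
        }

        private LongUnaryOperator jitCompile() {
            return x -> x * x;                                 // pretend this emits native code
        }

        public static void main(String[] args) {
            ToyVm vm = new ToyVm();
            long total = 0;
            for (long i = 0; i < 20_000; i++) {
                total += vm.run(i);
            }
            System.out.println(total);
        }
    }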
So basically a JIT might produce a faster executable but take way longer to compile?
I think what you are missing is that JIT compilation happens at runtime, not at compile time (unlike an "offline" compiler).
Compiling code is not free; it takes time too. If the VM invests time compiling a piece of code and then runs it only a few times, it might not have made a good trade. So the VM still has to decide what to treat as a "hot spot" and JIT-compile it.
Allow me to give some examples from the Java virtual machine (JVM):
The JVM accepts switches that let you define the threshold after which code will be JIT compiled, for example -XX:CompileThreshold=10000.
To illustrate the cost of JIT compilation time, suppose you set that threshold to 20 and have a piece of code that needs to run 21 times. After it has run 20 times, the VM invests some time in JIT compiling it. You now have native code from the JIT compilation, but it will only run one more time (the 21st), which may not bring enough of a performance boost to make up for the JIT work.
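If you want to watch this happen yourself, here is a rough experiment (HotSpot-specific flags; with tiered compilation the VM does not honour the threshold exactly, so treat the output as illustrative rather than a guarantee):

    // Run with, for example:
    //   java -XX:CompileThreshold=20 -XX:+PrintCompilation ThresholdDemo
    // -XX:+PrintCompilation logs each method as the JIT compiles it, so you can
    // see roughly when work() gets compiled relative to its 21 calls.
    public class ThresholdDemo {

        static long work(long x) {
            long result = 0;
            for (long i = 0; i < 1_000; i++) {
                result += (x + i) % 7;
            }
            return result;
        }

        public static void main(String[] args) {
            long total = 0;
            for (int call = 1; call <= 21; call++) {   // the "runs 21 times" scenario
                total += work(call);
            }
            System.out.println(total);
        }
    }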
I hope this illustrates it.
Here is a JVM switch that shows the time spent on JIT compilation: -XX:+CITime ("Prints time spent in JIT Compiler"; the option is disabled, -XX:-CITime, by default).
Side Note: I don't think it's a "big deal", just something I wanted to point out since you brought up the question.
JIT compilation doesn't inherently mean it is easy to disassemble. That is more implementation-dependent, such as with Java binaries. Note, however, that JIT can be applied to any kind of executable, whether it is Java, Python or even an already-compiled binary from C++ or similar. (IIRC, the Dynamo project involved re-compiling such binaries on-the-fly to increase performance.)
The trade-off for JIT compilation is that while the process's goal is to increase runtime performance, the process actually occurs at runtime as well, and so it incurs overhead while analyzing, compiling, and validating code fragments. If the implementation is inefficient or not enough optimizations occur, then it actually produces a performance degradation.
The other trade-off is that in some cases the JIT compilation can be very wasteful. For example, consider a self-modifying executable. If you compile a fragment of code, and then the executable modifies that fragment, you have to throw away the compiled fragment and then re-analyze that segment to determine if it is worth re-compiling. If this happens frequently, there is a significant performance hit.
Finally, there is a hit in memory consumption, as compiled code fragments must reside in memory in order to be effective. This can make it impractical for devices with limited amounts of memory, or else extremely difficult to implement well.