Interpreters do a lot of extra work, so it is understandable that they end up significantly slower than native machine code. But languages such as C# or Java have JIT compilers, which supposedly compile to platform native machine code.
And yet, according to benchmarks that seem legitimate enough, in most cases they are still 2-4x slower than C/C++. Of course, I mean compared to equally optimized C/C++ code. I am well aware of the optimization benefits of JIT compilation and its ability to produce code that is faster than poorly optimized C/C++.
And after all that noise about how good Java memory allocation is, why such horrendous memory usage? Across that particular benchmark suite, 2x to 50x (on average about 30x) more memory is used, which is nothing to sneeze at...
NOTE that I don't want to start a war; I am asking about the technical details that explain those performance and efficiency figures.
JIT compilers translate continuously, as interpreters do, but caching of compiled code minimizes the lag on future executions of the same code during a given run. Since only part of the program is compiled, there is significantly less delay than if the entire program were compiled before execution.
A JIT compiler only looks at the bytecode once, and compiles it to native code which can then be understood directly by the computer - no further translation required. The translation takes time, so if you only have to do it once, it's more efficient.
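To make the "translate once, then reuse" idea concrete, here is a toy sketch in Java. It is not how HotSpot actually works internally; the cache, the method name, and the "compiled" lambda are all invented for illustration.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.IntUnaryOperator;

// Toy model of "compile once, cache, reuse": the expensive translation step
// happens on the first request only; later requests hit the cache.
public class JitCacheSketch {
    private final Map<String, IntUnaryOperator> codeCache = new HashMap<>();

    IntUnaryOperator getOrCompile(String methodName) {
        return codeCache.computeIfAbsent(methodName, name -> {
            System.out.println("compiling " + name + " (slow, happens once per run)");
            return x -> x * x; // stand-in for the emitted native code
        });
    }

    public static void main(String[] args) {
        JitCacheSketch vm = new JitCacheSketch();
        System.out.println(vm.getOrCompile("square").applyAsInt(3)); // compiles, then runs
        System.out.println(vm.getOrCompile("square").applyAsInt(4)); // cache hit, just runs
    }
}
```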
The real reason that JIT is used is that it's more flexible and portable without being too slow (as an interpreter is). A JIT allows you to run arbitrary bytecode instead of compiling directly to machine code, which gives more portability across different platforms.
A JIT compiler can be faster because the machine code is generated on the exact machine it will execute on. This means the JIT has the best possible information available for emitting optimized code.
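Because the compilation happens on the machine where the program runs, you can actually watch HotSpot decide to compile a hot method during execution. A minimal sketch (the class name and loop counts are arbitrary; the exact compilation thresholds and output format depend on the JVM version):

```java
// Run with:  java -XX:+PrintCompilation HotLoop
// After enough calls, HotSpot typically prints a line when sumTo() is
// compiled to native code; subsequent calls reuse that compiled version.
public class HotLoop {
    static long sumTo(int n) {
        long s = 0;
        for (int i = 0; i < n; i++) s += i;
        return s;
    }

    public static void main(String[] args) {
        long total = 0;
        for (int iter = 0; iter < 20_000; iter++) {
            total += sumTo(1_000); // repeated calls make the method "hot"
        }
        System.out.println(total);
    }
}
```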
Some reasons for the differences:
JIT compilers mostly compile quickly and skip some optimizations that would take longer to find.
VMs often enforce safety, and this slows execution. E.g. array access is always bounds-checked in .NET unless the index is guaranteed to be within the correct range.
Using SSE (great for performance where applicable) is easy from C++ and hard from current VMs.
Performance gets more priority over other aspects in C++ than it does in VMs.
VMs often keep unused memory for a while before returning it to the OS, so they appear to 'use' more memory.
Some VMs box value types like int/ulong into objects, adding per-object memory overhead (see the sketch after this list).
Some VMs aggressively auto-align data structures, wasting memory (in exchange for performance gains).
Some VMs implement a boolean as an int (4 bytes), showing little focus on memory conservation.
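To illustrate the boxing point from the list above, here is a minimal Java sketch (Java rather than .NET; the class name and numbers are arbitrary, and this is not a rigorous benchmark). Summing into a boxed Long allocates a fresh object on almost every iteration, while a primitive long needs no heap allocation at all:

```java
public class BoxingOverhead {
    public static void main(String[] args) {
        int n = 10_000_000;

        long t0 = System.nanoTime();
        Long boxedSum = 0L;                  // boxed: each += unboxes, adds, re-boxes
        for (int i = 0; i < n; i++) boxedSum += i;
        long t1 = System.nanoTime();

        long primitiveSum = 0L;              // primitive: stays off the heap entirely
        for (int i = 0; i < n; i++) primitiveSum += i;
        long t2 = System.nanoTime();

        System.out.println("boxed:     " + boxedSum + " in " + (t1 - t0) / 1_000_000 + " ms");
        System.out.println("primitive: " + primitiveSum + " in " + (t2 - t1) / 1_000_000 + " ms");
    }
}
```

On a typical JVM the boxed loop is noticeably slower and produces millions of short-lived Long objects for the garbage collector to clean up, which is exactly the kind of overhead behind the memory figures in the question.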