I'm running the following Java code on a laptop with a 2.7 GHz Intel Core i7. I intended it to measure how long it takes to finish a loop with 2^32 iterations, which I expected to be roughly 1.48 seconds (2^32 ≈ 4×10^9 iterations / 2.7×10^9 cycles per second ≈ 1.48 s, assuming one iteration per clock cycle).
But it actually takes only 2 milliseconds instead of 1.48 s. I'm wondering whether this is the result of some JVM optimization underneath?
public static void main(String[] args)
{
    long start = System.nanoTime();
    for (int i = Integer.MIN_VALUE; i < Integer.MAX_VALUE; i++) {
    }
    long finish = System.nanoTime();
    long d = (finish - start) / 1000000; // convert nanoseconds to milliseconds
    System.out.println("Used " + d);
}
One of two things is going on here:
1. The compiler (javac) realized that the loop is redundant and does nothing, so it optimized it away.
2. The JIT (just-in-time compiler) realized that the loop is redundant and does nothing, so it optimized it away.
Modern compilers are very intelligent; they can see when code is useless. Try putting an empty loop into GodBolt and looking at the output, then turn on -O2 optimizations; you will see that the output is something along the lines of
main():
xor eax, eax
ret
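The JVM's JIT performs the same kind of dead-code elimination. One way to see the difference (a minimal sketch; the class name and the sum accumulator are my additions) is to give the loop a result that the program actually uses, so the compiler can no longer prove the work is dead:

class LoopWithSideEffect {
    public static void main(String[] args) {
        long start = System.nanoTime();
        long sum = 0;
        for (int i = Integer.MIN_VALUE; i < Integer.MAX_VALUE; i++) {
            sum += i; // the loop now produces a value that is used below
        }
        long finish = System.nanoTime();
        // Printing sum makes the loop's work observable, so the JIT
        // cannot simply discard the loop as dead code.
        System.out.println("Used " + (finish - start) / 1000000 + " ms, sum = " + sum);
    }
}

Note that this is only an illustration: a sufficiently clever compiler could still replace the summation with a closed-form expression.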
I would like to clarify something: in Java, most of the optimizations are done by the JIT at run time. In some other languages (like C/C++), most of the optimizations are done ahead of time by the compiler.
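If you want to watch the JIT make these decisions, HotSpot can log its compilation activity (assuming a HotSpot JVM; the exact output format varies between versions, and YourClass stands in for whatever class you compiled):

$ java -XX:+PrintCompilation YourClass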
It looks like the loop was optimized away by the JIT compiler. When I turn it off (-Djava.compiler=NONE), the code runs much slower:
$ javac MyClass.java
$ java MyClass
Used 4
$ java -Djava.compiler=NONE MyClass
Used 40409
I put the OP's code inside a class named MyClass.
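For timing numbers you can actually trust on the JVM, the usual tool is JMH, which handles warm-up and gives each benchmark a value sink so the JIT cannot eliminate the measured work. A minimal sketch (the class and method names are mine, and it assumes the JMH dependencies are on the classpath):

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.infra.Blackhole;

public class LoopBench {
    @Benchmark
    public void countUp(Blackhole bh) {
        long sum = 0;
        for (int i = Integer.MIN_VALUE; i < Integer.MAX_VALUE; i++) {
            sum += i;
        }
        // Handing the result to JMH's Blackhole prevents the JIT
        // from treating the loop as dead code.
        bh.consume(sum);
    }
}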