When I was reading the book "Scala in Depth", it mentioned that the HotSpot compiler has several important features, one of which is "Dynamic De-Optimization":
It is the ability to determine if an optimization did not, in fact, improve performance and undo that optimization, allowing others to be applied.
It seems HotSpot will try all kinds of optimizations and choose the best among them.
But I don't quite understand this. Are the "optimizations" here all provided by HotSpot? I mean, programmers often try to optimize their code by hand with various techniques; will HotSpot handle those as well?
And are there any common optimizations that HotSpot will try?
Oracle provides a (rather concise) summary of the performance techniques applied by the JVM. It explains:
Deoptimization is the process of changing an optimized stack frame to an unoptimized one. With respect to compiled methods, it is also the process of throwing away code with invalid optimistic optimizations, and replacing it by less-optimized, more robust code. A method may in principle be deoptimized dozens of times.
In this summary, the reasons for deoptimization are listed as follows (a runnable sketch of the first case follows the list):
- The compiler may stub out an untaken branch and deoptimize if it is ever taken.
- Similarly for low-level safety checks that have historically never failed.
- If a call site or cast encounters an unexpected type, the compiler deoptimizes.
- If a class is loaded that invalidates an earlier class hierarchy analysis, any affected method activations, in any thread, are forced to a safepoint and deoptimized.
- Such indirect deoptimization is mediated by the dependency system. If the compiler makes an unchecked assumption, it must register a checkable dependency. (E.g., that class Foo has no subclasses, or method Foo.bar has no overriders.)
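The first case is the easiest one to observe in practice. Below is a minimal, hypothetical sketch (the class and method names are mine, not from the summary) of a branch that stays untaken throughout warm-up: HotSpot typically compiles such a method with an "uncommon trap" in place of the rare branch, and the first call that actually takes the branch forces a deoptimization. Running with the diagnostic flag -XX:+PrintCompilation usually shows the compiled version being marked "made not entrant"; the exact behavior depends on the JVM version and compilation thresholds.

```java
// Hypothetical demo (not from the original post). Run with:
//   java -XX:+PrintCompilation UntakenBranch
public class UntakenBranch {

    static int hot(int x, boolean rare) {
        if (rare) {
            return x * 31 + 7;   // never executed during warm-up
        }
        return x + 1;
    }

    public static void main(String[] args) {
        int sink = 0;
        // Warm-up: the branch is never taken, so the JIT may compile
        // hot() with an uncommon trap in place of the 'rare' path.
        for (int i = 0; i < 1_000_000; i++) {
            sink += hot(i, false);
        }
        // First taken branch: hits the trap, deoptimizes, and the
        // method is later recompiled with the branch included.
        sink += hot(42, true);
        System.out.println(sink);
    }
}
```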
Personally, I found this blog entry on micro-benchmarking, which also covers optimization and deoptimization on the HotSpot VM, quite readable. I can also recommend reading through this presentation.
HotSpot's optimizations are of a different kind than what developers do at the Java source-code level, although some of them have the same net effect.
This is a part of the JIT compiler's arsenal:
- method inlining;
- loop unrolling and loop-invariant code motion;
- constant folding and dead-code elimination;
- escape analysis, which enables scalar replacement and lock elision;
- devirtualization of monomorphic and bimorphic call sites;
- intrinsics, which replace well-known methods such as Math.sqrt or System.arraycopy with hand-tuned machine code;
- elimination of redundant null checks and array bounds checks.
The most interesting part is the synergy between some of the optimizations. Inlining is the keystone: once a callee is inlined, escape analysis can often prove that an object allocated inside it never escapes the caller's frame, which in turn enables scalar replacement and lock elision, as the sketch below shows.
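Here is a small, hypothetical sketch of that synergy (the Point class and method names are mine). If distance() gets hot, HotSpot can inline point(), after which escape analysis may prove the Point never escapes and replace the allocation with plain doubles in registers (scalar replacement). Comparing a run with the flag -XX:-DoEscapeAnalysis against a default run is one rough way to observe the effect, though results vary by JVM version.

```java
// Hypothetical sketch of inlining enabling escape analysis.
final class Point {
    final double x, y;
    Point(double x, double y) { this.x = x; this.y = y; }
}

public class EscapeDemo {

    static Point point(double x, double y) {
        return new Point(x, y);            // candidate for inlining
    }

    static double distance(double x, double y) {
        // After point() is inlined, escape analysis can see that p
        // never leaves this frame, so no heap allocation is needed.
        Point p = point(x, y);
        return Math.sqrt(p.x * p.x + p.y * p.y);
    }

    public static void main(String[] args) {
        double sum = 0;
        for (int i = 0; i < 5_000_000; i++) {
            sum += distance(i, i + 1);     // hot loop: the JIT kicks in
        }
        System.out.println(sum);
    }
}
```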
Your quote, however, is to the best of my knowledge wrong. Optimized code doesn't engage in any self-profiling, because that would slow it down. The only condition for deoptimization is the violation of an optimistic assumption under which the code was JIT-compiled. Example: a given method call site receives just one type of object and gets specialized for that type (compiled as a monomorphic call site), but then, later on, an object of a different type appears. Now the optimized code cannot be executed and must be deoptimized.
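Here is a minimal, hypothetical illustration of that scenario (Shape, Circle, and Square are invented names). During warm-up the call site in total() only ever sees Circle, so the JIT may compile it as a monomorphic call, possibly inlining Circle.area(); the first Square that arrives violates that assumption and forces a deoptimization. With -XX:+PrintCompilation you may see the affected compiled method marked "made not entrant", again depending on JVM version and thresholds.

```java
// Hypothetical demo of a monomorphic call site being invalidated.
interface Shape { double area(); }

final class Circle implements Shape {
    public double area() { return Math.PI; }
}

final class Square implements Shape {
    public double area() { return 1.0; }
}

public class MonomorphicDemo {

    static double total(Shape s) {
        return s.area();                 // the call site in question
    }

    public static void main(String[] args) {
        double sum = 0;
        Shape circle = new Circle();
        // Warm-up: only Circle is ever seen here, so the call site
        // can be treated as monomorphic and devirtualized.
        for (int i = 0; i < 1_000_000; i++) {
            sum += total(circle);
        }
        // A second receiver type violates the optimistic assumption:
        // the optimized code is deoptimized and later recompiled.
        sum += total(new Square());
        System.out.println(sum);
    }
}
```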