When running a Java 1.6 (1.6.0_03-b05) app I've added the -XX:+PrintCompilation flag. In the output for some methods, in particular some of those that I know are getting called a lot, I see the text "made not entrant" and "made zombie".
What do these mean? My best guess is that it's a decompilation step before recompiling either that method or a dependency with greater optimisation. Is that true? Why "zombie" and "entrant"?
Example, with quite a bit of time between some of these lines:
[... near the beginning]
42 jsr166y.LinkedTransferQueue::xfer (294 bytes)
[... much later]
42 made not entrant jsr166y.LinkedTransferQueue::xfer (294 bytes)
--- n sun.misc.Unsafe::compareAndSwapObject
170 jsr166y.LinkedTransferQueue::xfer (294 bytes)
170 made not entrant jsr166y.LinkedTransferQueue::xfer (294 bytes)
4% jsr166y.LinkedTransferQueue::xfer @ 29 (294 bytes)
171 jsr166y.LinkedTransferQueue::xfer (294 bytes)
[... even later]
42 made zombie jsr166y.LinkedTransferQueue::xfer (294 bytes)
170 made zombie jsr166y.LinkedTransferQueue::xfer (294 bytes)
171 made not entrant jsr166y.LinkedTransferQueue::xfer (294 bytes)
172 jsr166y.LinkedTransferQueue::xfer (294 bytes)
[... no further logs]
I've pulled together some info on this on my blog. A Cliff Click comment I found says:
Zombie methods are methods whose code has been made invalid by class loading. Generally the server compiler makes aggressive inlining decisions of non-final methods. As long as the inlined method is never overridden the code is correct. When a subclass is loaded and the method overridden, the compiled code is broken for all future calls to it. The code gets declared "not entrant" (no future callers to the broken code), but sometimes existing callers can keep using the code. In the case of inlining, that's not good enough; existing callers' stack frames are "deoptimized" when they return to the code from nested calls (or just if they are running in the code). When no more stack frames hold PC's into the broken code it's declared a "zombie" - ready for removal once the GC gets around to it.
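To make that concrete, here is a minimal sketch of the scenario he describes. The class name, method names and iteration counts are all invented for illustration, and whether (and when) you actually see the "made not entrant" and "made zombie" lines depends entirely on the compiler's inlining and sweeping decisions:
// DeoptDemo.java -- run with: java -server -XX:+PrintCompilation DeoptDemo
public class DeoptDemo {

    static class Base {
        int value() { return 1; }            // non-final; the only implementation at first
    }

    static class Sub extends Base {
        @Override int value() { return 2; }
    }

    static int sum(Base b) {
        // While Base is the only loaded implementation, the server compiler
        // can devirtualize and inline Base.value() here.
        return b.value() + 1;
    }

    public static void main(String[] args) throws Exception {
        Base base = new Base();
        int sink = 0;

        // Warm up so sum() gets compiled with the optimistic inlining decision.
        for (int i = 0; i < 200000; i++) {
            sink += sum(base);
        }

        // Load the subclass only now (reflectively, so it isn't loaded during
        // warm-up). Loading a class that overrides value() breaks the inlined
        // code: the old nmethod is "made not entrant", and once no stack frame
        // still points into it, it can be "made zombie" and swept away.
        Base sub = (Base) Class.forName("DeoptDemo$Sub").newInstance();
        for (int i = 0; i < 200000; i++) {
            sink += sum(sub);
        }

        System.out.println(sink);
    }
}
Loading Sub reflectively keeps it out of the picture until after the warm-up loop, which is what makes the class-loading case reproducible at all.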
This is absolutely not an area of expertise for me, but I was interested and so did a bit of digging.
A couple of links you might find interesting: OpenJDK:nmethod.cpp, OpenJDK:nmethod.hpp.
An excerpt of nmethod.hpp:
// Make the nmethod non entrant. The nmethod will continue to be
// alive. It is used when an uncommon trap happens. Returns true
// if this thread changed the state of the nmethod or false if
// another thread performed the transition.
bool make_not_entrant() { return make_not_entrant_or_zombie(not_entrant); }
//...
Just as a starting place.
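The comment in that excerpt also mentions uncommon traps: a method can be made not entrant without any class loading at all, for example when a branch the compiler chose to prune is suddenly taken. Another invented sketch, with the same caveat that the compiler is free to do something entirely different:
// TrapDemo.java -- run with: java -server -XX:+PrintCompilation TrapDemo
public class TrapDemo {

    static int compute(int x, boolean rare) {
        if (rare) {
            // Never taken while compute() is profiled and compiled, so the
            // server compiler may prune this branch and plant an uncommon
            // trap in its place.
            return x * 31 + 7;
        }
        return x + 1;
    }

    public static void main(String[] args) {
        int sink = 0;

        // Warm up with rare == false so compute() gets compiled.
        for (int i = 0; i < 200000; i++) {
            sink += compute(i, false);
        }

        // Now take the "impossible" branch. If the compiled code contains an
        // uncommon trap there, the method is deoptimized and the old code is
        // marked "made not entrant" in the -XX:+PrintCompilation output.
        for (int i = 0; i < 200000; i++) {
            sink += compute(i, true);
        }

        System.out.println(sink);
    }
}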