The Java Virtual Machine Specification states that 8-byte constants (long and double) take up two entries in the constant_pool table, unlike other constants, which take up only one entry each. The specification also mentions that this was a poor choice but doesn't explain why.
What was the original reason behind this design decision and what were the benefits at the time?
A definitive answer would require talking to someone involved in the early development of Java. However, I think it is pretty clear that the bytecode format was originally designed with the performance of a naive interpreter in mind.
Consider how you would write a very simple Java bytecode interpreter. There's no JIT, no optimization, etc. You just execute each instruction as you get to it. Assuming the constant pool has been decoded into a table of 32-bit values at load time, an instruction like ldc2_w x, referencing the constant pool, would execute C code along the lines of (treating stack_ptr and constant_pool_ptr as byte pointers)
    *(int64_t *)(stack_ptr += 8) = *(int64_t *)(constant_pool_ptr + x * 4);
Basically, if you are on a 32-bit machine and are translating everything into raw pointer accesses with no optimization, then using two slots for 64-bit values is simply the logical way to implement things: the constant pool index x doubles as a raw byte offset (x * 4) into the decoded table, with no per-type bookkeeping. The sketch below makes this concrete.
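To illustrate, here is a toy version of such an interpreter in C. This is a minimal sketch, not how any real JVM is written: the Slot type, the run function, and the OP_HALT opcode are invented for the example, and only the opcode numbers for ldc (0x12) and ldc2_w (0x14) match the real instruction set. The point is that because the pool is decoded into uniform 32-bit slots, ldc2_w needs no special case: a long constant at index x is simply slots x and x+1.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    typedef uint32_t Slot;                /* one 32-bit pool/stack slot  */

    enum {
        OP_LDC    = 0x12,                 /* push a 32-bit constant      */
        OP_LDC2_W = 0x14,                 /* push a 64-bit constant      */
        OP_HALT   = 0xFF                  /* made-up stop opcode         */
    };

    static void run(const uint8_t *pc, const Slot *pool, Slot *sp)
    {
        for (;;) {
            switch (*pc++) {
            case OP_LDC: {
                uint8_t x = *pc++;        /* one-byte pool index         */
                *sp++ = pool[x];          /* index == slot, no scaling   */
                break;
            }
            case OP_LDC2_W: {
                uint16_t x = (uint16_t)((pc[0] << 8) | pc[1]);
                pc += 2;                  /* two-byte pool index         */
                /* The long lives in slots x and x+1, so loading it is
                 * the same slot-copy as ldc, just done twice.           */
                sp[0] = pool[x];
                sp[1] = pool[x + 1];
                sp += 2;
                break;
            }
            case OP_HALT:
                return;
            }
        }
    }

    int main(void)
    {
        /* Slot 0: an int. Slots 1-2: the two halves of a 64-bit long
         * (stored low half first, assuming a little-endian host).      */
        Slot pool[3]   = { 42, 0x55667788u, 0x11223344u };
        uint8_t code[] = { OP_LDC, 0, OP_LDC2_W, 0, 1, OP_HALT };
        Slot stack[8]  = { 0 };

        run(code, pool, stack);

        int64_t v;
        memcpy(&v, &stack[1], sizeof v);  /* reassemble the two slots   */
        printf("int = %u, long = 0x%llx\n",
               (unsigned)stack[0], (unsigned long long)v);
        return 0;
    }

On a little-endian machine this should print int = 42, long = 0x1122334455667788. Note how the dispatch loop never needs to know a constant's type to find it; that is exactly the property the two-slot encoding buys a naive interpreter.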
The reason it is a poor choice today is that interpreters are no longer completely unoptimized like this; in fact, code is usually JIT-compiled. Furthermore, 64-bit platforms are now the norm, which means that reference types take up 64 bits anyway*, even though the specification treats them like 32-bit values. Therefore, there is no longer any benefit to this hack, but we still pay the cost in specification and implementation complexity.
* Theoretically, at least. HotSpot uses 32-bit compressed pointers (compressed oops) by default, even on 64-bit platforms, to reduce memory usage; this works as long as the heap is small enough (roughly 32 GB) to be addressed through scaled 32-bit offsets.
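If you want to check what your own JVM is doing, HotSpot exposes this as the UseCompressedOops flag, which you can inspect like so (the exact output format varies between versions):

    java -XX:+PrintFlagsFinal -version | grep UseCompressedOops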