In the famous Java Concurrency in Practice, section 2.4, it says that the intrinsic locking approach, as opposed to explicit locks, was a bad design decision because it is confusing and also because "...it forces JVM implementors to make tradeoffs between object size and locking performance." Can someone please explain how object size affects locking performance?
Well, since every object can be locked, every object has to have enough space to store all the information we need when locking it.
That's rather unappealing, because the vast, vast majority of objects will never be locked, so we'd be wasting lots of space. In practice HotSpot solves this by using two bits in the object header (the mark word) to record the object's state and reusing the rest of the header depending on those two bits.
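If you want to see this for yourself, here is a minimal sketch (my addition, not part of the original answer) that uses the OpenJDK JOL tool (org.openjdk.jol:jol-core) to print an object's header before and while it is locked; the class name is just for illustration:

```java
import org.openjdk.jol.info.ClassLayout;

public class HeaderDemo {
    public static void main(String[] args) {
        Object o = new Object();

        // Freshly allocated: the mark word is in its neutral, unlocked state.
        System.out.println(ClassLayout.parseInstance(o).toPrintable());

        synchronized (o) {
            // While the intrinsic lock is held, the same header bits are reused
            // to describe the lock state instead.
            System.out.println(ClassLayout.parseInstance(o).toPrintable());
        }
    }
}
```

Run it with jol-core on the classpath and compare the two header dumps.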
Then there's the whole biased/non-biased locking story, which you can start reading about in the HotSpot documentation. It's not what I'd call extensive, but locking and object headers are covered better than most of the rest. When in doubt: read the source code.
PS: We have a similar problem with the native hashCode of every object. "Just use the memory address" isn't much good if your GC moves objects around. (But unlike locking, there's no real alternative if we want this functionality.)
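A small sketch (my addition, class name hypothetical) of why the address alone can't serve as the identity hash: the value has to stay stable even if the GC relocates the object, which is why HotSpot caches it in the header after the first call:

```java
public class IdentityHashDemo {
    public static void main(String[] args) {
        Object o = new Object();
        int before = System.identityHashCode(o);

        // Allocate garbage and request a GC so the object is likely to be moved.
        for (int i = 0; i < 1_000_000; i++) {
            byte[] junk = new byte[64];
        }
        System.gc();

        int after = System.identityHashCode(o);
        // Always true: the identity hash is fixed for the object's lifetime,
        // regardless of where the GC puts it.
        System.out.println(before == after);
    }
}
```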
The most efficient locks use a native word size, e.g. a 32-bit field. However, you don't want to add 4 bytes to every object, so instead (AFAIK) a single bit is used; setting that bit is more expensive than setting a word-sized field.
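To make the trade-off concrete, here is a minimal sketch (my addition, names hypothetical) contrasting an explicit word-sized spinlock built on AtomicInteger, which costs an extra field and object per instance, with the intrinsic lock that reuses header bits:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Explicit word-sized lock: every instance pays for a reference plus a
// whole AtomicInteger object, but acquire/release is a plain CAS on a word.
class SpinLockedCounter {
    private final AtomicInteger lock = new AtomicInteger(0); // 0 = free, 1 = held
    private int count;

    void increment() {
        while (!lock.compareAndSet(0, 1)) {
            Thread.onSpinWait(); // spin until the word-sized flag is free
        }
        try {
            count++;
        } finally {
            lock.set(0); // release by writing the full word
        }
    }
}

// Intrinsic lock: no extra field, the lock state lives in the object header,
// at the cost of more involved bit manipulation inside the JVM.
class SyncCounter {
    private int count;
    synchronized void increment() { count++; }
}
```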