I had a small dispute over the performance of the synchronized block in Java. This is a theoretical question, which does not affect a real-life application.
Consider a single-threaded application that uses locks and synchronized sections. Does this code run slower than the same code without synchronized sections? If so, why? We are not discussing concurrency, since it is only a single-threaded application.
Update
Found an interesting benchmark testing it, but it's from 2001. Things could have changed dramatically in the latest versions of the JDK.
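For a rough modern comparison, a naive single-thread timing loop like the sketch below (class name and iteration count are my own, not taken from the 2001 benchmark) can give a feel for the uncontended cost; a proper harness such as JMH would give far more trustworthy numbers.

// Naive single-thread comparison of a plain vs. a synchronized increment.
// JIT warm-up and dead-code elimination can distort numbers from a loop
// like this, so treat it as a sketch, not a real benchmark.
public class SyncOverheadSketch {
    private static final Object LOCK = new Object();
    private static long counter = 0;

    public static void main(String[] args) {
        final int iterations = 50_000_000;

        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            counter++;                      // plain increment
        }
        long plain = System.nanoTime() - start;

        start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            synchronized (LOCK) {           // uncontended lock acquire/release
                counter++;
            }
        }
        long locked = System.nanoTime() - start;

        System.out.printf("plain: %d ms, synchronized: %d ms (counter=%d)%n",
                plain / 1_000_000, locked / 1_000_000, counter);
    }
}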
synchronized (lock) will take a roughly fixed amount of time per call to check that the lock is available and then acquire it, even if there are no other threads running. The amount of time will depend on your installation and hardware.
If you synchronize a code block within a method, rather than synchronizing the whole method, then more than one thread can execute the method simultaneously, but only one thread can enter the synchronized block at a time. From this we can conclude that synchronizing only the smallest code block required is the most efficient way to do it.
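As an illustration, consider a hypothetical class (the names below are invented) where only the shared list needs guarding, so synchronizing just that block keeps the lock held for the shortest possible time:

import java.util.ArrayList;
import java.util.List;

public class Recorder {
    private final List<Integer> results = new ArrayList<>();

    // Whole method is synchronized: only one thread at a time can even compute.
    public synchronized void recordAll(int input) {
        int value = expensiveComputation(input);
        results.add(value);
    }

    // Only the shared state is synchronized: threads compute concurrently
    // and serialize only while touching the list.
    public void recordCriticalOnly(int input) {
        int value = expensiveComputation(input);   // runs outside the lock
        synchronized (this) {
            results.add(value);                    // runs inside the lock
        }
    }

    private int expensiveComputation(int input) {
        return input * input;   // placeholder for real work
    }
}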
In short, the disadvantages of synchronized methods in Java are that they increase threads' waiting time and can create performance problems. Furthermore, because only one thread at a time can enter a synchronized block or access a synchronized object, unnecessary use of synchronization can make the application slow. The larger the number of Java threads contending to execute a synchronized block or method, the worse the problem gets.
Single-threaded code will still run slower when using synchronized blocks. Obviously you will not have other threads stalled while waiting for other threads to finish; however, you will have to deal with the other effects of synchronization, namely cache coherency.
Synchronized blocks are not only used for concurrency, but also visibility. Every synchronized block is a memory barrier: the JVM is free to work on variables in registers, instead of main memory, on the assumption that multiple threads will not access that variable. Without synchronization blocks, this data could be stored in a CPU's cache and different threads on different CPUs would not see the same data. By using a synchronization block, you force the JVM to write this data to main memory for visibility to other threads.
So even though you're free from lock contention, the JVM will still have to do housekeeping in flushing data to main memory.
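As a sketch of that visibility guarantee (the class below is invented for illustration), both the writer and the reader synchronize on the same lock, and it is the release and acquire of that lock that makes the write visible to the reading thread:

// Illustration of visibility, not of mutual exclusion: the reader is only
// guaranteed to see the writer's update because both sides synchronize on
// the same lock, which acts as the memory barrier described above.
public class VisibilitySketch {
    private final Object lock = new Object();
    private boolean done = false;   // not volatile; guarded by 'lock'

    public void finish() {
        synchronized (lock) {
            done = true;            // write becomes visible when the lock is released
        }
    }

    public void awaitDone() throws InterruptedException {
        while (true) {
            synchronized (lock) {   // acquiring the lock re-reads shared state
                if (done) {
                    return;
                }
            }
            Thread.sleep(1);
        }
    }
}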
In addition, synchronization imposes optimization constraints. The JVM is free to reorder instructions in order to optimize; consider a simple example:
foo++; bar++;
versus:
foo++; synchronized(obj) { bar++; }
In the first example, the compiler is free to load foo and bar at the same time, then increment them both, then save them both. In the second example, the compiler must perform the load/add/save on foo, then perform the load/add/save on bar. Thus, synchronization may impact the ability of the JRE to optimize instructions.
(An excellent book on the Java Memory Model is Brian Goetz's Java Concurrency In Practice.)
There are three types of locking in HotSpot: fat (heavyweight) locking, thin (lightweight) locking, and biased locking.
By default, the JVM uses thin locking. Later, if the JVM determines that there is no contention, thin locking is converted to biased locking. The operation that changes the type of the lock is rather expensive, so the JVM does not apply this optimization immediately. There is a special JVM option, -XX:BiasedLockingStartupDelay=delay, which tells the JVM when this kind of optimization should be applied.
Once an object's lock is biased toward a thread, that thread can subsequently lock and unlock the object without resorting to expensive atomic instructions.
The answer to the question: it depends. But if the lock is biased, single-threaded code with locking and without locking has, on average, the same performance.
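If you want to experiment with this yourself, a single-threaded lock/unlock loop like the hypothetical one below can be run with biased locking forced on or off (on JDKs that still support it; biased locking was disabled by default and deprecated in JDK 15):

// Hypothetical micro-test: repeatedly lock and unlock the same object from a
// single thread. Compare runs such as:
//   java -XX:+UseBiasedLocking -XX:BiasedLockingStartupDelay=0 BiasedLockSketch
//   java -XX:-UseBiasedLocking BiasedLockSketch
public class BiasedLockSketch {
    public static void main(String[] args) {
        Object lock = new Object();
        long sum = 0;

        long start = System.nanoTime();
        for (int i = 0; i < 100_000_000; i++) {
            synchronized (lock) {   // uncontended, single-thread lock/unlock
                sum += i;
            }
        }
        long elapsed = System.nanoTime() - start;

        System.out.println("elapsed ms: " + elapsed / 1_000_000 + " (sum=" + sum + ")");
    }
}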