What is meant by "fast-path" uncontended synchronization?

From the Performance and Scalability chapter of the JCIP book:

The synchronized mechanism is optimized for the uncontended case (volatile is always uncontended), and at this writing, the performance cost of a "fast-path" uncontended synchronization ranges from 20 to 250 clock cycles for most systems.

What does the author mean by fast-path uncontended synchronization here?

asked Sep 01 '13 14:09 by Geek

3 Answers

There are two distinct concepts here.

  1. Fast-path and Slow-path code
  2. Uncontended and Contended synchronization

Slow-path vs Fast-path code

This distinction identifies which part of the JVM produced the machine-specific binary code.

In the HotSpot VM, slow-path code is executed by the C++ runtime (the interpreter and VM runtime calls), whereas fast-path code is native code produced by the JIT compiler.

In general, fast-path code is far more optimised. To understand JIT compilers in depth, Wikipedia is a good place to start.

Uncontended and Contended synchronization

Java's synchronization constructs (monitors) have the concept of ownership. When a thread tries to lock (gain ownership of) a monitor, the monitor can either be locked (owned by another thread) or unlocked.

Uncontended synchronization happens in two different scenarios:

  1. Unlocked monitor (ownership gained straight away)
  2. Monitor already owned by the same thread (reentrant acquisition)

Contended synchronization, on the other hand, means the thread will be blocked until the owner thread releases the monitor lock.
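The two uncontended scenarios can be made concrete with `Thread.holdsLock` (a small demo of my own, not from the book):

```java
public class UncontendedDemo {
    public static void main(String[] args) {
        final Object monitor = new Object();

        // Scenario 1: the monitor is unlocked, so this thread
        // gains ownership straight away (uncontended).
        synchronized (monitor) {
            assert Thread.holdsLock(monitor);

            // Scenario 2: the owning thread re-enters the same
            // monitor; also uncontended, because Java monitors
            // are reentrant.
            synchronized (monitor) {
                assert Thread.holdsLock(monitor);
            }
        }
        assert !Thread.holdsLock(monitor); // ownership released
        System.out.println("both acquisitions were uncontended");
    }
}
```

Neither acquisition ever blocks, since no other thread competes for the monitor.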

Answering the question

By fast-path uncontended synchronization, the author means the most optimised generated code (fast-path) executing in the cheapest scenario (uncontended synchronization).

answered Nov 16 '22 12:11 by João Melo


I'm not familiar with the topic of the book, but in general a “fast path” is a specific possible control flow branch which is significantly more efficient than the others and therefore preferred, but cannot handle complex cases.

I assume that the book is talking about Java's synchronized block/qualifier. In this case, the fast path is most likely one where it is easy to detect that there are no other threads accessing the same data. What the book is saying, then, is that the implementation of synchronized has been optimized to have the best performance in the case where only one thread is actually using the object, as opposed to the case where multiple threads are and the synchronization must actually mediate among them.
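To make the contrast concrete, here is a hypothetical demo (my own, not from the book) in which a second thread is forced onto the slow path because another thread already owns the monitor:

```java
import java.util.concurrent.CountDownLatch;

public class ContendedDemo {
    // Returns roughly how long (ms) the caller was blocked acquiring
    // `monitor` while another thread held it for `holdMillis`.
    static long blockedMillis(long holdMillis) throws InterruptedException {
        final Object monitor = new Object();
        CountDownLatch ownerHoldsLock = new CountDownLatch(1);

        Thread owner = new Thread(() -> {
            synchronized (monitor) {          // owner takes the lock first
                ownerHoldsLock.countDown();
                try { Thread.sleep(holdMillis); } catch (InterruptedException ignored) {}
            }
        });
        owner.start();
        ownerHoldsLock.await();               // wait until owner holds the lock

        long start = System.nanoTime();
        synchronized (monitor) {              // contended: this thread must block
            // reaching this point means `owner` has released the monitor
        }
        owner.join();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("blocked ~" + blockedMillis(100) + " ms");
    }
}
```

With only one thread the synchronized block would take nanoseconds; under contention the second thread's acquisition cost is dominated by waiting for the owner to release the monitor.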

answered Nov 16 '22 13:11 by Kevin Reid


The first step of acquiring a synchronized lock is a single atomic compare-and-swap on the monitor's owner field. If the lock is uncontended, that is all that happens.

If the lock is contended, there will be context switches and other mechanisms that add many more clock cycles.
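That idea can be sketched with a toy, non-reentrant spinlock (my own illustration, not HotSpot's actual implementation): the fast path is one successful compare-and-swap on an owner field, and everything after a failed CAS is the slow path.

```java
import java.util.concurrent.atomic.AtomicReference;

// Toy lock: the uncontended fast path is a single successful CAS.
// A real JVM would eventually park the losing thread via the OS
// (a context switch) instead of spinning forever as this sketch does.
public class ToyLock {
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread me = Thread.currentThread();
        // Fast path: one CAS, which succeeds when no thread owns the lock.
        while (!owner.compareAndSet(null, me)) {
            Thread.onSpinWait(); // slow path: retry until the owner releases
        }
    }

    public void unlock() {
        owner.set(null); // volatile write publishes the release
    }
}
```

Note this sketch is not reentrant: the same thread calling `lock()` twice would spin on itself, whereas a real Java monitor tracks a recursion count for its owner.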

answered Nov 16 '22 12:11 by John Vint