
Blocking Locks versus Non-Blocking Locks

I am thinking here: if you have two threads executing FAST operations that need to be synchronized, isn't a non-blocking approach faster/better than a blocking (context-switching) approach?

By non-blocking I mean something like:

while(true) { if (checkAndGetTheLock()) break; }

The only drawback I can think of is starvation (and burned CPU) if you have too many threads spinning on the lock.

How do I weigh one approach against the other?
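
For concreteness, the busy-wait loop above could be fleshed out roughly like this (a minimal sketch only; the AtomicBoolean flag and the SpinLock name are illustrative assumptions, not the asker's actual code):

import java.util.concurrent.atomic.AtomicBoolean;

// Minimal spin lock: a waiting thread stays on the CPU and keeps retrying
// instead of being suspended by the operating system.
class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    void lock() {
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // hint (Java 9+) that we are busy-waiting
        }
    }

    void unlock() {
        locked.set(false);
    }
}

This shines when the lock is only ever held for a handful of instructions; the question is what happens when it isn't.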

asked Feb 27 '12 by chrisapotek


People also ask

What is deadlock and how is it different from a standard blocking situation?

Like blocking, a deadlock involves two processes that each need resources to make progress. However, unlike blocking, the two processes are not simply waiting for the same resource. A deadlock occurs when Process 1 holds Resource A and waits for Resource B, while Process 2 holds Resource B and waits for Resource A, so neither can ever proceed.
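
As an illustration (a made-up minimal Java example, not taken from the original page): two threads acquire the same two locks in opposite order and each ends up waiting for the other forever.

// Classic lock-ordering deadlock: each thread holds one lock and waits for the other.
public class DeadlockDemo {
    private static final Object resourceA = new Object();
    private static final Object resourceB = new Object();

    public static void main(String[] args) {
        Thread t1 = new Thread(() -> {
            synchronized (resourceA) {
                sleep(100);                    // give t2 time to grab resourceB
                synchronized (resourceB) {     // waits forever: t2 holds resourceB
                    System.out.println("t1 acquired both");
                }
            }
        });
        Thread t2 = new Thread(() -> {
            synchronized (resourceB) {
                sleep(100);                    // give t1 time to grab resourceA
                synchronized (resourceA) {     // waits forever: t1 holds resourceA
                    System.out.println("t2 acquired both");
                }
            }
        });
        t1.start();
        t2.start();
    }

    private static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}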

What are blocking and non-blocking threads?

In a blocking thread model, when the program carries out a blocking action such as I/O, the underlying OS thread also blocks. In contrast, a non-blocking system does not tie up an OS thread when a task has to wait on a blocking operation (e.g. I/O); instead, it frees the OS thread to do other work.

What is the difference between asynchronous and non-blocking?

Non-blocking: refers to a call that does not block the execution of further operations. Non-blocking methods are typically executed asynchronously, which means the program does not necessarily execute line by line.


2 Answers

Here's what Java Concurrency in Practice says about the subject:

The JVM can implement blocking either via spin-waiting (repeatedly trying to acquire the lock until it succeeds) or by suspending the blocked thread through the operating system. Which is more efficient depends on the relationship between context switch overhead and the time until the lock becomes available; spin-waiting is preferable for short waits and suspension is preferable for long waits. Some JVMs choose between the two adaptively based on profiling data of past wait times, but most just suspend threads waiting for a lock.
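
To make that trade-off concrete, here is a minimal illustrative sketch of the middle ground (this is not how the JVM actually implements adaptive spinning; the class name and spin limit are arbitrary assumptions): spin briefly in case the lock frees up quickly, then let the OS deschedule the thread in short slices if the wait turns out to be long.

import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.LockSupport;

// Spin first (cheap if the lock is released within a few cycles), then back off
// to parking the thread (cheaper than burning CPU if the wait drags on).
class SpinThenParkLock {
    private static final int SPIN_LIMIT = 1_000; // arbitrary threshold, for illustration
    private final AtomicBoolean locked = new AtomicBoolean(false);

    void lock() {
        int spins = 0;
        while (!locked.compareAndSet(false, true)) {
            if (spins++ < SPIN_LIMIT) {
                Thread.onSpinWait();           // short wait: stay on the CPU
            } else {
                LockSupport.parkNanos(1_000L); // long wait: give up the CPU in ~1 µs slices
            }
        }
    }

    void unlock() {
        locked.set(false);
    }
}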

And also (which is, IMO, the most important point):

Don't worry excessively about the cost of uncontended synchronization. The basic mechanism is already quite fast, and JVMs can perform additional optimizations that further reduce or eliminate the cost. Instead, focus optimization efforts on areas where lock contention actually occurs.

answered Nov 08 '22 by JB Nizet


Well, the only way to be sure is to test it. When it comes to multithreading and performance, you simply can't assume.
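
As a crude starting point for such a test (a sketch only; serious measurements should use a harness like JMH, and the class name and iteration count here are made up), the following times two threads incrementing a shared counter under a blocking synchronized block versus a non-blocking CAS loop via AtomicLong:

import java.util.concurrent.atomic.AtomicLong;

// Crude comparison harness: two threads hammer a shared counter, once with a
// monitor lock (blocking) and once with AtomicLong (a CAS retry loop under the hood).
public class LockComparison {
    static final int ITERATIONS = 5_000_000;

    static long plainCounter = 0;
    static final Object monitor = new Object();
    static final AtomicLong casCounter = new AtomicLong();

    static void run(Runnable task, String label) throws InterruptedException {
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        long start = System.nanoTime();
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.printf("%s: %d ms%n", label, (System.nanoTime() - start) / 1_000_000);
    }

    public static void main(String[] args) throws InterruptedException {
        run(() -> {
            for (int i = 0; i < ITERATIONS; i++) {
                synchronized (monitor) { plainCounter++; } // blocking: contended threads may be suspended
            }
        }, "synchronized");

        run(() -> {
            for (int i = 0; i < ITERATIONS; i++) {
                casCounter.incrementAndGet();              // non-blocking: CAS retry loop
            }
        }, "CAS (AtomicLong)");
    }
}

Which one wins will vary with hardware, JVM, thread count and what the critical section actually does, which is exactly why measuring beats assuming.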

answered Nov 09 '22 by M Platvoet