
Implementation of BlockingQueue: What are the differences between SynchronousQueue and LinkedBlockingQueue

I see these implementations of BlockingQueue and can't understand the differences between them. My conclusions so far:

  1. I won't ever need SynchronousQueue
  2. LinkedBlockingQueue is always FIFO; SynchronousQueue must be constructed with the fairness parameter set to true to grant waiting threads FIFO ordering
  3. SynchronousQueue breaks most Collection methods (contains, size, etc.)

So when do I ever need SynchronousQueue? Is the performance of this implementation better than LinkedBlockingQueue?

To make it more complicated... why does Executors.newCachedThreadPool use SynchronousQueue when the others (Executors.newSingleThreadExecutor and Executors.newFixedThreadPool) use LinkedBlockingQueue?

EDIT

The first question is solved. But I still don't understand why Executors.newCachedThreadPool uses SynchronousQueue when the others (Executors.newSingleThreadExecutor and Executors.newFixedThreadPool) use LinkedBlockingQueue.

What I get is that with SynchronousQueue, the producer will be blocked if there is no free thread. But since the number of threads is practically unlimited (new threads will be created if needed), this will never happen. So why should it use SynchronousQueue?

asked Feb 24 '11 by nanda


People also ask

What is LinkedBlockingQueue?

The LinkedBlockingQueue is an optionally-bounded blocking queue based on linked nodes. It means that the LinkedBlockingQueue can be bounded, if its capacity is given, else the LinkedBlockingQueue will be unbounded. The capacity can be given as a parameter to the constructor of LinkedBlockingQueue.
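A minimal sketch of the two constructor forms (the class name `CapacityDemo` is my own, not from the original):

```java
import java.util.concurrent.LinkedBlockingQueue;

public class CapacityDemo {
    public static void main(String[] args) {
        // Bounded: capacity is fixed at construction time
        LinkedBlockingQueue<String> bounded = new LinkedBlockingQueue<>(100);
        // Unbounded: capacity defaults to Integer.MAX_VALUE
        LinkedBlockingQueue<String> unbounded = new LinkedBlockingQueue<>();

        System.out.println(bounded.remainingCapacity());   // 100
        System.out.println(unbounded.remainingCapacity()); // 2147483647
    }
}
```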

Can you give example implementations of the BlockingQueue interface?

Java provides several BlockingQueue implementations such as LinkedBlockingQueue, ArrayBlockingQueue, PriorityBlockingQueue, SynchronousQueue, etc. Java BlockingQueue interface implementations are thread-safe. All methods of BlockingQueue are atomic in nature and use internal locks or other forms of concurrency control.

What is blocking and non-blocking queue in Java?

Blocking vs Non-Blocking Queue: in a blocking queue, producers will wait for available capacity before adding elements, while consumers will wait until the queue is non-empty. In those same cases, a non-blocking queue will instead either throw an exception or return a special value, like null or false.
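The contrast can be sketched with the blocking and non-blocking variants side by side (a hypothetical demo; the class name is my own):

```java
import java.util.concurrent.ArrayBlockingQueue;

public class BlockingVsNonBlocking {
    public static void main(String[] args) throws InterruptedException {
        ArrayBlockingQueue<Integer> q = new ArrayBlockingQueue<>(1);
        q.put(1);                        // fits; would block if the queue were full
        System.out.println(q.offer(2));  // false - non-blocking insert on a full queue
        try {
            q.add(2);                    // throwing variant of insert
        } catch (IllegalStateException e) {
            System.out.println("add threw: " + e.getMessage());
        }
        System.out.println(q.poll());    // 1 - non-blocking removal
        System.out.println(q.poll());    // null - queue empty, no blocking
    }
}
```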


2 Answers

SynchronousQueue is a very special kind of queue - it implements a rendezvous approach (producer waits until consumer is ready, consumer waits until producer is ready) behind the interface of Queue.

Therefore you may need it only in the special cases when you need that particular semantics, for example, handing a task directly off to a single running thread without queuing further requests.
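The rendezvous behavior can be seen directly: the queue holds nothing, so a non-blocking offer fails unless a consumer is already waiting, while put blocks until one arrives (a small demo of my own, not from the answer):

```java
import java.util.concurrent.SynchronousQueue;

public class HandoffDemo {
    public static void main(String[] args) throws InterruptedException {
        SynchronousQueue<String> q = new SynchronousQueue<>();
        // offer() returns false immediately when no consumer is waiting
        System.out.println(q.offer("x")); // false
        System.out.println(q.size());     // always 0 - the queue stores no elements

        // put() blocks until a consumer arrives, so start one first
        Thread consumer = new Thread(() -> {
            try {
                System.out.println("took: " + q.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();
        q.put("hello"); // returns only once the consumer has taken the element
        consumer.join();
    }
}
```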

Another reason for using SynchronousQueue is performance. Implementation of SynchronousQueue seems to be heavily optimized, so if you don't need anything more than a rendezvous point (as in the case of Executors.newCachedThreadPool(), where consumers are created "on-demand", so that queue items don't accumulate), you can get a performance gain by using SynchronousQueue.

A simple synthetic test shows that in a single-producer, single-consumer scenario on a dual-core machine, the throughput of SynchronousQueue is ~20 times higher than the throughput of LinkedBlockingQueue and ArrayBlockingQueue with queue length = 1. As queue length increases, their throughput rises and almost reaches that of SynchronousQueue. It means that SynchronousQueue has low synchronization overhead on multi-core machines compared to other queues. But again, it matters only in the specific circumstances when you need a rendezvous point disguised as a Queue.

EDIT:

Here is a test:

import java.util.concurrent.*;

public class Test {
    static ExecutorService e = Executors.newFixedThreadPool(2);
    static int N = 1000000;

    public static void main(String[] args) throws Exception {
        for (int i = 0; i < 10; i++) {
            int length = (i == 0) ? 1 : i * 5;
            System.out.print(length + "\t");
            System.out.print(doTest(new LinkedBlockingQueue<Integer>(length), N) + "\t");
            System.out.print(doTest(new ArrayBlockingQueue<Integer>(length), N) + "\t");
            System.out.print(doTest(new SynchronousQueue<Integer>(), N));
            System.out.println();
        }
        e.shutdown();
    }

    private static long doTest(final BlockingQueue<Integer> q, final int n) throws Exception {
        long t = System.nanoTime();

        // Producer: puts n items into the queue
        e.submit(new Runnable() {
            public void run() {
                for (int i = 0; i < n; i++)
                    try { q.put(i); } catch (InterruptedException ex) {}
            }
        });

        // Consumer: takes n items and sums them; get() waits for completion
        Long r = e.submit(new Callable<Long>() {
            public Long call() {
                long sum = 0;
                for (int i = 0; i < n; i++)
                    try { sum += q.take(); } catch (InterruptedException ex) {}
                return sum;
            }
        }).get();
        t = System.nanoTime() - t;

        return (long) (1000000000.0 * n / t); // Throughput, items/sec
    }
}

And here is a result on my machine:

(benchmark results were posted as an image)

answered Oct 21 '22 by axtavt


Currently the default Executors (ThreadPoolExecutor based) can either use a fixed-size set of pre-created threads with a BlockingQueue of some size for any overflow, or create threads up to a maximum size if (and only if) that queue is full.

This leads to some surprising properties. For instance, as additional threads are only created once the queue's capacity is reached, using a LinkedBlockingQueue (which is unbounded) means that new threads will never get created, even if the current pool size is zero. If you use an ArrayBlockingQueue then the new threads are created only if it is full, and there is a reasonable likelihood that subsequent jobs will be rejected if the pool hasn't cleared space by then.
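This "unbounded queue means no growth" behavior can be demonstrated (a sketch of my own; the pool sizes are arbitrary choices for illustration):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolGrowthDemo {
    public static void main(String[] args) throws Exception {
        // core=1, max=4, but with an unbounded LinkedBlockingQueue the queue
        // is never "full", so the pool never grows beyond its core size.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 4, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());

        for (int i = 0; i < 20; i++) {
            pool.submit(() -> {
                try { Thread.sleep(100); } catch (InterruptedException e) {}
            });
        }
        // All 20 tasks were queued; only the single core thread exists
        System.out.println("pool size: " + pool.getPoolSize()); // 1
        pool.shutdown();
    }
}
```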

A SynchronousQueue has zero capacity so a producer blocks until a consumer is available, or a thread is created. This means that despite the impressive looking figures produced by @axtavt a cached thread pool generally has the worst performance from the producer's point of view.
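The flip side for the cached pool: since SynchronousQueue's handoff fails whenever no idle thread is waiting, each submission of a long-running task spins up a new thread. A hypothetical demo (class name and task count are my own):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;

public class CachedPoolDemo {
    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor pool = (ThreadPoolExecutor) Executors.newCachedThreadPool();
        CountDownLatch done = new CountDownLatch(1);

        // Each task blocks, so no thread is ever idle to accept a handoff;
        // every submit therefore creates a fresh thread.
        for (int i = 0; i < 10; i++) {
            pool.submit(() -> {
                try { done.await(); } catch (InterruptedException e) {}
            });
        }
        System.out.println("threads: " + pool.getPoolSize()); // 10
        done.countDown();
        pool.shutdown();
    }
}
```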

Unfortunately there isn't currently a nice library version of a compromise implementation that will create threads during bursts of activity, from a low minimum up to some maximum. You either have a growable pool or a fixed one. We have one internally, but it isn't ready for public consumption yet.

answered Oct 21 '22 by Jed Wesley-Smith