I think I shall reframe my question from
Where should you use BlockingQueue implementations instead of simple Queue implementations?
to
What are the advantages/disadvantages of BlockingQueue over Queue implementations, taking into consideration aspects like speed, concurrency, or other properties that vary, e.g. the time to access the last element?
I have used both kinds of queues. I know that a BlockingQueue is normally used in concurrent applications. I was writing a simple ByteBuffer pool where I needed some placeholder for ByteBuffer objects, and I needed the fastest thread-safe queue implementation. There are also List implementations like ArrayList, which has constant access time for its elements.
Can anyone discuss the pros and cons of BlockingQueue vs Queue vs List implementations?
Currently I am using an ArrayList to hold these ByteBuffer objects.
Which data structure should I use to hold them?
BlockingQueue is a Java Queue that supports operations that wait for the queue to become non-empty when retrieving and removing an element, and wait for space to become available in the queue when adding an element.
A blocking queue is a queue whose insert and remove operations block, or keep waiting, until they can be performed. Blocking queues are usually used in producer-consumer frameworks. The BlockingQueue interface extends Queue and has existed since Java 5. Null elements are not allowed.
Java provides several BlockingQueue implementations such as LinkedBlockingQueue, ArrayBlockingQueue, PriorityBlockingQueue, SynchronousQueue, etc. These implementations are thread-safe: their queuing methods achieve their effects atomically, using internal locks or other forms of concurrency control.
BlockingQueue is a queue that additionally supports operations that wait for the queue to become non-empty when we are trying to retrieve an element, and wait for space to become available when an element is to be inserted into the queue.
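As a minimal sketch of that behavior (the capacity of 2, the element type, and the loop counts are arbitrary choices for illustration), a bounded ArrayBlockingQueue blocks the producer on put() when it is full and the consumer on take() when it is empty:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumerDemo {
    public static void main(String[] args) throws InterruptedException {
        // Bounded queue: put() blocks when full, take() blocks when empty.
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    queue.put(i);             // blocks while the queue already holds 2 elements
                    System.out.println("produced " + i);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    int value = queue.take(); // blocks until an element is available
                    System.out.println("consumed " + value);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}
```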
A limited-capacity BlockingQueue is also helpful if you want to throttle some sort of request. With an unbounded queue, producers can get far ahead of the consumers. The tasks will eventually be performed (unless there are so many that they cause an OutOfMemoryError), but the producer may long since have given up, so the effort is wasted.

In situations like these, it may be better to signal a would-be producer that the queue is full, and to give up quickly with a failure. For example, the producer might be a web request with a user who doesn't want to wait too long; even though it won't consume many CPU cycles while waiting, it is using up limited resources like a socket and some memory. Giving up will give the tasks that are already queued a better chance to finish in a timely manner.
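One way this can look, as a sketch (the queue capacity and the rejection handling are assumptions, not anything from the question): the producer calls offer(), which returns false immediately instead of blocking when the bounded queue is full.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ThrottledSubmitter {
    // Bounded queue caps how far producers can get ahead of consumers.
    private final BlockingQueue<Runnable> pending = new ArrayBlockingQueue<>(100);

    /** Tries to enqueue a task; fails fast instead of making the caller wait. */
    public boolean submit(Runnable task) {
        boolean accepted = pending.offer(task); // returns false immediately if the queue is full
        if (!accepted) {
            // Signal the caller (e.g., a web request) that the system is overloaded.
            System.out.println("Queue full, rejecting task");
        }
        return accepted;
    }
}
```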
Regarding the amended question, which I'm interpreting as, "What is a good collection for holding objects in a pool?"
An unbounded LinkedBlockingQueue is a good choice for many pools. However, depending on your pool management strategy, a ConcurrentLinkedQueue may work too.
In a pooling application, a blocking "put" is not appropriate. Controlling the maximum size of the queue is the job of the pool manager: it decides when to create or destroy resources for the pool. Clients of the pool borrow and return resources from it. Adding a new object, or returning a previously borrowed object to the pool, should be a fast, non-blocking operation. So a bounded-capacity queue is not a good choice for pools.
On the other hand, when retrieving an object from the pool, most applications want to wait until a resource is available. A "take" operation that blocks, at least temporarily, is much more efficient than a "busy wait" of repeatedly polling until a resource is available. The LinkedBlockingQueue is a good choice in this case. A borrower can block indefinitely with take, or limit the time it is willing to block with poll.
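A rough sketch of such a pool built on an unbounded LinkedBlockingQueue (the buffer count, buffer size, timeout overload, and method names here are illustrative assumptions, not a prescribed design): returning a buffer never blocks, while borrowing can block indefinitely or up to a deadline.

```java
import java.nio.ByteBuffer;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class ByteBufferPool {
    private final BlockingQueue<ByteBuffer> buffers = new LinkedBlockingQueue<>();

    public ByteBufferPool(int count, int bufferSize) {
        for (int i = 0; i < count; i++) {
            buffers.offer(ByteBuffer.allocate(bufferSize));
        }
    }

    /** Blocks indefinitely until a buffer is available. */
    public ByteBuffer borrow() throws InterruptedException {
        return buffers.take();
    }

    /** Waits at most the given time; returns null if no buffer became available. */
    public ByteBuffer borrow(long timeout, TimeUnit unit) throws InterruptedException {
        return buffers.poll(timeout, unit);
    }

    /** Returning a buffer never blocks on an unbounded queue. */
    public void giveBack(ByteBuffer buffer) {
        buffer.clear();
        buffers.offer(buffer);
    }
}
```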
A less common case is when a client is not willing to block at all, but has the ability to create a resource for itself if the pool is empty. In that case, a ConcurrentLinkedQueue is a good choice. This is sort of a gray area where it would be nice to share a resource (e.g., memory) as much as possible, but speed is even more important. In the worst case, this degenerates to every thread having its own instance of the resource; then it would have been more efficient not to bother trying to share among threads.
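A sketch of that non-blocking variant using a ConcurrentLinkedQueue (the class and method names are hypothetical): poll() simply returns null when the pool is empty, and the caller allocates a fresh resource for itself instead of waiting.

```java
import java.nio.ByteBuffer;
import java.util.concurrent.ConcurrentLinkedQueue;

public class NonBlockingBufferPool {
    private final ConcurrentLinkedQueue<ByteBuffer> buffers = new ConcurrentLinkedQueue<>();
    private final int bufferSize;

    public NonBlockingBufferPool(int bufferSize) {
        this.bufferSize = bufferSize;
    }

    /** Never blocks: reuses a pooled buffer if one exists, otherwise allocates a fresh one. */
    public ByteBuffer acquire() {
        ByteBuffer buffer = buffers.poll(); // returns null immediately when the pool is empty
        return (buffer != null) ? buffer : ByteBuffer.allocate(bufferSize);
    }

    /** Returns a buffer so other threads can reuse it. */
    public void release(ByteBuffer buffer) {
        buffer.clear();
        buffers.offer(buffer);
    }
}
```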
Both of these collections give good performance and ease of use in a concurrent application. For non-concurrent applications, an ArrayList is hard to beat. Even for collections that grow dynamically, the per-element overhead of a LinkedList allows an ArrayList with some empty slots to stay competitive memory-wise.