I was reading about the new concurrent collection classes in .NET 4 on James Michael Hare's blog, and the page talking about ConcurrentQueue<T> says:
It’s still recommended, however, that for empty checks you call IsEmpty instead of comparing Count to zero.
I'm curious: if there is a reason to use IsEmpty instead of comparing Count to 0, why doesn't the class internally check IsEmpty and return 0 before doing any of the expensive counting work?
E.g.:

public int Count
{
    get
    {
        // Check IsEmpty so we can bail out quicker
        if (this.IsEmpty)
            return 0;

        // Rest of "expensive" counting code
    }
}
It seems strange to make this recommendation if Count could be "fixed" so easily, with no side effects.
ConcurrentQueue<T> is lock-free and uses spin waits to achieve high-performance concurrent access. The implementation simply requires more work to return an exact count than to check whether the queue has any items, which is why IsEmpty is recommended.
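For example, if all you need to know is whether the queue has any items, calling IsEmpty directly avoids that extra work. A minimal sketch (the int payload and the console messages are just placeholders):

using System;
using System.Collections.Concurrent;

class EmptyCheckExample
{
    static void Main()
    {
        var queue = new ConcurrentQueue<int>();
        queue.Enqueue(42);

        // Preferred: only has to detect whether at least one item exists.
        if (!queue.IsEmpty)
            Console.WriteLine("Queue has work to do.");

        // Works, but Count may have to snapshot the whole queue under
        // concurrent Enqueue/TryDequeue just to answer "is it zero?".
        if (queue.Count == 0)
            Console.WriteLine("Queue is empty.");
    }
}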
Intuitively, you can think of Count as having to wait for a timeslice in which no other clients are updating the queue, so that it can take a snapshot and count the items in that snapshot. IsEmpty only has to check whether at least one item exists. Concurrent Enqueue and TryDequeue operations change the count, so Count has to retry; unless the queue is transitioning between the empty and non-empty states, the return value of IsEmpty isn't affected by concurrent operations, so it doesn't have to wait.
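To make the difference concrete, here is a toy, lock-assisted sketch that mimics the behaviour described above; it is not the real ConcurrentQueue<T> implementation (which is segment-based and lock-free), but it shows why a full count has to retry under concurrent mutation while an empty check is a single read:

using System.Threading;

class ToyQueue<T>
{
    sealed class Node
    {
        public T Value;
        public Node Next;
    }

    readonly object _writeLock = new object();
    Node _head;      // null means the queue is empty
    Node _tail;
    int _version;    // bumped by every successful mutation

    public void Enqueue(T item)
    {
        lock (_writeLock)
        {
            var node = new Node { Value = item };
            if (_tail == null) _head = node;
            else _tail.Next = node;
            _tail = node;
            Interlocked.Increment(ref _version);
        }
    }

    public bool TryDequeue(out T item)
    {
        lock (_writeLock)
        {
            if (_head == null) { item = default(T); return false; }
            item = _head.Value;
            _head = _head.Next;
            if (_head == null) _tail = null;
            Interlocked.Increment(ref _version);
            return true;
        }
    }

    // Cheap: a single read answers "is there at least one item?".
    public bool IsEmpty => Volatile.Read(ref _head) == null;

    // Expensive: walk the whole chain, and start over whenever a concurrent
    // Enqueue/TryDequeue changed the queue while we were counting.
    public int Count
    {
        get
        {
            while (true)
            {
                int before = Volatile.Read(ref _version);
                int count = 0;
                for (var n = Volatile.Read(ref _head); n != null; n = n.Next)
                    count++;
                if (Volatile.Read(ref _version) == before)
                    return count;   // no mutation completed during the walk
            }
        }
    }
}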
I wrote a simple multi-threaded test app which showed that Count was ~20% slower (with both constant contention and no contention); however, both properties can be called millions of times per second, so any performance difference is likely to be completely negligible in practice.
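A rough sketch of the kind of harness that shows the gap (this is not the original test app; the iteration count and the single background writer are arbitrary choices):

using System;
using System.Collections.Concurrent;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;

class Benchmark
{
    const int Iterations = 10_000_000;

    static void Main()
    {
        var queue = new ConcurrentQueue<int>();
        using var cts = new CancellationTokenSource();

        // One background writer creates constant contention.
        var writer = Task.Run(() =>
        {
            while (!cts.Token.IsCancellationRequested)
            {
                queue.Enqueue(1);
                queue.TryDequeue(out _);
            }
        });

        long emptyHits = 0;   // keeps the property reads from being optimized away

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++)
            if (queue.Count == 0) emptyHits++;
        sw.Stop();
        Console.WriteLine($"Count == 0 : {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        for (int i = 0; i < Iterations; i++)
            if (queue.IsEmpty) emptyHits++;
        sw.Stop();
        Console.WriteLine($"IsEmpty    : {sw.ElapsedMilliseconds} ms");

        cts.Cancel();
        writer.Wait();
        Console.WriteLine($"(empty observed {emptyHits} times)");
    }
}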