Does using a lock have better performance than using a local (single application) semaphore? [closed]

I read this blog post from MSDN (Producer consumer solution on msdn), and I didn't like their solution because it always leaves 20 elements in the queue.

So instead, I thought about using a 'Semaphore' that will be available only in my app (I just won't name it in the constructor), but I don't know how it will affect the app's performance.
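
To illustrate what I mean by 'local': constructing System.Threading.Semaphore without a name keeps it private to this process, while a named one is visible system-wide (the counts and name below are just placeholders):

using System.Threading;

class LocalSemaphoreSketch
{
    static void Main()
    {
        // Process-local semaphore: no name is passed to the constructor,
        // so only this application can see it.
        var localSemaphore = new Semaphore(0, 100); // initialCount: 0, maximumCount: 100

        // For contrast, a named semaphore is shared across processes.
        var systemWide = new Semaphore(0, 100, "Global\\SomeSharedName");
    }
}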

Does anyone know whether it will affect performance? What other considerations are there for using a lock rather than a 'Semaphore'?

Asked Aug 15 '10 by Adibe7
4 Answers

lock(obj) is essentially Monitor.Enter(obj)/Monitor.Exit(obj) wrapped in a try/finally. A lock is basically a binary semaphore. If you have N instances of the same resource, you use a semaphore with an initialization value of N. A lock is mainly used to ensure that a code section is not executed by two threads at the same time.

So a lock can be implemented using a semaphore with an initialization value of 1. I would guess that Monitor.Enter is more performant here, but I have no hard data on that; a benchmark would settle it. Here is a SO thread that deals with the performance question.
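
To make the equivalence concrete, here is a small sketch of mine (not code from the linked thread) guarding the same critical section with lock and with a semaphore initialized to 1; SemaphoreSlim is used here, but an unnamed System.Threading.Semaphore(1, 1) behaves the same way for this purpose:

using System.Threading;

class LockVsBinarySemaphore
{
    private static readonly object _gate = new object();
    private static readonly SemaphoreSlim _binary = new SemaphoreSlim(1, 1); // initial count 1

    static void WithLock()
    {
        lock (_gate) // Monitor.Enter/Monitor.Exit in a try/finally
        {
            // critical section: at most one thread at a time
        }
    }

    static void WithSemaphore()
    {
        _binary.Wait(); // same mutual exclusion via a binary semaphore
        try
        {
            // critical section
        }
        finally
        {
            _binary.Release();
        }
    }
}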

For your problem, a blocking queue (producer/consumer) would be the solution. I suggest this very good SO thread.
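
If you are on .NET 4 or later, BlockingCollection<T> already gives you a blocking queue; the following is only a rough sketch of the producer/consumer shape (my code, not taken from those threads; Task.Run needs .NET 4.5+):

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class BlockingQueueSketch
{
    static void Main()
    {
        // Bounded capacity makes the producer block when the queue is full.
        var queue = new BlockingCollection<int>(boundedCapacity: 100);

        var producer = Task.Run(() =>
        {
            for (int i = 0; i < 1000; i++)
                queue.Add(i);          // blocks while the queue is full
            queue.CompleteAdding();    // tell consumers no more items are coming
        });

        var consumer = Task.Run(() =>
        {
            // Blocks while empty; completes once CompleteAdding has been
            // called and the queue has drained.
            foreach (var item in queue.GetConsumingEnumerable())
                Console.WriteLine(item);
        });

        Task.WaitAll(producer, consumer);
    }
}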

Here is another good source of information about Reusable Parallel Data Structures.

Answered by schoetbi

TL;DR: I just ran my own benchmark and, in my setup, lock runs almost twice as fast as SemaphoreSlim(1).

Specs:

  • .NET Core 2.1.5
  • Windows 10
  • 2 physical cores (4 logical) @2.5 GHz

The test:

I tried running 2, 4 and 6 Tasks in parallel, each performing 1M iterations of acquiring the lock, doing a trivial operation, and releasing it. The code looks as follows:

// SemaphoreSlim variant; the lock variant wraps the same body in lock(obj) { ... }
await semaphoreSlim1.WaitAsync();

if (1 + 1 == 2)   // trivial guarded operation
{
    count++;
}

semaphoreSlim1.Release();

Results: For each case, lock ran almost twice as fast as SemaphoreSlim(1) (e.g. 205 ms vs. 390 ms with 6 parallel tasks).

Please note that I do not claim it will be faster on every other setup.
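
For reference, a harness along the following lines could reproduce the comparison; the structure and names are mine, not the exact benchmark code used above:

using System;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;

class LockVsSemaphoreBenchmark
{
    private static readonly object _gate = new object();
    private static readonly SemaphoreSlim _semaphore = new SemaphoreSlim(1, 1);
    private static long _count;

    static void Main()
    {
        const int tasks = 6;
        const int iterations = 1000000;

        Console.WriteLine("lock:          " + Run(tasks, iterations, useLock: true) + " ms");
        Console.WriteLine("SemaphoreSlim: " + Run(tasks, iterations, useLock: false) + " ms");
    }

    static long Run(int tasks, int iterations, bool useLock)
    {
        _count = 0;
        var sw = Stopwatch.StartNew();
        var work = new Task[tasks];
        for (int t = 0; t < tasks; t++)
        {
            work[t] = Task.Run(async () =>
            {
                for (int i = 0; i < iterations; i++)
                {
                    if (useLock)
                    {
                        lock (_gate) { _count++; }   // lock variant
                    }
                    else
                    {
                        await _semaphore.WaitAsync(); // semaphore variant
                        _count++;
                        _semaphore.Release();
                    }
                }
            });
        }
        Task.WaitAll(work);
        sw.Stop();
        return sw.ElapsedMilliseconds;
    }
}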

Answered by eddyP23

In general: if your consumer thread processes each data item quickly enough, the kernel-mode transition will incur a (possibly significant) amount of overhead. In that case a user-mode wrapper that spins for a while before waiting on the semaphore will avoid some of that overhead.

A monitor (with mutual exclusion + condition variable) may or may not implement spinning. That MSDN article's implementation didn't, so in this case there's no real difference in performance. Anyway, you're still going to have to lock in order to dequeue items, unless you're using a lock-free queue.
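
To illustrate the spinning idea, here is a sketch of mine (not code from the article; SemaphoreSlim already does something similar internally) that spins in user mode for a bounded number of attempts before falling back to a blocking wait:

using System.Threading;

class SpinThenWaitLock
{
    private readonly SemaphoreSlim _semaphore = new SemaphoreSlim(1, 1);
    private const int SpinAttempts = 100; // arbitrary tuning knob for this sketch

    public void Enter()
    {
        var spinner = new SpinWait();
        for (int i = 0; i < SpinAttempts; i++)
        {
            if (_semaphore.Wait(0))   // non-blocking attempt while spinning
                return;
            spinner.SpinOnce();
        }
        _semaphore.Wait();            // fall back to a real (possibly blocking) wait
    }

    public void Exit()
    {
        _semaphore.Release();
    }
}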

Answered by wj32

The solution in the MSDN article has a bug where you'll miss an event if SetEvent is called twice by the producer in quick succession whilst the consumer is processing the last item it retrieves from the queue.

Have a look at this article for a different implementation using Monitor instead:

http://wekempf.spaces.live.com/blog/cns!D18C3EC06EA971CF!672.entry
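
Since that link may no longer resolve, here is a minimal sketch of mine of the Monitor-based (Wait/Pulse) approach it described, not the article's actual code:

using System.Collections.Generic;
using System.Threading;

class MonitorBlockingQueue<T>
{
    private readonly Queue<T> _queue = new Queue<T>();
    private readonly object _sync = new object();

    public void Enqueue(T item)
    {
        lock (_sync)
        {
            _queue.Enqueue(item);
            // Wake one waiting consumer; there is no event to "miss" because
            // the count is re-checked under the same lock.
            Monitor.Pulse(_sync);
        }
    }

    public T Dequeue()
    {
        lock (_sync)
        {
            // Loop (not 'if') so a spurious or stolen wake-up re-checks the state.
            while (_queue.Count == 0)
                Monitor.Wait(_sync);
            return _queue.Dequeue();
        }
    }
}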

Answered by theburningmonk