
Is there a way to wait on multiple semaphores?

I'm trying to write an application that can wait on multiple resource pools at once. Each resource pool is controlled by a Semaphore. Can I use WaitHandle.WaitAll() where I pass in the entire list of semaphores? Is there a potential deadlock issue with this implementation?

My current implementation:

namespace XXX
{
    using System.Collections.Generic;
    using System.Linq;
    using System.Threading;

    public class ResourcePoolManager
    {
        private readonly IDictionary<string, Semaphore> resourcePools = new Dictionary<string, Semaphore>();

        public void AddResourcePool(string resourceName, int maxConcurrentConsumers)
        {
            this.resourcePools.Add(resourceName, new Semaphore(maxConcurrentConsumers, maxConcurrentConsumers));
        }

        public void RequestResource(string resourceName)
        {
            this.resourcePools[resourceName].WaitOne();
        }

        // Blocks until a slot is available in every requested pool;
        // WaitHandle.WaitAll acquires all of the semaphores together.
        public void RequestMultipleResources(string[] resourceNames)
        {
            Semaphore[] resources = resourceNames.Select(s => this.resourcePools[s]).ToArray();

            WaitHandle.WaitAll(resources);
        }

        public void ReleaseResource(string resourceName)
        {
            this.resourcePools[resourceName].Release(1);
        }
    }
}
asked Feb 28 '23 by Orion Adrian

1 Answer

Sure, you can use it, but WaitAll will only return once all of the semaphores can be acquired at the same time. Depending on how the rest of your application is structured, there may indeed be starvation issues. For example, if you have two resources, A and B, and three threads that:

  1. Continually take resource A, work with it for one second, then release it and loop
  2. Continually take resource B, work with it for one second, then release it and loop
  3. Wait for both A and B to be available

Thread 3 can easily end up waiting forever for both A and B to be available simultaneously.
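To make that concrete, here is a minimal sketch of the scenario (the class name and thread bodies are hypothetical, not taken from your code):

using System;
using System.Threading;

class StarvationSketch
{
    // One slot each, standing in for resources A and B.
    static readonly Semaphore ResourceA = new Semaphore(1, 1);
    static readonly Semaphore ResourceB = new Semaphore(1, 1);

    static void LoopOn(Semaphore resource)
    {
        while (true)
        {
            resource.WaitOne();
            Thread.Sleep(1000); // "work with it for one second"
            resource.Release();
        }
    }

    static void Main()
    {
        new Thread(() => LoopOn(ResourceA)).Start(); // thread 1
        new Thread(() => LoopOn(ResourceB)).Start(); // thread 2

        // Thread 3: WaitAll only returns when A and B are both free at the
        // same instant, which may effectively never happen while the other
        // two threads keep cycling their semaphores.
        WaitHandle.WaitAll(new WaitHandle[] { ResourceA, ResourceB });
        Console.WriteLine("Acquired both A and B");
    }
}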

Depending on your application, it might be better to simply take each semaphore in turn; this avoids the starvation issue but introduces the traditional deadlocking issues. However, if you're sure that these locks will be available most of the time, it may be safe (though it might also be a ticking time bomb just waiting for your application to come under real load...).

Given your sample code, another option would be to create a global ordering over the semaphores - say, an ordering by name - and always make sure to acquire them in that order. If you do that, you can perform a multi-lock simply by locking each semaphore one-by-one in ascending order.
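Applied to your sample code, that could look roughly like the sketch below (it reuses the resourcePools dictionary from your class and assumes a using System; directive for StringComparer):

// Acquire the semaphores one-by-one in a single global (ordinal) order,
// instead of WaitHandle.WaitAll, so every caller locks in the same sequence.
public void RequestMultipleResources(string[] resourceNames)
{
    IEnumerable<Semaphore> ordered = resourceNames
        .OrderBy(name => name, StringComparer.Ordinal)
        .Select(name => this.resourcePools[name]);

    foreach (Semaphore resource in ordered)
    {
        resource.WaitOne();
    }
}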

In this case, the order of release doesn't strictly matter, but if you release out of order, you should release all locks 'after' the lock you just released before acquiring any more (this is a rule of thumb that should give you deadlock safety; it may be possible to relax it further with detailed analysis). The recommended approach is simply to release in reverse order of acquisition where possible, in which case you can parlay it into further acquisition at any point. For example:

  1. Acquire lock A
  2. Acquire lock B
  3. Acquire lock C
  4. Release lock C
  5. Acquire lock D
  6. Release B (now don't acquire anything until you release D!)
  7. Release D
  8. Acquire E
  9. Release E
  10. Release A

As long as everything follows these rules, deadlock should not be possible, as cycles of waiters cannot form.
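In your sample code, the matching release could be a hypothetical ReleaseMultipleResources method (not in your original class) that simply walks the same ordering backwards:

// Release in reverse acquisition order; with this discipline the thread is
// free to acquire further resources at any point before releasing.
public void ReleaseMultipleResources(string[] resourceNames)
{
    IEnumerable<Semaphore> reversed = resourceNames
        .OrderByDescending(name => name, StringComparer.Ordinal)
        .Select(name => this.resourcePools[name]);

    foreach (Semaphore resource in reversed)
    {
        resource.Release();
    }
}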

The downside of this approach is that it may delay other threads by holding a lock while waiting for another. This won't last forever; in the three-thread example above, we might see the following:

  1. At start, thread 2 holds B. Thread 1 holds A.
  2. Thread 3 blocks on A.
  3. (time passes)
  4. Thread 1 releases A.
  5. Thread 3 locks A, blocks on B.
  6. Thread 1 blocks on A.
  7. (time passes)
  8. Thread 2 releases B.
  9. Thread 3 locks B, does work, then unlocks.
  10. Thread 1 locks A, makes progress.

As you can see, there was some downtime in which Thread 1 was blocked on A even though there was no real work to be done. However, by doing this we have greatly improved the chances for Thread 3 to make progress at all.

Whether this is a good trade-off depends on your application - if you can definitively say that multiple threads never contend for the same lock, it might not even matter. But there's no one right way :)

answered Mar 07 '23 by bdonlan