I've just read a paper about the Leader/Follower pattern. If I understood correctly, I keep my workers in a queue, and the first worker takes an incoming request and detaches from the queue.
With a normal work queue (RabbitMQ and beanstalkd, for example) it's the other way round: I keep my jobs in a queue, and once a worker finishes processing, it just takes the first job from the queue.
Is there something I'm missing?
So, what are the advantages of using a Leader/Follower approach instead of a work queue? Or the other way round: in what situations is a work queue better suited?
Bye, Nico
Leader/Follower is about efficiently dealing with multiple workers. When you have no work (jobs), what are your worker threads doing? A common, simple approach is to have a single consumer thread dispatch jobs to workers, either by spawning a thread or using a thread pool. The pattern discussed provides an alternative that avoids having to synchronize between the dispatcher and the worker: the (leader) thread that gets the job executes the work task itself, and promotes a waiting worker to the leader position to keep the system responsive.
Be aware that this article is discussing lower-level mechanisms for waiting for work that do not (easily) support multiple threads waiting on the same "queue" of work. Higher-level constructs like message queues that do support multiple worker threads all performing a blocking read on the same source (AKA competing consumers) may not get the same benefit described. With a higher level of abstraction comes more programming ease, but typically at the cost of the kind of performance that can be gained from a more low-level approach.
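To make "competing consumers" concrete, here is a minimal Python sketch (names and numbers are mine, not from the article): several worker threads all perform a blocking read on the same queue, and the queue hands each job to exactly one of them.

```python
import queue
import threading

def competing_consumer(jobs: "queue.Queue", results: list, lock: threading.Lock):
    """Each worker blocks on the same queue; each job goes to exactly one worker."""
    while True:
        job = jobs.get()            # blocking read, shared by all workers
        if job is None:             # sentinel: shut this worker down
            return
        with lock:
            results.append(job * 2) # stand-in for real work

jobs = queue.Queue()
results = []
lock = threading.Lock()
workers = [threading.Thread(target=competing_consumer, args=(jobs, results, lock))
           for _ in range(3)]
for w in workers:
    w.start()
for n in range(5):
    jobs.put(n)
for _ in workers:
    jobs.put(None)                  # one sentinel per worker
for w in workers:
    w.join()
print(sorted(results))              # every job processed exactly once
```

Note that `queue.Queue` hides the synchronization the article is talking about: internally it takes a lock and signals a condition variable on every `put`/`get`, which is exactly the cost the lower-level Leader/Follower approach tries to avoid.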
EDIT1:
Here's a made-up sample (pseudocode only). Please note that I did not write the article or benchmark it so I cannot truly speak about the performance of one versus the other. But hopefully, this shows the difference in style.
// in QueueHandler processing loop
while (true)
{
    // blocking read; waits until a request arrives
    Request req = requestQueue.BlockingRead();

    // We have a unit of work now, but the QueueHandler should not process it
    // itself: if the work is long-running, no new requests could be handled.
    // So we spawn a thread / dispatch to a thread pool:
    ThreadPool.QueueWorkItem(req);
    // or: new Thread(DoWork, req).Start();

    // At this point we know the request will get picked up, in an unknown but
    // hopefully very short amount of time, by a waiting (sleeping/blocking) or
    // new thread, which gets passed the work. But doing so required thread
    // synchronization primitives that can cause all processors to flush their
    // caches and other expensive stuff.
} // now loop back up to read the next request
VS
// in Leader
while (true)
{
    // I'm the leader: blocking read until a request arrives
    Request req = queue.BlockingRead();

    // We have a unit of work and we are going to process it ourselves.
    // But first we promote a follower to be the new leader:
    Followers.PromoteOne();

    // work on the request in this thread!
    DoWorkOn(req);

    // now that I'm done, wait to become the leader again
    Followers.BlockingWaitToBeLeader();
}
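The Leader pseudocode above can be sketched as runnable Python. This is only an illustration under my own assumptions (class and method names are mine, and "promotion" is modeled simply as releasing a leadership lock, which the next waiting thread then acquires); it is not the implementation from the paper.

```python
import queue
import threading

class LeaderFollowerPool:
    """Minimal Leader/Follower sketch: exactly one thread (the leader) blocks
    on the request source; after taking a request it promotes a follower by
    releasing the leadership lock, then processes the request itself."""

    def __init__(self, requests: "queue.Queue", handler, n_threads: int = 3):
        self.requests = requests
        self.handler = handler
        self.leader_lock = threading.Lock()   # holder of this lock is the leader
        self.threads = [threading.Thread(target=self._run) for _ in range(n_threads)]

    def start(self):
        for t in self.threads:
            t.start()

    def join(self):
        for t in self.threads:
            t.join()

    def _run(self):
        while True:
            with self.leader_lock:            # BlockingWaitToBeLeader()
                req = self.requests.get()     # leader's blocking read
            # leaving the `with` block releases the lock: Followers.PromoteOne()
            if req is None:                   # sentinel: shut this thread down
                return
            self.handler(req)                 # DoWorkOn(req), in this same thread

results = []
results_lock = threading.Lock()

def handler(req):
    with results_lock:
        results.append(req * req)             # stand-in for real work

requests = queue.Queue()
pool = LeaderFollowerPool(requests, handler, n_threads=3)
pool.start()
for n in range(4):
    requests.put(n)
for _ in range(3):
    requests.put(None)                        # one sentinel per thread
pool.join()
print(sorted(results))
```

The key structural difference from the QueueHandler version is that no request is ever handed from one thread to another: the thread that reads the request is the thread that processes it.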
First of all, with work queues you need locks on the queue itself. Second, and this is the main issue, with a work queue you have to wake up a worker thread, and that thread won't process the work until the system's task scheduler actually runs it. This gets worse when the work item is processed on a different processor core than the one filling the queue. So you can achieve much lower latencies with the Leader/Follower pattern.