We use a PPL Concurrency::TaskScheduler to dispatch events from our media pipeline to subscribed clients (typically a GUI app).
These events are C++ lambdas passed to Concurrency::TaskScheduler::ScheduleTask().
But, under load, the pipeline can generate events at a greater rate than the client can consume them.
Is there a PPL strategy I can use to make the event dispatcher drop an event (in reality, a scheduled task) when the number of already-scheduled tasks exceeds N? And if not, how would I roll my own?
Looking at the API, there appears to be no way to know whether the scheduler is under heavy load, nor any way to tell it how to behave in such circumstances. My understanding is that while it is possible to limit how many concurrent threads may run within a scheduler using policies, the protocol by which the scheduler may accept or refuse new tasks isn't clear to me.
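For reference, here is a minimal sketch of capping concurrency with a SchedulerPolicy; the MinConcurrency/MaxConcurrency values (1 and 4) are arbitrary choices for illustration. Note that this only limits how many threads run at once, it does not bound how many tasks can pile up in the scheduler:

```cpp
#include <concrt.h>

using namespace Concurrency;

int main()
{
    // A SchedulerPolicy caps how many threads run concurrently,
    // but it does not limit how many scheduled tasks accumulate.
    SchedulerPolicy policy(2,                  // number of key/value pairs below
                           MinConcurrency, 1,  // at least one thread
                           MaxConcurrency, 4); // never more than four threads

    Scheduler* scheduler = Scheduler::Create(policy);
    scheduler->Attach();            // make it the scheduler for this context

    // ... ScheduleTask() calls now go through this scheduler ...

    CurrentScheduler::Detach();
    scheduler->Release();
    return 0;
}
```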
My bet is that you will have to implement that mechanism yourself, by counting how many tasks are already in the scheduler and keeping a size-limited queue ahead of the scheduler to help you regulate the flow of incoming tasks.
I suppose you could use a simple std::queue for your lambdas: each time a new event arrives, check how many tasks are running and schedule as many from the queue as possible to reach your maximum running-task count. If the queue is still full after that, refuse the new event.
To handle the running-task accounting, you could wrap each task in a function that decrements the counter when it completes (use a mutex to avoid races), and increment the counter when scheduling a new task.
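Here is a minimal sketch of that idea, assuming C++14. BoundedDispatcher, TryDispatch, maxRunning and maxQueued are hypothetical names, not PPL API; the only PPL call used is Concurrency::CurrentScheduler::ScheduleTask(), with a small trampoline because it takes a plain function pointer plus a void* rather than a lambda:

```cpp
#include <concrt.h>

#include <cstddef>
#include <functional>
#include <mutex>
#include <queue>
#include <utility>

// Hypothetical wrapper, not part of PPL: keeps a size-limited queue in front
// of the scheduler, counts in-flight tasks, and refuses new events once both
// the running slots and the queue are full.
class BoundedDispatcher
{
public:
    BoundedDispatcher(std::size_t maxRunning, std::size_t maxQueued)
        : maxRunning_(maxRunning), maxQueued_(maxQueued) {}

    // Returns false when the event has to be dropped.
    bool TryDispatch(std::function<void()> event)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        if (running_ < maxRunning_)
        {
            ++running_;                      // claim a running slot
            Schedule(std::move(event));
            return true;
        }
        if (pending_.size() < maxQueued_)
        {
            pending_.push(std::move(event)); // park it until a slot frees up
            return true;
        }
        return false;                        // back-pressure: refuse the event
    }

private:
    void Schedule(std::function<void()> event)
    {
        // Wrap the event so the accounting runs when it finishes, then hand
        // it to the scheduler through a plain-function trampoline.
        auto* wrapped = new std::function<void()>(
            [this, event = std::move(event)]()
            {
                event();
                OnTaskCompleted();
            });

        Concurrency::CurrentScheduler::ScheduleTask(
            [](void* p)
            {
                auto* fn = static_cast<std::function<void()>*>(p);
                (*fn)();
                delete fn;
            },
            wrapped);
    }

    void OnTaskCompleted()
    {
        std::function<void()> next;
        {
            std::lock_guard<std::mutex> lock(mutex_);
            if (pending_.empty())
            {
                --running_;                  // free the slot
                return;
            }
            next = std::move(pending_.front());
            pending_.pop();                  // the slot is reused by 'next'
        }
        Schedule(std::move(next));
    }

    std::mutex mutex_;
    std::queue<std::function<void()>> pending_;
    std::size_t running_ = 0;
    const std::size_t maxRunning_;
    const std::size_t maxQueued_;
};
```

The pipeline would then call TryDispatch() for each event and decide what to do when it returns false: drop the event, coalesce it with the previous one, or briefly block the producer.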