Let's say I have a task provider: a readable channel that may or may not provide a task, depending on workload. The specifics are such that there may be no work for hours and then a sudden burst of tasks.
I want my goroutine pool to grow from 1 to N (where N is the maximum concurrency) when work appears, and then automatically collapse back down to 1 when a goroutine has had no work for longer than X seconds, to avoid wasting memory/CPU.
I could have used a fixed pool, since goroutines are dirt cheap, but I don't like the idea of having thousands of idle goroutines; I may have better uses for those resources (it should mostly be RAM, but still).
The collapsing part is rather easy:
```go
for {
    timeoutTimer := time.NewTimer(WORKER_ROUTINE_TIMEOUT)
    select {
    case taskContext, isBatchRunning := <-runner.tasksCh:
        // Stop the timer so it doesn't linger until it fires.
        timeoutTimer.Stop()
        if !isBatchRunning {
            log.Print("task provider is closed, quitting worker goroutine")
            return
        }
        runner.job.Process(&taskContext)
    case <-timeoutTimer.C:
        // No work arrived within the timeout; let this worker exit.
        return
    }
}
```
But I'm not sure how to make the pool grow dynamically, i.e. under which condition to spawn a new worker.
The priority for this pool is being able to react quickly to increased load, expanding up to N (the maximum concurrency) goroutines, while eventually collapsing back to a more reasonable number (1 at minimum) when the load decreases.
P.S. I looked at the https://github.com/Jeffail/tunny package, but it doesn't seem to have anything like adaptive scaling of the current pool size. Am I missing something?
Thanks!
Well, I'm not sure that a pool is what you need. Goroutines are fast to launch, and you probably don't need to keep them ready all the time.
For this task I would use a simple semaphore. It's really easy to implement a semaphore in Go using channels. You can see my personal example here.
You just create a semaphore with a desired capacity (which will be your maximum number of allowed goroutines) and then:
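The answerer's linked example isn't reproduced here, but a minimal channel-based semaphore along those lines might look like this (the `Semaphore` type, `maxConcurrency` value, and the task slice are illustrative assumptions, not the answerer's actual code):

```go
package main

import (
	"fmt"
	"sync"
)

// Semaphore is a counting semaphore backed by a buffered channel.
// Its capacity is the maximum number of concurrently running goroutines.
// This is a sketch; the answerer's linked implementation may differ.
type Semaphore chan struct{}

func NewSemaphore(capacity int) Semaphore {
	return make(Semaphore, capacity)
}

// Acquire blocks while the semaphore is at capacity.
func (s Semaphore) Acquire() { s <- struct{}{} }

// Release frees one slot for another goroutine.
func (s Semaphore) Release() { <-s }

func main() {
	const maxConcurrency = 4 // N: maximum number of allowed goroutines (assumed value)

	sem := NewSemaphore(maxConcurrency)
	var wg sync.WaitGroup

	tasks := []int{1, 2, 3, 4, 5, 6, 7, 8} // stand-in for tasks read from the channel
	for _, t := range tasks {
		sem.Acquire() // blocks once maxConcurrency goroutines are in flight
		wg.Add(1)
		go func(task int) {
			defer wg.Done()
			defer sem.Release()
			fmt.Println("processing task", task)
		}(t)
	}
	wg.Wait()
}
```

With this pattern a goroutine is launched per task, but never more than `maxConcurrency` exist at once, and when the channel goes quiet no goroutines remain at all, so there is nothing to collapse.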
Simple as that. And don't worry about launching goroutines on demand; trying to avoid that is really over-optimizing.