I have a requirement to run 100 Sidekiq jobs per second. I increased my server capacity to 8 CPUs and created 4 Sidekiq processes, but it is still processing only 50 jobs per minute. I am not sure what I am missing.
I found a solution to the issue, so I am answering my own question.
I am using Sidekiq 4. According to the Sidekiq documentation, this version can process up to 800 jobs per second.
So I wrote a dummy worker with no logic and enqueued around 100k jobs. These jobs ran at a rate of about 666 jobs per second.
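The benchmark described above can be sketched roughly as follows. `NoOpWorker` is a hypothetical name; in a real app the class would also `include Sidekiq::Worker`, which is left commented out here so the sketch runs without the sidekiq gem or a Redis server.

```ruby
# Dummy worker with no logic, used to measure Sidekiq's raw dispatch
# overhead rather than any application work.
class NoOpWorker
  # include Sidekiq::Worker   # required in a real Sidekiq app

  def perform(*)
    # intentionally empty
  end
end

# Enqueue ~100k jobs; push_bulk avoids 100k separate Redis round trips.
# (Commented out because it needs a running Redis.)
# args = (1..100_000).map { |i| [i] }
# Sidekiq::Client.push_bulk('class' => 'NoOpWorker', 'args' => args)
```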
From this I concluded that it is not the Sidekiq configuration limiting performance; it is my worker logic that takes too long to execute each job.
I started optimizing the sidekiq worker logic and reduced its execution time. It worked out for me 😎
Now I can run 30 jobs per second with a single Sidekiq process, and adding more processes scales throughput roughly linearly:
total throughput ≈ (jobs per second per process) × (number of Sidekiq processes).
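A quick back-of-the-envelope estimate using the numbers above (30 jobs/sec per process and 4 processes are illustrative values, not guarantees; real scaling tops out at Redis, database, and CPU limits):

```ruby
# Estimated total throughput for several identical Sidekiq processes.
per_process = 30            # jobs per second observed for one process
processes   = 4
total = per_process * processes
puts total                  # prints 120
```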
In short: always keep workers lightweight, which significantly improves Sidekiq performance, and only then consider increasing server capacity.
Please provide some detail on what you are currently doing; but without knowing more, here are two general suggestions:
[1] Use Sidekiq's Redis connection pool and pipeline your commands to cut round trips:
Sidekiq.redis do |conn|
  conn.pipelined do
    # do stuff
  end
end
This should significantly reduce processing time on queues.
[2] You could always use 'push_bulk' like so:
args = model.map {|uid| [uid] }
Sidekiq::Client.push_bulk('class' => YourWorker, 'args' => args)
This removes the per-job Redis round-trip latency. It takes the same args as 'push', but expects an array of arrays.
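To make the "array of arrays" shape concrete, here is a small sketch; `uids` is a hypothetical list of model ids and `YourWorker` stands in for your worker class:

```ruby
# Each inner array is the argument list for one job.
uids = [101, 102, 103]
args = uids.map { |uid| [uid] }
p args  # prints [[101], [102], [103]]

# Equivalent to three separate
#   Sidekiq::Client.push('class' => 'YourWorker', 'args' => [uid])
# calls, but enqueued with a single Redis round trip:
# Sidekiq::Client.push_bulk('class' => 'YourWorker', 'args' => args)
```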