I am researching Celery as a background worker for my Flask application. The application is hosted on a shared Linux server (I am not entirely sure what this means) on the Linode platform. The description says the server has 1 CPU and 2 GB RAM. I read that a Celery worker starts worker processes under it, and that their number defaults to the number of cores on the machine - which is 1 in my case.
I will have situations where users ask for multiple background jobs to be run. They would all be placed in a Redis/RabbitMQ queue (not decided yet). So if I start Celery with a concurrency greater than 1 (say --concurrency 4), would that be of any use? Or would the extra worker processes be useless in this case, since I have a single CPU?
The tasks would mostly involve reading information from, and writing it to, Google Sheets and the application database. These interactions can get heavy at times, taking about 5-15 minutes. Given that, does the answer to the above question change, since there will be stretches where the CPU is not being utilized?
Any help on this would be great, as I don't want one job to keep waiting for the previous one to finish before it can start. Or is the only solution to pay for a better machine?
Thanks
This is a common scenario, so do not worry. If your tasks are not CPU-heavy, you can oversubscribe the CPU the way you plan to. If all they do is I/O, you can pick an even higher number than 4 and it will all work just fine.
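As a minimal sketch (the module name, broker URL, and task body below are placeholders, not taken from your setup): an I/O-bound task like your Google Sheets sync spends most of its time waiting on network responses, so several prefork worker processes can take turns making progress on a single core.

```python
# tasks.py - minimal sketch; the app name, broker URL, and task body are assumptions
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")

@app.task
def sync_sheet(sheet_id):
    # Mostly I/O: the process spends its time waiting on Google Sheets
    # and database responses, leaving the CPU free for the other workers.
    ...
```

You would then start the worker with more processes than cores, e.g. `celery -A tasks worker --concurrency=8`. The main thing to watch on a 2 GB machine is memory rather than CPU, since each prefork child is a separate process.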