
Problems stopping celeryd

I'm running celeryd as a daemon, but I sometimes have trouble stopping it gracefully. When I send the TERM signal (in this case via service celeryd stop) while there are items in the queue, celeryd stops taking new jobs and shuts down all the worker processes. The parent process, however, won't shut down.

I've just run into a scenario where I had celeryd running on two separate worker machines: A and B. With about 1000 messages on the RabbitMQ server, I shut down A and experienced the situation described above. B continued to work, but then stalled with about 40 messages left on the server. I was, however, able to stop B correctly.

I restarted B to see if it would take the 40 items off the queue, but it would not. Next, I hard-killed A, after which B grabbed and completed the tasks.

My conclusion is that the parent process has reserved the 40 items from our RabbitMQ server for its children. It reaps the children correctly, but will not release the items back to RabbitMQ unless I manually kill it.

Has anyone experienced something similar?

I'm running Celery 2.2.2.
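For reference, the stop-then-hard-kill sequence I end up performing can be sketched like this (a minimal emulation using a stand-in child process, not celeryd itself; the timeout value is arbitrary):

```python
import signal
import subprocess

# Sketch of the shutdown sequence described above, with a stand-in
# process instead of the celeryd parent. SIGTERM requests a warm
# shutdown; if the process never exits, fall back to SIGKILL, which
# is what finally makes RabbitMQ redeliver the reserved messages.
proc = subprocess.Popen(["sleep", "60"])
proc.send_signal(signal.SIGTERM)   # warm shutdown request
try:
    proc.wait(timeout=10)          # give it time to exit on its own
except subprocess.TimeoutExpired:
    proc.kill()                    # hard kill (SIGKILL) as a last resort
    proc.wait()
print(proc.returncode)             # negative signal number on POSIX
```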

asked Feb 18 '11 by Zach

1 Answer

I believe this is related to:

https://github.com/celery/celery/issues/264

Setting

CELERY_DISABLE_RATE_LIMITS = False

in your settings.py file should work.
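For context, here is a minimal settings.py fragment showing where the line goes (the broker values below are placeholders for your own RabbitMQ instance, not part of the fix itself):

```python
# settings.py -- minimal sketch; BROKER_* values are placeholders
# for your own RabbitMQ instance.
BROKER_HOST = "localhost"
BROKER_PORT = 5672

# Works around the shutdown hang discussed in celery issue #264.
CELERY_DISABLE_RATE_LIMITS = False
```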

answered Nov 08 '22 by Adam Nelson