I am using Fabric to deploy a Celery broker (running RabbitMQ) and multiple Celery workers with celeryd daemonized through supervisor. I cannot for the life of me figure out how to reload the tasks.py module short of rebooting the servers.
/etc/supervisor/conf.d/celeryd.conf
[program:celeryd]
directory=/fab-mrv/celeryd
environment=[RABBITMQ credentials here]
command=xvfb-run celeryd --loglevel=INFO --autoreload
autostart=true
autorestart=true
celeryconfig.py
import os
## Broker settings
BROKER_URL = "amqp://%s:%s@hostname" % (os.environ["RMQU"], os.environ["RMQP"])
# List of modules to import when celery starts.
CELERY_IMPORTS = ("tasks", )
## Using the database to store task state and results.
CELERY_RESULT_BACKEND = "amqp"
CELERYD_POOL_RESTARTS = True
Additional information
celery --version: 3.0.19 (Chiastic Slide)
python --version: 2.7.3
lsb_release -a: Ubuntu 12.04.2 LTS
rabbitmqctl status: ... 2.7.1 ...
Here are some things I have tried:
the celeryd --autoreload flag
sudo supervisorctl restart celeryd
celery.control.broadcast('pool_restart', arguments={'reload': True}) (the full call is sketched below)
ps auxww | grep celeryd | grep -v grep | awk '{print $2}' | xargs kill -HUP
And unfortunately, nothing causes the workers to reload the tasks.py module (e.g. after running git pull to update the file). The gist of the relevant fab functions is available here.
The brokers/workers run fine after a reboot.
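For reference, the pool_restart broadcast above was issued roughly as follows. This is only a sketch; the way the app object is constructed here is an assumption and not taken from the question.
from celery import Celery
import celeryconfig

app = Celery()
app.config_from_object(celeryconfig)

# Ask every worker to restart its pool and re-import the task modules;
# this requires CELERYD_POOL_RESTARTS = True on the workers.
app.control.broadcast('pool_restart', arguments={'reload': True}, reply=True)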
Create a separate management command called celery. Write a function that kills the existing worker and starts a new one, then hook that function into Django's autoreload as shown below. You can then run the worker with python manage.py celery, and it will autoreload whenever the codebase changes.
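A minimal sketch of such a command, assuming a Django project; the worker command line, the app name "proj", and the pkill pattern are placeholders. Save it as something like yourapp/management/commands/celery.py:
import shlex
import subprocess

from django.core.management.base import BaseCommand
from django.utils import autoreload


def restart_celery():
    # Kill any running worker, then start a fresh one that picks up the new code.
    subprocess.call(shlex.split('pkill -f "celery worker"'))
    subprocess.call(shlex.split('celery worker --app=proj --loglevel=INFO'))


class Command(BaseCommand):
    def handle(self, *args, **options):
        self.stdout.write('Starting celery worker with autoreload...')
        # Older Django versions expose this hook as autoreload.main(restart_celery).
        autoreload.run_with_reloader(restart_celery)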
In setups that use Redis as the broker and result backend, Redis stores the messages produced by the application code describing the work to be done in the Celery task queue, and it also serves as storage for results coming off the Celery queues, which are then retrieved by consumers of the queue.
Celery communicates via messages, usually using a broker to mediate between clients and workers. To initiate a task, the client adds a message to the queue, and the broker then delivers that message to a worker.
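As a minimal illustration of that flow (the names and broker URL here are hypothetical, not from the question), a task module defines the work and a client enqueues it:
tasks.py
# A minimal task module; the broker URL is a placeholder.
from celery import Celery

app = Celery('tasks', broker='amqp://guest@localhost//')

@app.task
def add(x, y):
    return x + y

# A client enqueues work by sending a message to the broker, e.g.:
#   from tasks import add
#   add.delay(2, 3)   # a worker consumes the message and runs add(2, 3)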
Just a shot in the dark: with the celeryd --autoreload option, did you make sure you have one of the file system notification backends installed? Celery recommends pyinotify for Linux, so I'd start by making sure you have that installed.
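For example (assuming pip is available, mirroring the watchdog install below):
pip install pyinotify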
I faced a similar problem and was able to use Watchdog to reload the tasks.py module when changes are detected. To install:
pip install watchdog
You can programmatically use the Watchdog API, for example, to monitor for directory changes in the file system. Additionally, Watchdog provides an optional shell utility called watchmedo that can be used to execute commands on events. Here is an example that starts the Celery worker via Watchdog and reloads it on any changes to .py files, including changes made via git pull:
watchmedo auto-restart --directory=./ --pattern="*.py" --recursive -- celery worker --app=worker.app --concurrency=1 --loglevel=INFO
Using Watchdog's watchmedo, I was able to git pull changes and the respective tasks.py modules were auto-reloaded without any reboot of the container or server.