My Celery tasks stop getting executed intermittently: RabbitMQ goes down, and then I need to restart it manually. The last time this happened (15-16 hours ago), I fixed it manually by reinstalling RabbitMQ, and it started working again:
sudo apt-get --purge remove rabbitmq-server
sudo apt-get install rabbitmq-server
Now it is again showing `Celery - errno 111 connection refused`.
Following is my config.
BROKER_URL = 'amqp://'
CELERY_RESULT_BACKEND = 'amqp://'
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TIMEZONE = 'Europe/Oslo'
CELERY_ENABLE_UTC = True
CELERY_CREATE_MISSING_QUEUES = True
Please let me know where I'm going wrong and how I should rectify it.
Part 2
Also, I have multiple queues. I can run Celery within the project directory, but when daemonizing, the workers don't pick up tasks; I still need to start the workers manually. How can I daemonize it?
Here is my celeryd conf:
# Name of nodes to start, here we have a single node
CELERYD_NODES="w1 w2 w3 w4"
CELERY_BIN="/usr/local/bin/celery"
# Where to chdir at start.
CELERYD_CHDIR="/var/www/fractal/parser-quicklook/"
# Python interpreter from environment, if using virtualenv
#ENV_PYTHON="/somewhere/.virtualenvs/MyProject/bin/python"
# How to call "manage.py celeryd_multi"
#CELERYD_MULTI="/usr/local/bin/celeryd-multi"
# How to call "manage.py celeryctl"
#CELERYCTL="/usr/local/bin/celeryctl"
#CELERYBEAT="/usr/local/bin/celerybeat"
# Extra arguments to celeryd
CELERYD_OPTS="--time-limit=300 --concurrency=8 -Q BBC,BGR,FASTCOMPANY,Firstpost,Guardian,IBNLIVE,LIVEMINT,Mashable,NDTV,Pandodaily,Reuters,TNW,TheHindu,ZEENEWS"
# Name of the celery config module, don't change this.
CELERY_CONFIG_MODULE="celeryconfig"
# %n will be replaced with the nodename.
CELERYD_LOG_FILE="/var/log/celery/%n.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
# Workers should run as an unprivileged user.
#CELERYD_USER="nobody"
#CELERYD_GROUP="nobody"
# Set any other env vars here too!
PROJET_ENV="PRODUCTION"
# Name of the project's settings module.
# In this case it is just settings and not the full path, because it will chdir to
# the project folder first.
CELERY_CREATE_DIRS=1
The celeryconfig is already provided in Part 1.
Here is my project directory structure:
project
|-- main.py
|-- project
|   |-- celeryconfig.py
|   |-- __init__.py
|-- tasks.py
How can I daemonize with the queues? I have provided the queues in CELERYD_OPTS as well.
Is there a way to dynamically daemonize the number of queues in Celery? For example, we have CELERY_CREATE_MISSING_QUEUES = True for creating the missing queues. Is there something similar to daemonize the Celery queues?
This error means that the client cannot connect to the port on the computer running the server script. It can be caused by a few things, such as a lack of routing to the destination, or a firewall somewhere between your client and the server - it could be on the server itself, on the client, etc.
celery -A yourproject.app inspect status will give the status of your workers. celery -A yourproject.app inspect active will give you a list of the tasks currently running, etc.
Redis is the datastore and message broker between Celery and Django. In other words, Django and Celery use Redis to communicate with each other (instead of a SQL database). Redis can also be used as a cache. An alternative broker for Django and Celery is RabbitMQ (not covered here).
Not sure if you fixed this already, but from the look of it, it seems you have a bunch of problems.
First and foremost, check whether your RabbitMQ server has trouble staying up for some reason (/var/log/syslog might be a good place to start). You don't say anything about your server OS, but assuming it's Debian/Ubuntu because you mention apt-get, here's a list of OS log locations that might help: https://help.ubuntu.com/community/LinuxLogFiles
Also, be sure that your RabbitMQ server has been configured with the correct credentials and allows access from your worker's location (e.g. enable connections other than loopback for users); here's what you need to do: https://www.rabbitmq.com/access-control.html
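As a sketch of that access-control setup (the user, password, and vhost names below are placeholders, not values from the question):

```shell
# Run on the RabbitMQ host. Create a dedicated user and virtual host
# for the workers instead of relying on the loopback-only guest user.
sudo rabbitmqctl add_user myuser mypassword
sudo rabbitmqctl add_vhost myvhost
# Grant configure/write/read permissions on that vhost.
sudo rabbitmqctl set_permissions -p myvhost myuser ".*" ".*" ".*"
# Verify the broker is actually up and listening.
sudo rabbitmqctl status
```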
Then, check that you have configured your worker with the correct authentication credentials. A full URL should look similar to the following (the user must be granted access to the specific virtual host; it's quite easy to configure via the RabbitMQ management interface, https://www.rabbitmq.com/management.html):
BROKER_URL = 'amqp://user:pass@host:port/virtualhost'
CELERY_RESULT_BACKEND = 'amqp://user:pass@host:port/virtualhost'
And finally, try to trace back the exception in Python; that should hopefully give you some additional information about the error.
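One way to narrow it down outside Celery is a minimal connectivity check with plain sockets (no broker libraries required); this is a sketch, and the host/port below assume a default local RabbitMQ:

```python
import errno
import socket

def check_broker(host, port, timeout=3.0):
    """Return 0 if a TCP connection to the broker succeeds, otherwise
    the errno. 111 (ECONNREFUSED) means nothing is listening on that
    port, i.e. RabbitMQ is down or bound to a different interface."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port))

if __name__ == "__main__":
    # 5672 is RabbitMQ's default AMQP port.
    result = check_broker("127.0.0.1", 5672)
    if result == 0:
        print("broker reachable")
    else:
        print("connect failed: errno %d (%s)"
              % (result, errno.errorcode.get(result, "?")))
```

If this reports errno 111, the problem is the broker process itself rather than Celery's configuration.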
hth
P.S. Re: daemonizing your celery worker, @budulianin's answer is spot on!