I'm trying to get Celery's official tutorial to work, but I keep getting this error:
D:\test>celery -A tasks worker --loglevel=info
-------------- celery@BLR122S v3.0.17 (Chiastic Slide)
---- **** -----
--- * * * -- [Configuration]
-- * - **** --- . broker: amqp://guest@localhost:5672//
- ** ---------- . app: tasks:0x2a76850
- ** ---------- . concurrency: 2 (processes)
- ** ---------- . events: OFF (enable -E to monitor this worker)
- ** ----------
- * --- * --- [Queues]
-- ******* ---- . celery: exchange:celery(direct) binding:celery
--- ***** -----
[Tasks]
. tasks.add
[2013-03-29 17:50:52,533: WARNING/MainProcess] celery@BLR122S ready.
[2013-03-29 17:50:52,568: INFO/MainProcess] consumer: Connected to amqp://guest@127.0.0.1:5672//.
[2013-03-29 17:51:32,496: INFO/MainProcess] Got task from broker: tasks.add[83459233-ce54-40ed-a2a8-ee0d60768006]
[2013-03-29 17:51:32,562: ERROR/MainProcess] Task tasks.add[83459233-ce54-40ed-a2a8-ee0d60768006] raised exception: Task of kind 'tasks.add' is not registered, please make sure it's imported.
Traceback (most recent call last):
  File "C:\Python27\lib\site-packages\billiard\pool.py", line 293, in worker
    result = (True, func(*args, **kwds))
  File "C:\Python27\lib\site-packages\celery\task\trace.py", line 320, in _fast_trace_task
    return _tasks[task].__trace__(uuid, args, kwargs, request)[0]
  File "C:\Python27\lib\site-packages\celery\app\registry.py", line 20, in __missing__
    raise self.NotRegistered(key)
NotRegistered: 'tasks.add'
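The NotRegistered in the traceback comes from a registry lookup: the worker keeps a mapping of task names to task objects, and a missing key raises the error. A simplified sketch of that mechanism (modeled loosely on celery/app/registry.py, not Celery's actual implementation):

```python
# Simplified sketch of Celery's task-registry behaviour
# (illustrative only; names mirror the traceback, not real Celery code).

class NotRegistered(KeyError):
    """Raised when a task name is not in the registry."""

class TaskRegistry(dict):
    def __missing__(self, key):
        # dict calls __missing__ when a key lookup fails.
        raise NotRegistered(key)

registry = TaskRegistry()
# Roughly what the @celery.task decorator does: register under "module.name".
registry['tasks.add'] = lambda x, y: x + y

print(registry['tasks.add'](1, 1))  # -> 2
try:
    registry['tasks.mul']
except NotRegistered as exc:
    print('NotRegistered:', exc)  # -> NotRegistered: 'tasks.mul'
```

So the worker only reports NotRegistered when the name the message carries ('tasks.add') is absent from the registry it built at startup, which is why the task module must be importable by the worker.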
I installed celery==3.0.17 and RabbitMQ, then started the worker with:
D:\test>celery -A tasks worker --loglevel=info
tasks.add appears under [Tasks], but calling it with:
>>> from tasks import add
>>> add.delay(1,1)
# Out: AsyncResult: 83459233-ce54-40ed-a2a8-ee0d60768006
got the failure above. Does anyone have the same problem?
Edit: Here is my tasks.py, copied from the tutorial.
from celery import Celery
celery = Celery('tasks', broker='amqp://guest@localhost//')
@celery.task
def add(x, y):
    return x + y
Celery will stop retrying after 7 failed attempts and raise an exception.
The prefetch limit caps the number of tasks (messages) a worker can reserve for itself. If it is zero, the worker will keep consuming messages, ignoring that other available worker nodes may be able to process them sooner, or that the messages may not even fit in memory.
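In Celery 3.x the prefetch behaviour is tuned in the configuration module; a minimal sketch (setting names from the 3.x configuration reference, values purely illustrative):

```python
# celeryconfig.py -- illustrative Celery 3.x settings

BROKER_URL = 'amqp://guest@localhost//'

# Each worker process reserves at most this many tasks at a time.
# 1 keeps prefetching to a minimum; the 3.x default is 4.
CELERYD_PREFETCH_MULTIPLIER = 1
```

The worker would then be started with celery -A tasks worker so it picks up the configuration.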
The following will solve your problem:
from tasks import add
res = add.delay(1, 2)  # call add
res.get()              # get the result
Restart your worker after making changes, using:
celery -A tasks worker --loglevel=info