 

Stop the thread until the celery task finishes

I have a Django web server and a form in which the user enters information. Every time the form data changes I update the model in my database, and at a certain point, once something validates, I create a long-running Celery task so that the results are ready even before the user clicks Next.

I am using django-celery with RabbitMQ as the broker. My question: if the task is still not finished, what is the most appropriate way to block the response thread in Django until the task reaches the SUCCESS state? I tried using AsyncResult.get for that, but it locks the thread for a very long time before returning the result, i.e. it is not instant. Does anyone have an idea how to solve this?

asked Nov 27 '13 by Bojan Jovanovic


People also ask

How do you stop the execution of a celery task?

revoke cancels the task execution. If a task is revoked, the workers ignore it and do not execute it. If you don't use persistent revokes, your task can be executed after a worker restart. revoke has a terminate option, which is False by default.

What happens when a celery task fails?

Celery will stop retrying after 7 failed attempts and raise an exception.

Is Celery multithreaded?

So Celery (and other queue frameworks) has other benefits as well: think of it as a 'task/function manager' rather than just a way of multithreading.

How does Celery execute tasks?

The process of task execution by Celery can be broken down into: your application sends the task to the task broker; it is then reserved by a worker for execution; finally, the result of the task execution is stored in the result backend.
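The broker → worker → result-backend flow described above can be sketched with stdlib primitives alone (no real broker here: a queue.Queue stands in for RabbitMQ and a plain dict for the result backend; all names are illustrative):

```python
import queue
import threading

# Stand-ins: a queue.Queue plays the broker, a dict plays the result backend.
broker = queue.Queue()
result_backend = {}

def worker():
    """Reserve tasks from the broker, run them, store results."""
    while True:
        task_id, func, args = broker.get()
        result_backend[task_id] = func(*args)
        broker.task_done()

threading.Thread(target=worker, daemon=True).start()

# The application "sends" a task to the broker.
broker.put(("task-1", lambda x, y: x + y, (2, 3)))
broker.join()  # block until the worker has processed everything queued

print(result_backend["task-1"])  # 5
```

Celery's real components differ in every detail, but the three roles (producer, worker, result store) map one-to-one onto this sketch.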


2 Answers

You can just wait until the result is ready().

from time import sleep

result = some_task.apply_async(args=myargs)
while not result.ready():  # poll the result backend
    sleep(0.5)
result_output = result.get()

It appears there is also a wait() method, so you could just use that. The following is basically doing the same thing as the code above:

result = some_task.apply_async(args=myargs)
result_output = result.wait(timeout=None, interval=0.5)
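Since the question is about the thread blocking for too long, it is worth capping the wait when polling like this in a request thread. A minimal generic helper (wait_with_timeout is my name, not Celery's; it only assumes the AsyncResult-style ready()/get() interface):

```python
import time

def wait_with_timeout(result, timeout=10.0, interval=0.5):
    """Poll result.ready() until it is True or `timeout` seconds pass.

    Returns result.get() on success, raises TimeoutError otherwise.
    Assumes only an AsyncResult-like object with ready() and get().
    """
    deadline = time.monotonic() + timeout
    while not result.ready():
        if time.monotonic() >= deadline:
            raise TimeoutError("task did not finish in time")
        time.sleep(interval)
    return result.get()
```

In a Django view you would likely catch the TimeoutError and return a "still processing" response instead of blocking the worker thread indefinitely.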
answered Nov 10 '22 by monkut


One way to accomplish that would be to have the results waiting in Redis and fetch them with a blocking pop operation, keyed by some unique value such as the session id; note its timeout capability.
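Redis itself is not needed to see the shape of this pattern; the sketch below simulates a per-session blocking pop with a stdlib queue (with redis-py the analogous call would be `r.blpop(session_id, timeout=...)`; the session key and function names here are illustrative):

```python
import queue
import threading

# One queue per session id stands in for a Redis list (the BLPOP target).
results = {"session-42": queue.Queue()}

def celery_side(session_id, value):
    """What the task would do on completion: push the result (like LPUSH)."""
    results[session_id].put(value)

def django_side(session_id, timeout=5.0):
    """What the view does: block until the result arrives or time out."""
    try:
        return results[session_id].get(timeout=timeout)  # analogous to BLPOP
    except queue.Empty:
        return None  # task still running; respond with "pending"

# Simulate the task finishing 0.1 s after the view starts waiting.
threading.Timer(0.1, celery_side, args=("session-42", "report ready")).start()
print(django_side("session-42"))  # report ready
```

The blocking pop gives you the "wake up the instant the result lands" behavior the asker wants, instead of sleep-based polling.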

answered Nov 10 '22 by Guy Gavriely