For a task like this:
from celery.decorators import task

@task()
def add(x, y):
    if not x or not y:
        raise Exception("test error")
    return self.wait_until_server_responds(
if it throws an exception and I want to retry it from the daemon side, how can I apply an exponential back-off algorithm, i.e. retry after 2^2, 2^3, 2^4, etc. seconds?
Also, is the retry maintained on the server side, so that if the worker happens to get killed, the next worker that spawns will pick up the retried task?
You can use the task's retry method to make this work. By setting the countdown argument to 5, the task will retry after a 5-second delay.
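As a minimal sketch of that fixed-delay retry (the Twitter client and its WhaleFail exception are placeholders, mirroring the example further down):

from celery import shared_task

@shared_task(bind=True, max_retries=3)
def update_status(self, auth, status):
    try:
        # Twitter is a hypothetical client used only for illustration
        Twitter(auth).update_status(status)
    except Twitter.WhaleFail as exc:
        # re-queue this task; it will run again after a 5 second delay
        raise self.retry(exc=exc, countdown=5)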
The "shared_task" decorator allows creation of Celery tasks for reusable apps as it doesn't need the instance of the Celery app. It is also easier way to define a task as you don't need to import the Celery app instance.
The bind argument means that the function will be a “bound method” so that you can access attributes and methods on the task type instance.
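A short sketch combining the two: a shared_task declared with bind=True, so the first argument is the task instance and its request metadata is reachable (assuming the task is executed by a worker, so that self.request is populated):

from celery import shared_task

@shared_task(bind=True)
def inspect_self(self):
    # `self` is the bound task instance
    print(self.name)             # registered task name
    print(self.request.id)       # id of the current invocation
    print(self.request.retries)  # number of retries so far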
celery beat is a scheduler; it kicks off tasks at regular intervals, which are then executed by available worker nodes in the cluster. By default the entries are taken from the beat_schedule setting, but custom stores can also be used, such as storing the entries in a SQL database.
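A minimal beat_schedule sketch, assuming a Celery app named app and a task registered as tasks.add (the broker URL is an assumption):

from celery import Celery
from celery.schedules import crontab

app = Celery("proj", broker="redis://localhost:6379/0")  # assumed broker URL

app.conf.beat_schedule = {
    # run tasks.add every 30 seconds
    "add-every-30-seconds": {
        "task": "tasks.add",
        "schedule": 30.0,
        "args": (2, 2),
    },
    # run tasks.add every Monday at 7:30 in the morning
    "add-every-monday-morning": {
        "task": "tasks.add",
        "schedule": crontab(hour=7, minute=30, day_of_week=1),
        "args": (16, 16),
    },
}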
The task.request.retries attribute contains the number of tries so far, so you can use this to implement exponential back-off:
from celery.task import task

@task(bind=True, max_retries=3)
def update_status(self, auth, status):
    try:
        Twitter(auth).update_status(status)
    except Twitter.WhaleFail as exc:
        raise self.retry(exc=exc, countdown=2 ** self.request.retries)
To prevent a thundering herd problem, you may consider adding random jitter to your exponential backoff:
import random

self.retry(exc=exc, countdown=int(random.uniform(2, 4) ** self.request.retries))