I'm implementing a cache server that uses a Celery task to update the cache in the background. There is a single task, which I call with different arguments (cache keys).
Once this server is connected to my main production server, it will receive tens of requests per second for the same cache key, so I want to make sure there is never more than one update task with the same cache key in the Celery queue (the queue should behave as a queue and a set at the same time).
I thought of using a Redis set to enforce this before enqueueing the task, but I'm looking for a better way.
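For what it's worth, the Redis-set idea can be sketched as an atomic `SET NX` marker per cache key. The `enqueue_once` helper and the `queued:` prefix below are hypothetical names, not an established pattern:

```python
# Hypothetical helper: atomically claim a "queued" marker for a cache key
# before enqueueing the update task.  `client` is assumed to be a
# redis.Redis instance (or anything with the same set() signature).
def enqueue_once(client, key, enqueue, prefix="queued:", ttl=300):
    """Call enqueue(key) only if no update task for `key` is pending.

    SET NX EX is a single atomic round trip, so concurrent requests for
    the same key cannot both enqueue.  The TTL guards against markers
    leaking if a worker dies; the worker should delete the marker when
    it picks the key up.
    """
    if client.set(prefix + key, 1, nx=True, ex=ttl):
        enqueue(key)
        return True
    return False  # a task for this key is already queued
```

The worker side must clear the marker when it starts (or finishes) processing the key, otherwise new updates are blocked until the TTL expires.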
If a task is revoked, workers ignore it and do not execute it. Note that unless you use persistent revokes, the task can still be executed after a worker restart. revoke also has a terminate option, which is False by default; if you need to kill a task that is already executing, set terminate to True.
The process of task execution by Celery can be broken down into three steps: your application sends the task to the task broker, the task is then reserved by a worker for execution, and finally the result of the task execution is stored in the result backend.
Celery's task canvas can express this: a workflow that runs a startup task, then parallelizes multiple worker tasks, and finally fires off a reducer task.
Celery will stop retrying after 7 failed attempts and raise an exception (MaxRetriesExceededError, or the exception passed to retry()).
There is only one way: implement your own lock mechanism. The official docs have a nice example page. The only limit is your imagination.
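The pattern from that example page boils down to a per-key lock. A minimal Redis-style sketch, where `key_lock` is my own naming and `client` is assumed to be a redis.Redis instance (or anything with the same set()/delete() API):

```python
from contextlib import contextmanager

@contextmanager
def key_lock(client, key, ttl=600):
    """Best-effort distributed lock on a cache key.

    SET NX EX acquires the lock atomically; the TTL releases it
    automatically if the holding worker crashes before the finally
    block runs.
    """
    lock_id = "lock:" + key
    acquired = bool(client.set(lock_id, 1, nx=True, ex=ttl))
    try:
        yield acquired
    finally:
        if acquired:
            client.delete(lock_id)

# Inside the task body, the lock gates the actual work:
# with key_lock(redis_client, key) as got_lock:
#     if got_lock:
#         ...actually refresh the cache...
#     # else: another worker is already updating this key
```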
Hope this helps.