Using Celery, I want to write a task like this:

@celery.task
def add_task():
    ...
    if condition:
        add_task.apply_async(queue="default")

I know that in Python there is a maximum depth when you call a recursive function. Does this constraint also apply in Celery?
There shouldn't be any problem with that. Python's recursion limit only applies to nested calls on a single interpreter's call stack. add_task.apply_async() never calls the task directly; it just sends a message to the broker and returns immediately, and a worker later executes each task as a fresh, top-level invocation, so the call stack never grows.

However, if add_task waits synchronously on the result of the subtask, you could run into a deadlock where every worker is blocked waiting on a task that no worker is free to run — but it doesn't look like that from your small snippet. Technically, there is still a limit to how many tasks you can queue, because the broker will eventually run out of memory.

You're better off just trying it out to see what happens!
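To see why the recursion limit is irrelevant here, note that apply_async conceptually behaves like pushing work onto an explicit queue rather than making a nested call. Here is a minimal sketch (no Celery involved — the deque stands in for the broker, and the loop stands in for the worker) showing that a self-enqueueing task can "recurse" far past Python's default limit of ~1000 frames:

```python
from collections import deque

# Simulated broker queue: each entry is one pending "task invocation".
queue = deque()

def add_task(n):
    # Instead of calling itself (which would grow the call stack),
    # the task enqueues a new message, like apply_async does.
    if n > 0:
        queue.append(n - 1)

def run_worker():
    count = 0
    while queue:
        n = queue.popleft()  # the worker pulls one message at a time
        add_task(n)          # each run is a fresh, top-level call
        count += 1
    return count

queue.append(100_000)  # far beyond the default recursion limit
print(run_worker())    # executes 100,001 task runs, stack depth stays flat
```

The same shape applies to the real task: because each invocation starts from an empty stack, the only practical limits are broker memory and how fast workers drain the queue.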