Temporary queue made in Celery


I am using Celery with RabbitMQ. Lately, I have noticed that a large number of temporary queues are getting made.

So, I experimented and found that when a task fails (that is, a task raises an exception), a temporary queue with a random name (like c76861943b0a4f3aaa6a99a6db06952c) is created, and that queue remains.

Some properties of the temporary queue, as reported by rabbitmqadmin, are as follows:

auto_delete: True
consumers: 0
durable: False
messages: 1
messages_ready: 1

One such temporary queue is created every time a task fails (that is, raises an exception). How can I avoid this situation? In my production environment a large number of these queues build up.

Siddharth asked Aug 22 '11 06:08

People also ask

Is Celery a message queue?

Celery is an open source asynchronous task queue or job queue which is based on distributed message passing. While it supports scheduling, its focus is on operations in real time.

What is the default queue in Celery?

By default, Celery routes all tasks to a single queue and all workers consume this default queue. With Celery queues, you can control which Celery workers process which tasks. This can be useful if you have a slow and a fast task and you want the slow tasks not to interfere with the fast tasks.
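As a minimal sketch of this idea (the app name, broker URL, task names, and queue names below are illustrative assumptions, and the setting names follow newer Celery versions):

```python
from celery import Celery

# Broker URL and app name are assumptions for this sketch.
app = Celery("tasks", broker="amqp://guest@localhost//")

# Route the (hypothetical) slow task to its own queue so it cannot
# hold up the fast task, which stays on a separate queue.
app.conf.task_routes = {
    "tasks.generate_report": {"queue": "slow"},
    "tasks.send_email": {"queue": "fast"},
}
```

You would then start dedicated workers for each queue, for example `celery -A tasks worker -Q slow` and `celery -A tasks worker -Q fast`.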

How do Celery queues work?

Celery communicates via messages, usually using a broker to mediate between clients and workers. To initiate a task, the Celery client adds a message to the queue, and the broker then delivers that message to a worker. The most commonly used brokers are Redis and RabbitMQ.
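A minimal sketch of that client/broker/worker flow, assuming a RabbitMQ broker on localhost and a module called tasks (both are assumptions, not from the question):

```python
from celery import Celery

# Assumed broker URL for illustration.
app = Celery("tasks", broker="amqp://guest@localhost//")

@app.task
def add(x, y):
    return x + y

# Calling .delay() publishes a message to the broker; a worker started with
# `celery -A tasks worker` consumes that message and runs the task.
add.delay(2, 3)
```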

How does Celery backend work?

The results backend stores the state and return values of tasks. Calling delay places the task in the queue and returns a promise that can be used to monitor the status and get the result when it's ready. Calling get on that promise will block execution until the result is available.
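A short sketch of the delay()/get() flow described above; the broker and backend URLs and the add task are illustrative assumptions:

```python
from celery import Celery

# Assumed broker and result backend URLs; any supported backend works here.
app = Celery("tasks",
             broker="amqp://guest@localhost//",
             backend="redis://localhost:6379/0")

@app.task
def add(x, y):
    return x + y

# .delay() enqueues the task and returns an AsyncResult (the "promise").
promise = add.delay(2, 3)
print(promise.status)           # e.g. "PENDING" until a worker has run it
print(promise.get(timeout=10))  # blocks until the result is available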


1 Answer

It sounds like you're using amqp as the results backend. From the docs, here are the pitfalls of that particular setup:

  • Every new task creates a new queue on the server, with thousands of tasks the broker may be overloaded with queues and this will affect performance in negative ways. If you’re using RabbitMQ then each queue will be a separate Erlang process, so if you’re planning to keep many results simultaneously you may have to increase the Erlang process limit, and the maximum number of file descriptors your OS allows.
  • Old results will not be cleaned automatically, so you must make sure to consume the results or else the number of queues will eventually go out of control. If you’re running RabbitMQ 2.1.1 or higher you can take advantage of the x-expires argument to queues, which will expire queues after a certain time limit after they are unused. The queue expiry can be set (in seconds) by the CELERY_AMQP_TASK_RESULT_EXPIRES setting (not enabled by default).
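As a hedged sketch of the second point, this is roughly what enabling that expiry looks like in a config module; the setting names follow the old-style CELERY_* configuration from that era of Celery, and the one-hour value is just an example:

```python
# celeryconfig.py -- sketch only
CELERY_RESULT_BACKEND = "amqp"

# Expire unused result queues after one hour (value is in seconds).
CELERY_AMQP_TASK_RESULT_EXPIRES = 3600
```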

From what I've read in the changelog, this is no longer the default backend in versions >=2.3.0 because users were getting bitten by this behavior. I'd suggest changing the results backend if this isn't the functionality you need.
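For example, a sketch of two alternatives, again using old-style CELERY_* setting names; the Redis URL is an illustrative assumption and assumes a Celery version that accepts URL-style backend configuration:

```python
# celeryconfig.py -- sketch only

# Option 1: switch to a backend that doesn't create one queue per result.
CELERY_RESULT_BACKEND = "redis://localhost:6379/0"

# Option 2: if you never read task results, don't store them at all.
CELERY_IGNORE_RESULT = True
```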

Philip Southam answered Oct 27 '22 01:10