I'm currently using RabbitMQ (3.6.2-1) on Ubuntu (16.04) in production. Producers publish messages and consumers consume them, and everything works correctly, but sometimes RabbitMQ doesn't release memory: it hits the max memory watermark and producers can no longer publish messages, even into empty queues, so I have to restart the service.
Is this a bug or something else?
Update:
It turned out to be the management plugin, so you can solve this issue with one of these solutions:
1. Update your RabbitMQ version (3.6.15 is stable).
2. Restart the statistics database periodically (e.g. hourly from crontab): https://www.rabbitmq.com/management.html#stats-db
3. Set rates_mode to none in your rabbitmq.config file (not a good idea, because then you cannot see message rates); a config sketch follows below.
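For option 3, a minimal rabbitmq.config sketch (the classic Erlang-terms format used by 3.6.x; the file usually lives at /etc/rabbitmq/rabbitmq.config on Ubuntu, and the node must be restarted for the change to take effect):

[
  {rabbitmq_management, [
    {rates_mode, none}
  ]}
].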
Nodes hosting RabbitMQ should have at least 256 MiB of memory available at all times. Deployments that use quorum queues, Shovel and Federation may need more.
Even if a message has been persisted to disk, that doesn't mean it was removed from RAM: RabbitMQ keeps a cache of messages in RAM for fast access when delivering them to consumers.
Queues are single-threaded in RabbitMQ, and one queue can handle up to about 50 thousand messages per second.
The RabbitMQ server detects the total amount of RAM installed on startup. By default, when the RabbitMQ server uses more than 40% of the installed RAM, it raises a memory alarm and blocks all connections that are publishing messages.
If you want to allow RabbitMQ to use more memory, you will want to increase this value. The default memory threshold is set to 40% of installed RAM. Note that this does not prevent the RabbitMQ server from using more than 40%; it is merely the point at which publishers are throttled.
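For example, to let RabbitMQ use up to 60% of installed RAM (0.6 is just an illustrative value), you can change the threshold at runtime (not persisted across restarts):

rabbitmqctl set_vm_memory_high_watermark 0.6

or persistently in rabbitmq.config:

[
  {rabbit, [
    {vm_memory_high_watermark, 0.6}
  ]}
].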
This means that memory used by message bodies is shared among processes in RabbitMQ, and the sharing happens between queues as well: if an exchange routes a message to many queues, the message body is only stored in memory once.
A RabbitMQ node can report its memory usage breakdown. The breakdown is provided as a list of categories (shown below) and the memory footprint of that category. Each category is a sum of runtime-reported memory footprint of every process or table of that kind.
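On 3.6.x you can inspect this breakdown in the memory section of the node status output; newer versions also ship a dedicated diagnostics command:

rabbitmqctl status
rabbitmq-diagnostics memory_breakdown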
Memory Alarms. The RabbitMQ server detects the total amount of RAM installed in the computer on startup and when rabbitmqctl set_vm_memory_high_watermark fraction is executed. By default, when the RabbitMQ server uses above 40% of the available RAM, it raises a memory alarm and blocks all connections that are publishing messages.
You should check the number of messages you have inside your queues.
By default, RabbitMQ keeps messages in memory to be fast.
If you have to handle a lot of messages, you can use lazy queues: https://www.rabbitmq.com/lazy-queues.html With lazy queues you can handle millions of messages without impacting the node's memory too much, for example with the policy shown below.
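You can switch queues to lazy mode with a policy (this pattern matches queue names starting with "lazy-"; adjust it to your own naming):

rabbitmqctl set_policy Lazy "^lazy-" '{"queue-mode":"lazy"}' --apply-to queues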
Or it could be a management plugin memory problem; see http://rabbitmq.com/management.html#stats-db. In your case you can run:
rabbitmqctl eval 'supervisor2:terminate_child(rabbit_mgmt_sup_sup, rabbit_mgmt_sup), rabbit_mgmt_sup_sup:start_child().'
to reset the stats and free the memory.
You could run it periodically, for example from cron:
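For example, an hourly crontab entry (run as a user allowed to execute rabbitmqctl, e.g. root; /usr/sbin/rabbitmqctl is the usual Ubuntu location but may differ on your system):

0 * * * * /usr/sbin/rabbitmqctl eval 'supervisor2:terminate_child(rabbit_mgmt_sup_sup, rabbit_mgmt_sup), rabbit_mgmt_sup_sup:start_child().'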
Note:
There are different ways to reset the stats, depending on your RabbitMQ version; see the management plugin documentation linked above for all the details.