In ActiveMQ, is it possible to limit the memory required for an open consumer transaction?

Tags: java, activemq

I am currently using ActiveMQ 5.7.0 and KahaDB, but am open to upgrading if necessary. I use an embedded broker and have experience creating plugins and controlling the broker configuration programmatically.

In my application I create a consumer on a transacted session. The consumer transfers data to another service (not ActiveMQ). This service allows me to commit work, but commits can be expensive. I have discovered that messages consumed from the session but not yet committed are held in the broker's memory, which forces me to commit more often than I would like in order to free broker memory. Ideally I would like to control when commits occur without having to consider ActiveMQ memory utilization.

My current algorithm is (sketched in code after this list):

  • receive message
  • translate msg and clear the body to save consumer-side space (msg.clearBody())
  • send msg to service
  • periodically check queue memory utilization
  • if broker memory critical, commit
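
A minimal sketch of that loop, assuming an embedded BrokerService and an already-started Connection. translate(...), sendToService(...), and COMMIT_THRESHOLD_PERCENT are illustrative placeholders, not ActiveMQ APIs; the usage check itself uses the real SystemUsage/MemoryUsage accessors:

    import javax.jms.Connection;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import org.apache.activemq.broker.BrokerService;

    public class TransactedTransfer {
        private static final int COMMIT_THRESHOLD_PERCENT = 70; // assumed threshold

        public void transfer(BrokerService broker, Connection connection, Queue queue) throws Exception {
            Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
            MessageConsumer consumer = session.createConsumer(queue);
            while (true) {
                Message msg = consumer.receive(1000);
                if (msg == null) continue;
                Object payload = translate(msg); // translate the body (placeholder)
                msg.clearBody();                 // free consumer-side space
                sendToService(payload);          // forward to the external service
                // Commit only when the embedded broker's memory usage is critical.
                int memPct = broker.getSystemUsage().getMemoryUsage().getPercentUsage();
                if (memPct >= COMMIT_THRESHOLD_PERCENT) {
                    session.commit();
                }
            }
        }

        private Object translate(Message msg) { return msg; } // placeholder
        private void sendToService(Object payload) { }        // placeholder
    }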

What I'd like to be able to do is (only the usage check changes; see the fragment after this list):

  • receive message
  • translate msg and clear the body to save consumer-side space (msg.clearBody())
  • send msg to service
  • periodically check queue disk utilization
  • if broker disk utilization critical, commit
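
The only change from the sketch above would be the gating condition: check the persistent store rather than broker memory. A hedged fragment, reusing the same placeholder threshold (StoreUsage is a real SystemUsage component):

    // Check store (disk) utilization instead of broker memory.
    int storePct = broker.getSystemUsage().getStoreUsage().getPercentUsage();
    if (storePct >= COMMIT_THRESHOLD_PERCENT) {
        session.commit();
    }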

I have looked at the file-based cursor and it seems like it could do what I need, but when I tried it, it did not have the desired effect. A similar question has been asked before on the ActiveMQ users' discussion group.

Update
A clarification: our issue is not with the number of messages in the open transaction but with their size. Our application frequently deals with large messages (>50 MB), and apart from this issue ActiveMQ handles messages of this size quite well. What we are looking for is a way to trigger something like msg.clearBody() on the broker when memory resources are exhausted; if the message contents were needed again, the broker could reload them from the disk-backed store. We are open to developing a plugin or extension to achieve this; a skeleton of the plugin wiring we have in mind follows.
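
This is only the standard BrokerPlugin/BrokerFilter wiring, not a working solution: the stock 5.x broker does not document an interception point for dropping and reloading in-memory message bodies, and BodyOffloadPlugin is a hypothetical name:

    import org.apache.activemq.broker.Broker;
    import org.apache.activemq.broker.BrokerFilter;
    import org.apache.activemq.broker.BrokerPlugin;

    public class BodyOffloadPlugin implements BrokerPlugin {
        @Override
        public Broker installPlugin(Broker next) {
            return new BrokerFilter(next) {
                // A real implementation would need a hook here to drop
                // in-memory bodies and reload them from the KahaDB store on
                // demand; no such hook is exposed by the stock broker.
            };
        }
    }

    // Installed on the embedded broker:
    // brokerService.setPlugins(new BrokerPlugin[] { new BodyOffloadPlugin() });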

asked Apr 10 '14 by Martin Serrano


1 Answer

As I stated on the ActiveMQ message boards (http://activemq.2283324.n4.nabble.com/Transactions-and-memory-consumption-td4224862.html):

There is no way in ActiveMQ to clear those messages from memory while the transaction is in progress; the transaction's data structure holds references to the messages.

Once in memory for a Queue, a message is only made available to the JVM GC by one of the following:

  • Acknowledgement of the message
  • Expiration of the message
  • Completion of a transaction consuming the message
  • Administrative tools used to purge the Queue

I'm very curious about the use case, as the broker can hold a lot of messages in memory given a good-sized JVM heap. What performance numbers are you seeing, and how much improvement do you expect from larger transactions? It should be possible to gauge the overall improvement by testing with periodic Session.commit() calls, or, for an XA transaction, by using non-transacted Queue consumption, as in the sketch below.
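
For example, one simple way to run that test (a sketch; N and process(...) are placeholders, and you would compare throughput across values of N):

    // Commit every N messages and measure throughput for each N.
    int count = 0;
    Message msg;
    while ((msg = consumer.receive(1000)) != null) {
        process(msg);             // placeholder for the real work
        if (++count % N == 0) {
            session.commit();     // periodic commit under test
        }
    }
    session.commit();             // commit any trailing partial batch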

If there is a long delay on committing downstream, a strategy that improves throughput via parallel processing may work (see the sketch after this list):

  • Fill a transaction
  • Start commit
  • At the same time, continue consuming incoming messages into the next transaction
  • Fill second transaction
  • Wait for first transaction commit to complete
  • If the first transaction failed, roll back the second
  • If the first transaction succeeded, commit the second and start the next iteration
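
A hedged sketch of that pipeline, assuming two transacted sessions on one already-started Connection; BATCH_SIZE and sendToService(...) are illustrative placeholders:

    import javax.jms.Connection;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class PipelinedCommitter {
        private static final int BATCH_SIZE = 100; // assumed batch size
        private final ExecutorService commitPool = Executors.newSingleThreadExecutor();

        public void run(Connection connection, Queue queue) throws Exception {
            Session a = connection.createSession(true, Session.SESSION_TRANSACTED);
            Session b = connection.createSession(true, Session.SESSION_TRANSACTED);
            MessageConsumer ca = a.createConsumer(queue);
            MessageConsumer cb = b.createConsumer(queue);

            Session filling = a;
            MessageConsumer consumer = ca;
            Future<?> pendingCommit = null;

            while (true) {
                // Fill the current transaction while the previous commit,
                // if any, is still running on the commit thread.
                for (int i = 0; i < BATCH_SIZE; i++) {
                    Message msg = consumer.receive(1000);
                    if (msg == null) break;
                    sendToService(msg); // downstream transfer (placeholder)
                }
                if (pendingCommit != null) {
                    try {
                        pendingCommit.get(); // wait for the previous commit
                    } catch (ExecutionException e) {
                        Session failed = (filling == a) ? b : a;
                        failed.rollback();   // the batch whose commit failed
                        filling.rollback();  // per the strategy, roll back the second too
                        pendingCommit = null;
                        continue;
                    }
                }
                // Hand the filled session to the commit thread; only one
                // thread uses a given session at a time.
                final Session committing = filling;
                pendingCommit = commitPool.submit(() -> { committing.commit(); return null; });
                // Swap so the next batch fills while this commit is in flight.
                filling = (filling == a) ? b : a;
                consumer = (consumer == ca) ? cb : ca;
            }
        }

        private void sendToService(Message msg) { /* transfer to the external service */ }
    }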

Of course, in the long run, if the downstream system cannot keep up with the rate of incoming messages, it will be impossible to prevent slow-consumption problems from overloading the broker.

answered Oct 16 '22 by ash