
Is it a good practice to use JMS Temporary Queue for synchronous use?

If we use the JMS request/reply mechanism with a "Temporary Queue", will that code be scalable?

As of now, we don't know whether we will be supporting 100 requests per second or thousands of requests per second.

The code below is what I am thinking of implementing. It makes use of JMS in a 'Synchronous' fashion. The key part is where the 'Consumer' gets created to point to a 'Temporary Queue' that was created for this session. I just can't figure out whether using such Temporary Queues is a scalable design.

  // Requests go to a well-known queue
  destination = session.createQueue("queue:///Q1");
  producer = session.createProducer(destination);

  // Replies come back on a temporary queue created just for this session
  tempDestination = session.createTemporaryQueue();
  consumer = session.createConsumer(tempDestination);

  long uniqueNumber = System.currentTimeMillis() % 1000;
  TextMessage message = session
      .createTextMessage("SimpleRequestor: Your lucky number today is " + uniqueNumber);

  // Set the JMSReplyTo
  message.setJMSReplyTo(tempDestination);

  // Start the connection
  connection.start();

  // And, send the request
  producer.send(message);
  System.out.println("Sent message:\n" + message);

  // Now, receive the reply
  Message receivedMessage = consumer.receive(15000); // wait up to 15 seconds (15,000 ms)
  System.out.println("\nReceived message:\n" + receivedMessage);

Update:

I came across another pattern; see this blog. The idea is to use 'regular' Queues for both Send and Receive. However, for 'Synchronous' calls, in order to get the desired Response (i.e. one matching the request), you create a Consumer that listens on the Receive queue using a 'Selector'.

Steps:

    // 1. Create the Send and Receive queues.

    // 2. Create a message with a specific correlation ID.
    final String correlationId = UUID.randomUUID().toString();
    final TextMessage textMessage = session.createTextMessage( msg );
    textMessage.setJMSCorrelationID( correlationId );

    // 3. Start a consumer that receives using a 'Selector'.
    consumer = session.createConsumer( replyQueue, "JMSCorrelationID = '" + correlationId + "'" );
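
Putting those steps together on the request side looks roughly like this (a sketch only; the queue names and the 'msg' payload are assumptions, not taken from the blog):

  // Fixed-queue + selector pattern, request side (assumes javax.jms.* and
  // java.util.UUID imports, plus an existing 'session' and 'connection').
  Destination requestQueue = session.createQueue("REQUEST.QUEUE");
  Destination replyQueue = session.createQueue("REPLY.QUEUE");
  MessageProducer producer = session.createProducer(requestQueue);

  final String correlationId = UUID.randomUUID().toString();
  final TextMessage textMessage = session.createTextMessage(msg);
  textMessage.setJMSCorrelationID(correlationId);
  textMessage.setJMSReplyTo(replyQueue);

  // Create the consumer *before* sending, with a selector that only matches
  // the reply carrying this request's correlation ID.
  MessageConsumer consumer = session.createConsumer(
      replyQueue, "JMSCorrelationID = '" + correlationId + "'");

  connection.start();
  producer.send(textMessage);

  Message reply = consumer.receive(15000); // wait up to 15 seconds
  consumer.close();                        // release the selector-based consumer when done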

So the difference in this pattern is that we don't create a new temp Queue for each new request. Instead, all responses come to a single queue, but each request-thread uses a 'selector' to make sure it receives only the response it cares about.

I think the downside here is that you have to use a 'selector'. I don't know yet whether that is less preferred or more preferred than the earlier-mentioned pattern. Thoughts?

asked May 28 '12 by rk2010

People also ask

How should I implement request response with JMS?

The best way to implement request-response over JMS is to create a temporary queue and consumer per client on startup, set the JMSReplyTo property on each message to the temporary queue, and then use a correlation ID on each message to correlate request messages to response messages.
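
As a rough illustration of that combination (temporary reply queue plus correlation ID), under the assumption that a 'session', 'connection', and a request queue named "REQUEST.QUEUE" already exist:

  // Sketch: temporary reply queue per client, correlation ID per message
  // (assumes javax.jms.* and java.util.UUID imports).
  Destination requestQueue = session.createQueue("REQUEST.QUEUE");
  MessageProducer producer = session.createProducer(requestQueue);

  TemporaryQueue replyTo = session.createTemporaryQueue();
  MessageConsumer replyConsumer = session.createConsumer(replyTo);

  TextMessage request = session.createTextMessage("payload");
  request.setJMSReplyTo(replyTo);
  request.setJMSCorrelationID(UUID.randomUUID().toString());

  connection.start();
  producer.send(request);

  // The responder is expected to copy the correlation ID onto its reply so the
  // client can match replies when several requests are in flight.
  Message reply = replyConsumer.receive(15000);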

How to rollback JMS message?

Commit or roll back the JMS transacted session. The rollback() method cancels any messages sent during the current transaction and returns any messages received to the messaging system. If either the commit() or rollback() method is issued outside of a JMS transacted session, an IllegalStateException is thrown.
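
A minimal sketch of that in code, assuming an existing JMS 'connection' (the queue name is made up):

  // Transacted session: sends and receives only take effect on commit().
  Session txSession = connection.createSession(true, Session.SESSION_TRANSACTED);
  MessageProducer txProducer = txSession.createProducer(txSession.createQueue("SOME.QUEUE"));
  try {
      txProducer.send(txSession.createTextMessage("payload"));
      txSession.commit();   // makes the send visible to consumers
  } catch (JMSException e) {
      txSession.rollback(); // cancels sends, returns any received messages to the broker
  }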

What are JMS sessions?

A session is a single-threaded context for producing and consuming messages. You use sessions to create the following: Message producers. Message consumers.


3 Answers

Regarding the update in your post - selectors are very efficient when performed on message headers, as you are doing with the Correlation ID. Spring Integration also does this internally to implement its JMS outbound gateway.

answered Oct 18 '22 by Biju Kunjummen


Interestingly, the scalability of this may actually be the opposite of what the other responses have described.

WebSphere MQ saves and reuses dynamic queue objects where possible. So, although use of a dynamic queue is not free, it does scale well because as queues are freed up, all that WMQ needs to do is pass the handle to the next thread that requests a new queue instance. In a busy QMgr, the number of dynamic queues will remain relatively static while the handles get passed from thread to thread. Strictly speaking it isn't quite as fast as reusing a single queue, but it isn't bad.

On the other hand, even though indexing on CORRELID is fast, performance is inversely proportional to the number of messages in the index. It also makes a difference if the queue depth begins to build. When the app does a GET with WAIT on an empty queue, there is no delay. But on a deep queue, the QMgr has to search the index of existing messages to determine that the reply message isn't among them. In your example, that's the difference between searching an empty index versus a large index 1,000s of times per second.

The result is that 1000 dynamic queues with one message each may actually be faster than a single queue with 1000 threads getting by CORRELID, depending on the characteristics of the app and of the load. I would recommend testing this at scale before committing to a particular design.

answered Oct 18 '22 by T.Rob


Using a selector on the correlation ID on a shared queue will scale very well with multiple consumers.

However, 1000 requests/s is a lot. You may want to divide the load between different instances if performance turns out to be a problem.

You might want to elaborate on the number of requests vs. the number of clients. If the number of clients is < 10 and will stay rather static, and the request volume is very high, the most resilient and fast solution might be to have a static reply queue for each client.
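
A sketch of that variant, with a made-up per-client queue naming scheme (again assuming an existing 'session' and 'connection'):

  // Static reply queue per client (queue names are assumptions).
  String clientId = "CLIENT01";
  Destination requestQueue = session.createQueue("REQUEST.QUEUE");
  Destination replyQueue = session.createQueue("REPLY." + clientId);

  MessageProducer producer = session.createProducer(requestQueue);
  MessageConsumer replyConsumer = session.createConsumer(replyQueue);

  TextMessage request = session.createTextMessage("payload");
  request.setJMSReplyTo(replyQueue);
  request.setJMSCorrelationID(UUID.randomUUID().toString());

  connection.start();
  producer.send(request);

  // If a client has several requests in flight, match replies by correlation ID.
  Message reply = replyConsumer.receive(15000);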

answered Oct 18 '22 by Petter Nordlander