We're currently writing an application for which IT has already purchased hardware. Their approach was to buy big hardware to deploy on, and to add more processing capacity later by adding additional servers running identical software. To accommodate this design, we are using Terracotta, which lets multiple JVMs run as though they were one large JVM. Regardless of whether or not this is a wise way to go (of which I'm still not convinced), this is the situation I'm dealing with.
Anyway, we have a portion of the application that uses a standard producer/consumer queue. With Terracotta, we're able to create a single queue that works across multiple JVMs. This is pretty slick and it works well.
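For context, this is roughly the shape of what we have today: a plain `java.util.concurrent` producer/consumer queue (the sketch below is a local, single-JVM illustration; with Terracotta DSO the queue field would typically be declared as a shared root in `tc-config.xml` so every JVM sees the same instance, and the exact config is deployment-specific so it isn't shown here):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class WorkQueueExample {
    // In a Terracotta setup this field would be the clustered root object.
    private static final BlockingQueue<String> workQueue = new LinkedBlockingQueue<String>();

    public static void main(String[] args) throws InterruptedException {
        // Producer: could be running in one JVM.
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    workQueue.put("task-" + i); // blocks only if the queue is bounded and full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        // Consumer: could just as easily be running in another JVM sharing the same root.
        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    String task = workQueue.take(); // blocks until work is available
                    System.out.println("processing " + task);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        consumer.setDaemon(true);
        producer.start();
        consumer.start();
        producer.join();
        Thread.sleep(500); // give the consumer a moment to drain the queue before exiting
    }
}
```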
But now we are finding additional opportunities to run asynchronous processes. To make all of our queuing logic more consistent, we're considering using JMS to abstract out the common logic. Since we're not going to use JMS as a remote queue (at least for the foreseeable future), I'm wondering if JMS would just add unneeded complexity.
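To make the comparison concrete, the JMS version of the same exchange would look something like the sketch below. It uses the standard `javax.jms` point-to-point API; ActiveMQ's in-VM transport is used purely as an illustrative broker and the queue name `work.queue` is made up.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;

public class JmsQueueExample {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory =
                new ActiveMQConnectionFactory("vm://localhost?broker.persistent=false");
        Connection connection = factory.createConnection();
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("work.queue");

        // Producer side
        MessageProducer producer = session.createProducer(queue);
        producer.send(session.createTextMessage("task-1"));

        // Consumer side: each message is delivered to exactly one consumer
        MessageConsumer consumer = session.createConsumer(queue);
        Message received = consumer.receive(1000);
        if (received instanceof TextMessage) {
            System.out.println("processing " + ((TextMessage) received).getText());
        }

        connection.close();
    }
}
```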
Any suggestions or thoughts? Should we just continue to build queues as concurrent structures, or treat them as separate, potentially remote objects?
A message queue is essentially just a Queue data structure with some fancy options. If your project is like most, you're not using any of the JMS features that distinguish it from any old Queue implementation, especially since Terracotta is already handling persistence and distribution.
So JMS is probably just adding complexity to your application, which is something JMS is quite good at. Like any unneeded source of complexity, get rid of it. If you ever decide you need JMS for one reason or another, adopt it then.
A colleague of mine has been using Mule, which allows you to define queues that can be either intra- or inter-JVM.
I agree with krosenwald: it's not clear what JMS would be adding in your case, unless there is a general plan to move away from Terracotta (or at least to keep that option open).
I haven't used Terracotta, but we are using a distributed caching product very similar to it, and our architecture sounds similar to yours: both producers and consumers sit on the same cache and share data through the caching subsystem.
While I agree in principle that adding JMS now might be an unnecessary complexity for you, we have found that, while slick, a distributed cache is not the best implementation of a messaging mechanism. The same semantics can be created, but some small details cause issues (such as load-balancing across consumers, which may require additional synchronisation with a distributed cache but works naturally with JMS).
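For example, consumers pulling work out of a shared cache have to coordinate so that no two of them process the same entry. The toy sketch below uses a local `ConcurrentHashMap` as a stand-in for the distributed cache; in a real cluster that map (and any lock protecting it) would have to be cluster-wide, which is exactly the extra synchronisation a JMS broker would otherwise handle for you.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CacheConsumerExample {
    // Stand-in for the distributed cache shared by all consumers.
    private final Map<String, String> sharedCache = new ConcurrentHashMap<>();

    public void submit(String id, String payload) {
        sharedCache.put(id, payload);
    }

    // Each consumer must atomically claim an entry so that only one of them
    // processes it; remove(key, value) is the atomic "claim" step here.
    public String tryClaim(String id) {
        String payload = sharedCache.get(id);
        if (payload != null && sharedCache.remove(id, payload)) {
            return payload; // this consumer won the claim
        }
        return null; // another consumer got it first (or it was never there)
    }
}
```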
If you think your future use cases will require more pub-sub semantics, persistence, and so on, you might want to start thinking about JMS. Also consider separation of concerns: you are using Terracotta to distribute data, which is what it is designed to do. Will you also use it to distribute control instructions, which is done better with messaging semantics?
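As a rough illustration of that last point, distributing a control instruction maps naturally onto a persistent pub-sub topic. The sketch below again uses the standard `javax.jms` API with an in-VM ActiveMQ broker for illustration; the broker URL, client id, topic name, and subscription name are all made up.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;
import javax.jms.TopicSubscriber;

import org.apache.activemq.ActiveMQConnectionFactory;

public class ControlTopicExample {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory =
                new ActiveMQConnectionFactory("vm://localhost?broker.persistent=false");
        Connection connection = factory.createConnection();
        connection.setClientID("node-1"); // required before creating a durable subscription
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic controlTopic = session.createTopic("control.instructions");

        // Durable subscriber: this node still receives instructions published
        // while it was down, once it reconnects.
        TopicSubscriber subscriber =
                session.createDurableSubscriber(controlTopic, "node-1-control");

        // Publisher side: mark the message persistent so the broker stores it.
        MessageProducer publisher = session.createProducer(controlTopic);
        publisher.setDeliveryMode(DeliveryMode.PERSISTENT);
        publisher.send(session.createTextMessage("pause-processing"));

        TextMessage received = (TextMessage) subscriber.receive(1000);
        if (received != null) {
            System.out.println("control instruction: " + received.getText());
        }

        connection.close();
    }
}
```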