I wonder if I can do request-reply with this:
The 1st application then receives the response, posted by the second application, on another queue.
Is this a good way to proceed, or can you think of a better solution?
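Roughly, this is what I have in mind (just a minimal sketch; the queue names, the String payload and the Hazelcast 3.x imports are placeholders/assumptions):

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IQueue;
import java.util.concurrent.TimeUnit;

public class AppOne {
    public static void main(String[] args) throws InterruptedException {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IQueue<String> requests = hz.getQueue("requests"); // consumed by the 2nd application
        IQueue<String> replies  = hz.getQueue("replies");  // filled by the 2nd application

        requests.put("do-something");                      // send the request
        String reply = replies.poll(10, TimeUnit.SECONDS); // wait for the response
        System.out.println("got reply: " + reply);
    }
}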
Thanks!
Over the last couple of days I also worked on an "SOA-like" solution using Hazelcast queues to communicate between different processes on different machines.
My main goals were to have
"one to one-of-many" communication with a guaranteed reply from the chosen one
"one to one" one-way communication
"one to one" communication with a reply within a certain time
To make a long story short, I dropped this approach today for the following reasons:
lots of complicated code with executor services, callables, runnables, InterruptedExceptions, shutdown handling, Hazelcast transactions, etc.
dangling messages in the "one to one" case when the receiver has a shorter lifetime than the sender
losing messages if I kill certain cluster member(s) at the right time
all cluster members must be able to deserialize the message, because it could be stored anywhere; therefore the messages can't be "specific" to certain clients and services.
I switched over to a much simpler approach:
all "services" register themselves in a MultiMap ("service registry") using the hazelcast cluster member UUID as key. Each entry contains some meta information like service identifier, load factor, starttime, host, pid, etc
clients pick a UUID of one of the entries in that MultiMap and use a DistributedTask (distributed executor service) for the choosen specific cluster member to invoke the service and optionally get a reply (in time)
only the service client and the service must have the specific DistributedTask implementation in their classpath, all other cluster members are not bothered
clients can easily figure out dead entries in the service registry themselves: if they can't see a cluster member with the specific UUID (hazelcastInstance.getCluster().getMembers()), the service died probably unexpected. Clients can then pick "alive" entries, entries which fewer load factor, do retries in case of idempotent services, etc
Programming with the second approach is very easy and powerful (e.g. timeouts or cancellation of tasks), with much less code to maintain.
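To illustrate the second approach, here is a rough sketch of the client side (the task class, the registry metadata format and the Hazelcast 3.x IExecutorService API are assumptions on my part; the older DistributedTask API works similarly):

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IExecutorService;
import com.hazelcast.core.Member;
import com.hazelcast.core.MultiMap;
import java.io.Serializable;
import java.util.concurrent.Callable;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Hypothetical service task: only the client and the service member need this class.
class PriceLookupTask implements Callable<String>, Serializable {
    private final String product;
    PriceLookupTask(String product) { this.product = product; }
    @Override
    public String call() {
        return "price-for-" + product; // runs on the chosen service member
    }
}

public class ServiceClient {
    public static void main(String[] args) throws Exception {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // Service registry: member UUID -> metadata (service id, load factor, host, pid, ...).
        MultiMap<String, String> registry = hz.getMultiMap("service-registry");
        // A service member would register itself at startup, e.g.:
        // registry.put(hz.getCluster().getLocalMember().getUuid().toString(), "serviceId=pricing;load=0.1");

        // Pick one registered UUID (real code would filter by service id and load factor).
        String uuid = registry.keySet().iterator().next();

        // Resolve the UUID to a live cluster member; if it is gone, the service probably died.
        Member target = null;
        for (Member m : hz.getCluster().getMembers()) {
            if (m.getUuid().toString().equals(uuid)) {
                target = m;
                break;
            }
        }
        if (target == null) {
            throw new IllegalStateException("service member " + uuid + " is gone");
        }

        // Invoke the service on that specific member and wait for the reply with a timeout.
        IExecutorService exec = hz.getExecutorService("service-exec");
        Future<String> reply = exec.submitToMember(new PriceLookupTask("book"), target);
        System.out.println(reply.get(5, TimeUnit.SECONDS));
    }
}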
Hope this helps!
In the past we built an SOA system that uses Hazelcast queues as a bus. Here are the main points.
a. Each service has an incoming queue; the service name is simply the name of the queue. You can have as many service providers as you wish, and you can scale up and down. All you need is for these service providers to poll this queue and process the arriving requests.
b. Since the system is fully asynchronous, there is a call id on both the request and the response to correlate them.
c. Each client sends a request into the queue of the service that it wants to call. The request carries all the parameters for the service, the name of the queue to send the response to, and a call id. The queue name can simply be the address of the client, so each client has its own unique queue.
d. Upon receiving the request, a service provider processes it and sends the response to the answer queue.
e. Each client also continuously polls its input queue to receive the answers to the requests it sent (a sketch follows below).
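A minimal sketch of the client side of this design, with hypothetical Request/Response envelope classes and Hazelcast 3.x imports (the queue and field names are illustrative only):

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import java.io.Serializable;
import java.util.UUID;
import java.util.concurrent.TimeUnit;

// Hypothetical request/response envelopes.
class Request implements Serializable {
    final String callId;      // correlates request and response (point b)
    final String replyQueue;  // the client's own answer queue (point c)
    final String payload;     // the parameters for the service
    Request(String callId, String replyQueue, String payload) {
        this.callId = callId; this.replyQueue = replyQueue; this.payload = payload;
    }
}

class Response implements Serializable {
    final String callId;
    final String result;
    Response(String callId, String result) { this.callId = callId; this.result = result; }
}

public class BusClient {
    public static void main(String[] args) throws InterruptedException {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // The answer queue is named after the client, the call id after the call.
        String clientQueue = "client-" + UUID.randomUUID();
        Request req = new Request(UUID.randomUUID().toString(), clientQueue, "lookupPrice:book");

        // Send the request to the service's incoming queue (queue name == service name, point a).
        hz.<Request>getQueue("pricingService").put(req);

        // A service provider polls "pricingService", processes the request and puts
        // new Response(req.callId, result) onto req.replyQueue (point d).

        // Poll the own answer queue and match the response by call id (points b and e).
        Response resp = hz.<Response>getQueue(clientQueue).poll(5, TimeUnit.SECONDS);
        if (resp != null && resp.callId.equals(req.callId)) {
            System.out.println("result: " + resp.result);
        }
    }
}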
The major drawback of this design is that the queues are not as scalable as maps, so it is not very scalable overall. However, it can still process 5K requests per second.