Some background, before getting to the real question:
I am working on a back-end application that consists of several different modules. Each module is currently a command-line Java application, run "on demand" (more details later).
Each module is a "step" in a bigger process that you can think of as a data flow. The first step collects data files from an external source and loads them into some SQL database tables; the following steps, based on different conditions and events (timing, presence of data in the DB, messages and elaborations done through a web service/web interface), take data from one or more DB tables, process it, and write it to different tables. The steps run on three different servers and read data from three different DBs, but write to only a single DB. The purpose is to aggregate data and compute metrics and statistics.
Currently, each module is executed periodically via a cronjob (every few minutes/hours for the first modules, every few days for the last ones in the chain, which need to aggregate more data and therefore wait longer for it to be available). A module (currently, a Java console application) is run, checks the database for new, unprocessed information in a given datetime window, and does its job.
The problem: it works, but... I need to expand and maintain it, and this approach is starting to show its limits.
To solve the second problem, I am going to introduce "partial" (or "incremental") computations: as soon as I have a set of relevant information, I process it. Then, when some other linked information arrives, I compute the difference and update the data accordingly; but then I also need to notify the other (dependent) modules.
- 1) What is the best way to do this?
- 2) Related: what is the best way to "notify" the other modules (Java executables, in my case) that relevant data is available?
I can see three ways:
Edit: I am convinced that an approach based on queues is the way to go. I added the "table + polling" option for completeness, but now I understand it is only a distraction (obviously, everyone is going to answer "yes, use queues, polling is evil" - and rightly so!). So let me rephrase the question: what are the advantages/disadvantages of using an MQ over a key-value store with pub/sub, like Redis?
Edit: in particular, in my case, it means: is there a mechanism in some MQ and/or key-value store that lets me publish messages with a "time"? Like "deliver it in 1 day"? With persistence and an "at-least-once" delivery guarantee, obviously.
Edit: this is the bit I care about most: I would like the queue itself to activate the "modules" based on messages in the queue, similar to MSMQ Activation. Is it a good idea? Is there anything in the Java world that does it? Should I implement it myself (over an MQ or over Redis), or should I run each module as a daemon (even if some computations typically happen in bursts: two hours of processing followed by two days of idling)?
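To make concrete what I mean by "activation": a dispatcher blocks on the queue and starts the right module only when a message arrives, instead of each module polling on a cron schedule. An in-process sketch with plain java.util.concurrent (all names here are hypothetical):

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class Dispatcher {
    // Hypothetical message: which module to run, plus a data-window payload.
    public record Task(String module, String window) {}

    // Blocks until a task arrives, runs the matching module, returns its name.
    public static String dispatchOne(BlockingQueue<Task> queue,
                                     Map<String, Runnable> modules)
            throws InterruptedException {
        Task task = queue.take();          // sleeps here: no polling, no cron
        modules.get(task.module()).run();
        return task.module();
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Task> queue = new LinkedBlockingQueue<>();
        Map<String, Runnable> modules = Map.of(
                "aggregator", () -> System.out.println("aggregating window..."),
                "metrics",    () -> System.out.println("computing metrics..."));
        queue.put(new Task("aggregator", "2013-05-01"));
        dispatchOne(queue, modules);       // runs only the "aggregator" module
    }
}
```

With a real broker, the take() is replaced by the broker pushing to a consumer, but the shape is the same: a module's code runs only when there is actually something for it to do.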
NOTE: I cannot use heavy containers/EJB (No Glassfish or similar)
Edit: Camel also seems a little too heavy for me. I'm looking for something really light here, both in terms of resources and development complexity.
Your task description partially sounds like a job for a system based on Enterprise Integration Patterns, such as Apache Camel.
A delayed message can be expressed with a constant delay (in milliseconds):
from("seda:b").delay(1000).to("mock:result");
or with a variable, for example a message header value:
from("seda:a").delay().header("MyDelay").to("mock:result");
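Camel's delay() essentially just schedules the message for later delivery. If the framework is too heavy for you, the same effect can be approximated in plain Java with a ScheduledExecutorService; a minimal sketch (note that, unlike a broker, this delay is not persistent across restarts - for "deliver in 1 day" with persistence you would need broker-side support, such as ActiveMQ's scheduled message delivery):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class DelayedHandoff {
    // Schedules a message for a downstream consumer after delayMillis,
    // mimicking Camel's delay() EIP without the framework.
    public static ScheduledFuture<?> deliverLater(ScheduledExecutorService scheduler,
                                                  BlockingQueue<String> downstream,
                                                  String message, long delayMillis) {
        return scheduler.schedule(() -> { downstream.offer(message); },
                                  delayMillis, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws Exception {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        BlockingQueue<String> downstream = new LinkedBlockingQueue<>();
        deliverLater(scheduler, downstream, "batch-ready", 100);
        // Blocks until the scheduled delivery fires, then prints "batch-ready".
        System.out.println(downstream.poll(1, TimeUnit.SECONDS));
        scheduler.shutdown();
    }
}
```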
1> I suggest using a message queue. Choose one depending on your requirements, but for most cases any of them will do; I suggest a broker based on the JMS protocol (ActiveMQ) or AMQP (RabbitMQ). Write a simple wrapper over it, or use the ones provided by Spring: spring-jms or spring-amqp.
2> You can write queue consumers so that your system is notified when a new message arrives. For example, with Spring AMQP (for RabbitMQ) you can implement the MessageListener interface:
import org.springframework.amqp.core.Message;
import org.springframework.amqp.core.MessageListener;

public class MyListener implements MessageListener {
    @Override
    public void onMessage(Message message) {
        /* Handle the message */
    }
}
3> If you use async consumers, as in 2>, you can get rid of all the polling and cron jobs.
4> It depends on your requirements: if you have millions of events/messages passing through your queue, then running the queue middleware on a centralized server makes sense.
5> If resource consumption is not an issue, then keeping your consumers/subscribers running all the time is the easiest way to go. If the consumers are distributed, you can orchestrate them using a service like ZooKeeper.
6> Scalability: most queuing systems make it easy to distribute messages, so provided your consumers are stateless, you can scale just by adding new consumers and some configuration.
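The scalability point can be illustrated in-process: stateless workers all draining the same queue, so capacity grows just by starting more of them. A sketch with a plain BlockingQueue standing in for the broker (with a real broker the workers would be separate processes or hosts):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ScalableConsumers {
    // Stateless workers draining one shared queue: scaling out is just
    // starting more of them, no coordination between workers needed.
    public static int drain(BlockingQueue<Integer> queue, int workers)
            throws InterruptedException {
        AtomicInteger processed = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        for (int i = 0; i < workers; i++) {
            pool.execute(() -> {
                // Each worker pulls messages until the queue is empty.
                while (queue.poll() != null) {
                    processed.incrementAndGet();   // stateless "work" per message
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return processed.get();
    }

    public static void main(String[] args) throws Exception {
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>();
        for (int i = 0; i < 1000; i++) queue.put(i);
        System.out.println(drain(queue, 4)); // prints 1000: every message handled once
    }
}
```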