Consider the problem:
I have n Tomcat nodes running a web application that serves some stateless content. For, say, the first 1000 requests the application must respond with 'a', for the next 10000 with 'b', and for the rest with 'c'.
I considered messaging first: the application gets the total served count from some storage -> if it's smaller than the threshold, it serves content 'a' -> once the content is served, the application sends a message -> the message gets consumed -> the total served count is incremented in the storage -> ... However, in this case there is quite a high probability of overshoot because of the slight (or, at peak load, huge) delay between the content-served event and the counter increment in the storage.
Then I considered setting up memcached-session-manager to store the counters in a sort of shared session. But this seems quite heavyweight for my simple case.
Could someone please suggest whether there is any straightforward way for multiple JVM instances to communicate with each other that is applicable to my case?
If it absolutely has to be correct and you don't want delay, then I think Redis or Hazelcast are your best options. Particularly Redis, as it has atomic counter operations (e.g. INCR). While you can in theory do the same with memcache, Redis is pretty much designed for this exact use case (statistic counters).
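A minimal sketch of the Redis route, assuming the Jedis client; the key name "served-count" and the class name are my own placeholders, not anything from the question:

```java
import redis.clients.jedis.Jedis;

public class ContentSelector {

    // Hypothetical key shared by all Tomcat nodes.
    private static final String COUNTER_KEY = "served-count";

    private final Jedis jedis;

    public ContentSelector(Jedis jedis) {
        this.jedis = jedis;
    }

    // INCR is executed atomically on the Redis server, so each concurrent
    // request (from any JVM) gets a distinct counter value and the tiers
    // cannot overshoot due to replication/messaging delay.
    public String contentForNextRequest() {
        long served = jedis.incr(COUNTER_KEY); // value AFTER the increment
        if (served <= 1_000) {
            return "a";
        } else if (served <= 11_000) { // the next 10,000 requests
            return "b";
        } else {
            return "c";
        }
    }
}
```

In production you would take the Jedis instance from a pool (e.g. JedisPool) rather than sharing one connection, but the counting logic stays the same.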
You could also use an in-memory DB like H2, or set a Postgres table to UNLOGGED (or whatever is applicable for your RDBMS) to keep a not-so-safe, pseudo in-memory table. The annoying thing about an RDBMS is that upserting (aka MERGE) is not supported consistently across all RDBMSs.
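If you go the RDBMS route, the increment can still be made atomic with a single UPDATE. A rough sketch using plain JDBC and PostgreSQL's UPDATE ... RETURNING; the table and column names are made up for illustration, and other databases would need a transaction with SELECT ... FOR UPDATE instead:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class DbCounter {

    // Assumes a single-row counter table created up front, e.g. (PostgreSQL):
    //   CREATE UNLOGGED TABLE served_count (id INT PRIMARY KEY, total BIGINT NOT NULL);
    //   INSERT INTO served_count VALUES (1, 0);

    public long incrementAndGet(Connection conn) throws SQLException {
        // One statement both increments and reads back the new value,
        // so concurrent nodes each observe a distinct count.
        String sql = "UPDATE served_count SET total = total + 1 WHERE id = 1 RETURNING total";
        try (PreparedStatement ps = conn.prepareStatement(sql);
             ResultSet rs = ps.executeQuery()) {
            rs.next();
            return rs.getLong(1);
        }
    }
}
```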