Every n seconds the application requests a remote JSON file that provides live prices for securities in the Trading system. The JSON has a block with the data I need (marketdata) and a block with the current data version (version and seqnum).
Right now I use ActionController::Live (with EventSource on the client side) to push updated data to the browser. All actions are done within one method: it requests the JSON from the remote server, checks the seqnum value, and pushes the data to the browser when it has changed.

So my goal now is to separate pulling & updating the database (ActiveJob) from pushing updated values to the browser (ActionController::Live). To accomplish this I need either to store seqnum & version somewhere shared between the controller and the background job, or to rely on the updated_at fields. So basically I have two questions: should I track changes via stored seqnum & version or via updated_at, and how can the background job signal the ActionController::Live stream that new data is available?
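For reference, here is a rough sketch of the kind of job I have in mind for the pulling side; PriceFetchJob, MarketDataSnapshot, and the feed URL are placeholder names, not my real code:

require "net/http"
require "json"

# Rough sketch: fetch the remote JSON and store the latest marketdata
# together with its version and seqnum (key names assumed from above).
class PriceFetchJob < ActiveJob::Base
  queue_as :default

  FEED_URL = URI("https://example.com/prices.json") # placeholder

  def perform
    payload = JSON.parse(Net::HTTP.get(FEED_URL))
    version = payload["dataversion"]["version"].to_i
    seqnum  = payload["dataversion"]["seqnum"].to_i

    snapshot = MarketDataSnapshot.first_or_initialize
    # Write only when the feed has actually moved forward.
    if snapshot.new_record? || seqnum > snapshot.seqnum
      snapshot.update!(version: version,
                       seqnum:  seqnum,
                       payload: payload["marketdata"].to_json)
    end

    # Re-enqueue itself to poll again in n seconds.
    self.class.set(wait: 10.seconds).perform_later
  end
end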
Considering that you might have, for example, multiple Rails processes running, I believe it becomes quite hard to let ActiveJob talk directly to a Rails controller in some way.
Definitely store seqnum and version. I wouldn't rely on updated_at in any case: it's too easy for it to get updated for unrelated reasons, and you would end up sending stuff to the client without any real reason. seqnum and version, on the other hand, look like very solid fields for telling whether the file has been updated.
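Just to illustrate (the table and column names below are my assumptions, not from the question), that means persisting those two values on the cached record, e.g.:

# Hypothetical migration: keep the feed's version and seqnum next to
# the cached payload so the job and the controller can compare them.
class AddVersionFieldsToMarketDataSnapshots < ActiveRecord::Migration
  def change
    add_column :market_data_snapshots, :version, :integer, null: false, default: 0
    add_column :market_data_snapshots, :seqnum,  :integer, null: false, default: 0
  end
end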
That being said, you want to "signal" ActionController::Live in some way, and I'm afraid polling is your only option here, unless there is a specific moment on the client side when it needs to know whether the file has been updated, in which case you might want to use WebSockets or something similar.
So, something like:

# Poll the cached record and push a new message only when its
# version has moved forward.
cached_request = YourCachedRequest.latest # assuming it returns a single record
updated = true

loop do
  if updated
    updated = false
    response.stream.write cached_request.serialize_in_some_way
  end

  current_version = cached_request.version # use seqnum too if you need it
  cached_request.reload
  updated = true if cached_request.version > current_version

  sleep 20.0
end
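For completeness, here is a sketch of how that loop could sit inside an ActionController::Live action; the controller name, the SSE framing, and the disconnect handling are my additions rather than something from the question:

class PricesController < ApplicationController
  include ActionController::Live

  def stream
    response.headers["Content-Type"] = "text/event-stream"

    cached_request = YourCachedRequest.latest
    updated = true

    loop do
      if updated
        updated = false
        # EventSource expects messages framed as "data: ...\n\n".
        response.stream.write "data: #{cached_request.serialize_in_some_way}\n\n"
      end

      current_version = cached_request.version
      cached_request.reload
      updated = true if cached_request.version > current_version

      sleep 20.0
    end
  rescue IOError, ActionController::Live::ClientDisconnected
    # The browser closed the EventSource connection; just stop streaming.
  ensure
    response.stream.close
  end
end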
If you want an option that doesn't involve polling, I believe WebSockets are your only choice. However, you have a more efficient option:
Create a mini application (EventMachine/Sinatra/something light) that the clients will poll (you can pass through your main application to distribute this across different nodes of this mini application); the point of this app is only to reroute messages from your main application to the polling clients.
Now, you can create an internal API endpoint for your main application that is used only by the delayed job. The delayed job will hit this endpoint only when it notices that the fetched JSON has actually been updated relative to the one currently stored. If that's the case, it will hit your main app's API endpoint, which in turn will send a message (again, probably through an HTTP API endpoint, this time on your mini app) to all your mini app instances, which in turn will push it to your clients.
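A minimal sketch of that notification step inside the job; the endpoint path, parameter names, and helper name are invented for illustration only:

require "net/http"

# Hypothetical helper inside the background job: after fetching the
# remote JSON, notify the main app only when the feed moved forward.
def notify_if_updated(snapshot, new_version, new_seqnum)
  return unless new_seqnum > snapshot.seqnum

  snapshot.update!(version: new_version, seqnum: new_seqnum)

  # Invented internal endpoint on the main application; it would relay
  # the "new data" message on to the mini app instances.
  uri = URI("http://localhost:3000/internal/price_updates")
  Net::HTTP.post_form(uri, "version" => new_version.to_s,
                           "seqnum"  => new_seqnum.to_s)
end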
In this way, you don't overload your main server, only these mini-nodes, which can have localized outages (a big advantage compared to a full system outage).