I've been playing around with different publish/subscribe implementations for Node.js and was wondering which one would be best for a specific application. The requirements involve real-time syncing of objects in multi-channel, multi-user 3D environments.
I started off using socket.io and created a basic array of channels; when a user sends a message, the server loops through the users in that channel and sends the message to each user's client. This worked well, and I had no problems with it.
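Roughly, that first version looked like this (a simplified sketch of what I had; the channel bookkeeping and variable names are placeholders):

var socket = io.listen(server);
var channels = {}; // channel name -> array of subscribed clients

socket.on("connection", function (client) {
  client.on("message", function (data) {
    // naive fan-out: loop over everyone in the channel and relay the message
    var members = channels[data.channel] || [];
    for (var i = 0; i < members.length; i++) {
      members[i].send(data.message);
    }
  });
});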
For object persistence, I added Redis support using node_redis. Then I replaced the client.send loop over the channel array with Redis pub/sub as a layer of abstraction. But I noticed that I needed to create a new Redis client for each user that made a subscription, and I still needed to keep track of each user's socket.io client so I could deliver messages to it on publish. How scalable is that? Are there other (better) implementations or further optimizations I could make? What would you do?
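Concretely, the Redis layer ended up looking something like this (simplified; a node_redis connection in subscriber mode can't issue other commands, which is why each subscribing user gets a dedicated client, and the channel naming here is made up):

socket.on("connection", function (client) {
  // one dedicated Redis connection per user, switched into subscriber mode
  var sub = redis.createClient();
  sub.subscribe("channel:" + client.channelName); // hypothetical channel key
  sub.on("message", function (channel, message) {
    client.send(message); // still have to hold on to the socket.io client here
  });
  client.on("disconnect", function () {
    sub.quit(); // clean up the per-user connection
  });
});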
In any pub/sub model there are two implementation patterns: push and pull. In the pull pattern, the consumer sends a request asking for messages, and the pub/sub server responds with any messages that are available and not previously consumed. In the push pattern, the server delivers each message to its subscribers as soon as it is published, without the consumer having to ask.
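In Redis terms, pull maps naturally onto a list used as a queue, while pub/sub channels are push. A minimal sketch of both with node_redis (the queue and channel names are arbitrary):

var redis = require("redis");

// Pull: the consumer asks for a message; BLPOP blocks until one is available
var consumer = redis.createClient();
consumer.blpop("jobs", 0, function (err, reply) {
  console.log("pulled:", reply[1]); // reply is [listName, message]
});

// Push: the broker delivers as soon as a message is published
var sub = redis.createClient();
sub.subscribe("updates");
sub.on("message", function (channel, message) {
  console.log("pushed:", message);
});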
Redis Pub/Sub implements a messaging system in which senders (called publishers in Redis terminology) send messages and receivers (subscribers) receive them. The link over which the messages are transferred is called a channel, and in Redis a client can subscribe to any number of channels.
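For example, with node_redis (the "news" channel is arbitrary; publishing waits for the subscription to be confirmed so the message isn't missed):

var redis = require("redis");
var publisher = redis.createClient();
var subscriber = redis.createClient();

subscriber.subscribe("news");
subscriber.on("subscribe", function (channel, count) {
  // subscription confirmed; anything published from now on will arrive
  publisher.publish(channel, "hello");
});
subscriber.on("message", function (channel, message) {
  console.log("received on " + channel + ": " + message);
});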
Redis Pub/Sub is designed for speed (low latency), but only with low numbers of subscribers. Subscribers don't poll; while subscribed and connected they receive push notifications from the Redis broker very quickly, in the low milliseconds and even under one millisecond, as confirmed by this benchmark.
Yes, you have to create a new redis client for every socket.io connection that subscribes. That is heavy and not very scalable, but a new redis client connection does not consume much memory, so if your system has no more than about 5000 users it is fine. To scale beyond that you can add slave Redis servers to absorb the publish and subscribe load, and if you are concerned about opening a lot of connections you can raise your OS's ulimit.
You don't need to store the socket.io client with each message you send. Once Redis receives a message on a subscribed channel, the subscriber's message handler fires, and you can forward the message to the particular socket.io client from there:
subscribe.on("message",function(channel,message) {
var msg = { message: [client.sessionId, message] };
buffer.push(msg);
if (buffer.length 15) buffer.shift();
client.send(msg); > });
To subscribe to multiple channels, I suggest you pre-store each user's channel list (you can use MongoDB or Redis for storage):
var store = redis.createClient();
var subscriber = redis.createClient();
// hash fields are plain strings, so ChannelArray is stored as a comma-separated list
store.hgetall(UID, function (e, obj) {
  obj.ChannelArray.split(",").forEach(function (channel) {
    subscriber.subscribe(channel);
  });
});
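On the publish side you don't need a subscriber-mode connection at all; the plain store client (or any ordinary Redis client) can publish when a socket.io message comes in, along these lines:

client.on("message", function (data) {
  // one PUBLISH fans out to every subscriber of the channel,
  // across all server processes
  store.publish(data.channel, data.message);
});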