I am trying to scale a simple socket.io app across multiple processes and/or servers.
Socket.io supports RedisStore but I'm confused as to how to use it.
I'm looking at this example, http://www.ranu.com.ar/post/50418940422/redisstore-and-rooms-with-socket-io
but I don't understand how using RedisStore in that code would be any different from using MemoryStore. Can someone explain it to me?
Also, what is the difference between configuring socket.io to use RedisStore vs. creating your own Redis client and setting/getting your own data?
I'm new to node.js, socket.io and redis so please point out if I missed something obvious.
First, Socket.IO creates a long-polling connection using xhr-polling. Then, once this is established, it upgrades to the best connection method available.
Although Socket.IO indeed uses WebSocket for transport when possible, it adds additional metadata to each packet. That is why a WebSocket client will not be able to successfully connect to a Socket.IO server, and a Socket.IO client will not be able to connect to a plain WebSocket server either.
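To make the metadata point concrete, here is an illustrative sketch (not Socket.IO's actual wire format, which varies by version) of why the two protocols can't interoperate: every frame wraps the payload in framing metadata, so a plain WebSocket peer sees the frame, not the raw payload.

```javascript
// Illustrative sketch only: the real Socket.IO framing differs by version,
// but conceptually each frame carries metadata around the payload.
function frame(type, payload) {
  // e.g. frame(3, "hello") produces "3:::hello"
  return type + ":::" + payload;
}

function parseFrame(raw) {
  var parts = raw.split(":");
  return { type: parseInt(parts[0], 10), data: parts.slice(3).join(":") };
}

var wire = frame(3, "hello");
// A plain WebSocket peer would receive "3:::hello", not "hello",
// which is why the two sides cannot talk to each other directly.
console.log(wire);
console.log(parseFrame(wire).data);
```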
socket.io rooms are a lightweight data structure. They are simply an array of connections that are associated with that room. You can have as many as you want (within normal memory usage limits). There is no heavyweight thing that makes a room expensive in terms of resources.
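As an illustration of how lightweight that is, a room can be modeled as nothing more than a name mapped to a list of connections. This is a hypothetical sketch, not socket.io's actual internals:

```javascript
// Hypothetical model: a room is just a name -> array of socket ids.
var rooms = {};

function join(room, socketId) {
  (rooms[room] = rooms[room] || []).push(socketId);
}

function leave(room, socketId) {
  var members = rooms[room] || [];
  var i = members.indexOf(socketId);
  if (i !== -1) members.splice(i, 1);
}

join("lobby", "socket-1");
join("lobby", "socket-2");
console.log(rooms["lobby"]); // the room is just this array
```

Creating a room is just creating an array; an empty room costs essentially nothing.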
but I don't understand how using RedisStore in that code would be any different from using MemoryStore. Can someone explain it to me?
The difference is that when using the default MemoryStore, any message that you emit in a worker will only be sent to clients connected to that same worker, since there is no IPC between the workers. Using the RedisStore, your message will be published to a Redis server, which all your workers subscribe to. Thus, the message will be picked up and broadcast by all workers, to all connected clients.
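Wiring that up looks roughly like the following with the pre-1.0 socket.io API that the linked example uses. Treat this as a sketch; the require paths and options may differ depending on your socket.io version:

```javascript
var io = require('socket.io').listen(8080);
var RedisStore = require('socket.io/lib/stores/redis');
var redis = require('socket.io/node_modules/redis');

// Each worker gets its own pub, sub, and command client. The store uses
// Redis pub/sub so an emit in one worker reaches clients on every worker.
io.set('store', new RedisStore({
  redisPub: redis.createClient(),
  redisSub: redis.createClient(),
  redisClient: redis.createClient()
}));

io.sockets.on('connection', function (socket) {
  // This broadcast goes through Redis, so all workers deliver it.
  socket.broadcast.emit('user joined');
});
```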
Also, what is the difference between configuring socket.io to use RedisStore vs. creating your own Redis client and setting/getting your own data?
I'm not intimately familiar with RedisStore, so I'm not sure about all the differences. But doing it yourself would be a perfectly valid approach. In that case, you could publish all messages to a Redis server and listen for them in your socket handler. It would probably be more work for you, but you would also have more control over how you want to set it up. I've done something similar myself:
```javascript
var redis = require("redis");

// Publishing a message somewhere
var pub = redis.createClient();
pub.publish("messages", JSON.stringify({type: "foo", content: "bar"}));

// Socket handler: each connection gets its own subscriber client,
// since a Redis client in subscriber mode can't issue other commands.
io.sockets.on("connection", function(socket) {
  var sub = redis.createClient();
  sub.subscribe("messages");
  sub.on("message", function(channel, message) {
    socket.send(message);
  });
  socket.on("disconnect", function() {
    sub.unsubscribe("messages");
    sub.quit();
  });
});
```
This also means you have to take care of more advanced message routing yourself, for instance by publishing and subscribing to different channels. With RedisStore, you get that functionality for free through socket.io channels (io.sockets.of("channel").emit(...)).
A potentially big drawback with this is that socket.io sessions are not shared between workers. This will probably mean problems if you use any of the long-polling transports.
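A common workaround (my own suggestion, not part of the answer above) is sticky load balancing: pin each client to one worker, for instance by hashing the client IP, so that successive long-polling requests from the same client always reach the same process. A minimal sketch of the idea:

```javascript
// Hypothetical sketch: deterministically map a client IP to a worker index,
// so a client's long-polling requests always hit the same process.
function workerFor(ip, numWorkers) {
  var hash = 0;
  for (var i = 0; i < ip.length; i++) {
    hash = (hash * 31 + ip.charCodeAt(i)) >>> 0; // unsigned 32-bit rolling hash
  }
  return hash % numWorkers;
}

console.log(workerFor("203.0.113.7", 4)); // same IP always maps to the same worker
```

In practice you would let your load balancer do this (e.g. an IP-hash upstream) rather than hand-rolling it, but the routing principle is the same.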
I set up a small GitHub project that uses Redis as the datastore, so you can run multiple socket.io server processes: https://github.com/markap/socket.io-scale