
Socket.io distribute across different servers

I want to set up the socket.io server across 3 different machines. I have a load balancer set up to distribute requests across the different servers, but how would I share the socket object that I get in the connection handler in socket.io between the different servers? I know that we could use the RedisStore pub/sub for publishing and subscribing to Redis events, but suppose I have a client A connected to machine 1 and a client B connected to machine 3. How would client A send a message to client B? Or is there some other architecture in socket.io that I could use to achieve this?

asked Feb 15 '15 by anonymous123



1 Answer

As mentioned in the comments, a socket is a connection between two specific machines (the client and the server in this case), so it can't be shared across servers. For a basic architecture that solves this problem, you'll need three components: a load balancer, the socket servers, and a messaging system (Redis, RabbitMQ, etc.).

The Load Balancer

The outermost layer is the load balancer. Not all browsers support web sockets, so web socket libraries provide an HTTP polling fallback. Essentially, the client initiates a handshake over HTTP, and the server tries to upgrade the connection to a bidirectional binary protocol built on top of straight TCP. If the upgrade succeeds, the client has a persistent, one-to-one connection with that server via web sockets. If it fails, the socket library handles requests over HTTP with polling to simulate a web socket.

Thus, your load balancer must be configured for sticky sessions, so that all of a given client's requests go to the same server; otherwise (particularly with the polling fallback, where every poll is a separate HTTP request) several servers could each believe they own the client's connection and try to communicate with it.
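
For example, with nginx (an assumption here; any load balancer that supports sticky sessions will do), the configuration might look roughly like this:

    # Sketch of an nginx load balancer: sticky routing plus WebSocket upgrade pass-through.
    # Backend addresses are placeholders for machines 1-3.
    upstream socket_nodes {
        ip_hash;                  # pin each client IP to the same backend
        server 10.0.0.1:3000;     # machine 1
        server 10.0.0.2:3000;     # machine 2
        server 10.0.0.3:3000;     # machine 3
    }

    server {
        listen 80;

        location / {
            proxy_pass http://socket_nodes;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
        }
    }

ip_hash is the simplest form of stickiness; cookie-based stickiness works too and handles clients that share a single IP behind a NAT.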

The Server and Redis

As expected, the server handles all of the client connections and manages all communication to and from its clients. However, when a client sends a message, the server publishes that message to Redis, and Redis notifies its subscribers (the other socket servers) that a new message is available. On getting a message from Redis, each server checks whether the recipient is one of its own connected clients and, if so, delivers the message.
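
A rough sketch of what that could look like on each socket server, assuming socket.io together with the callback-style node "redis" client; the "chat" channel, the "register"/"private message" event names, and the message shape are illustrative assumptions rather than anything socket.io prescribes:

    // Runs on each of the 3 socket servers.
    var io    = require('socket.io')(3000);
    var redis = require('redis');

    var pub = redis.createClient();   // publishes outgoing messages
    var sub = redis.createClient();   // receives messages published by any server

    var clients = {};                 // userId -> socket, for clients on THIS machine

    io.on('connection', function (socket) {
      socket.on('register', function (userId) {
        clients[userId] = socket;     // "server stores client with details"
      });

      socket.on('private message', function (msg) {
        // msg = { to: 'B', text: '...' }; publish so every server can see it
        pub.publish('chat', JSON.stringify(msg));
      });

      socket.on('disconnect', function () {
        // remove this socket from `clients` here
      });
    });

    sub.subscribe('chat');
    sub.on('message', function (channel, raw) {
      var msg = JSON.parse(raw);
      var target = clients[msg.to];
      if (target) {                   // only the machine holding the recipient delivers it
        target.emit('private message', msg);
      }
    });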

If we were to step through your scenario, it might look something like this:

Resources

  1. A load balancer
  2. 3 socket servers (machines 1, 2, and 3)
  3. Redis
  4. Clients (clients A and B)

Steps

  1. Client A requests a socket connection
    • Load balancer routes the request to machine 1 and, with sticky sessions, will route all of that client's future requests there as well (important if the polling fallback is in use)
    • Socket server tries to upgrade client to a bidirectional socket
    • Server stores client with details
  2. Client B requests a socket connection
    • Same as step 1, but connects with machine 3
  3. Client A sends a message to client B
    • Client A sends a message to machine 1
    • Machine 1 publishes the message to Redis
    • Redis notifies subscribers of the new message, including machine 3
    • Machine 3 notifies client B of the message
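
On the client side, those steps might look roughly like this (the hostname, user ids, and event names carried over from the server sketch above are assumptions):

    // Client A (browser), connecting through the load balancer; it lands on machine 1.
    var socket = io('http://example.com');
    socket.emit('register', 'A');
    socket.emit('private message', { to: 'B', text: 'hello B' });

    // Client B, which happened to land on machine 3, receives it the same way:
    var socketB = io('http://example.com');
    socketB.emit('register', 'B');
    socketB.on('private message', function (msg) {
      console.log('received:', msg.text);   // "hello B"
    });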

Of course, there are optimizations around not sending notifications to every single machine that would be useful at scale, but hopefully this is a good summary of what you'd need.
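
As one example of such an optimization (my own assumption, building on the sketch above rather than anything socket.io prescribes): publish to a per-recipient channel and have each server subscribe only for the clients currently connected to it, so a message only reaches the machines that care about it:

    // Delta from the earlier sketch: subscribe per user, publish per recipient.
    socket.on('register', function (userId) {
      clients[userId] = socket;
      sub.subscribe('chat:' + userId);      // this machine only hears about its own users
    });

    socket.on('private message', function (msg) {
      pub.publish('chat:' + msg.to, JSON.stringify(msg));   // reaches only interested machines
    });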

answered Oct 19 '22 by EmptyArsenal