Node clustering with websockets

I have a node cluster where the master responds to http requests. The server also listens for websocket connections (via socket.io). A client connects to the server via the said websocket. The client then chooses between various games (each node process handles one game).

The questions I have are the following:

  • Should I open a new connection for each node process? How do I tell the client that it should connect to a specific node process X? (Because the server might handle incoming connection requests on its own)
  • Is it possible to pass a socket to a node process, so that there is no need for opening a new connection?
  • What are the drawbacks if I just use one connection (in the master process) and forward the user's messages to the respective node processes and the processes' messages back to the user? (I feel that it costs a lot of CPU to copy rather big objects when sending messages between the processes)
InsOp asked Mar 24 '16

People also ask

Is Node.js good for WebSockets?

Node.js can maintain many hundreds of WebSocket connections simultaneously. WebSockets on the server can become complicated because the connection upgrade from HTTP to WebSocket requires handling. This is why developers commonly use a library to manage it for them.

Should I use WebSockets or SignalR?

It uses WebSockets whenever possible. For most applications, we recommend SignalR over raw WebSockets. SignalR provides transport fallback for environments where WebSockets isn't available. It also provides a basic remote procedure call app model.

Does SignalR use WebSockets?

SignalR uses the new WebSocket transport where available and falls back to older transports where necessary. While you could certainly write your app using WebSocket directly, using SignalR means that a lot of the extra functionality you would need to implement is already done for you.

Do WebSockets work with load balancers?

The load balancer knows how to upgrade an HTTP connection to a WebSocket connection and once that happens, messages will travel back and forth through a WebSocket tunnel. However, you must design your system for scale if you plan to load balance multiple WebSocket servers.


1 Answer

Is it possible to pass a socket to a node process, so that there is no need for opening a new connection?

You can send a plain TCP socket to another node process as described in the node.js doc here. The basic idea is this:

// in the master: fork a child process and hand it an already-connected TCP socket
const child = require('child_process').fork('child.js');
child.send('socket', socket);

Then, in child.js, you would have this:

process.on('message', (m, socket) => {
  if (m === 'socket') {
    // you have a socket here
  }
});

The 'socket' message identifier can be any message name you choose; it is not special. When you call child.send() and the data you are sending is recognized as a socket, node.js uses platform-specific inter-process communication to share that socket with the other process.

But I believe this only works for plain sockets that have no local state established beyond the TCP state. I have not tried it with an established webSocket connection myself, but I assume it does not work in that case: once a webSocket has higher-level state associated with it beyond the TCP socket (such as encryption keys), the OS will not automatically transfer that state to the new process.
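
For illustration, here's a minimal sketch of that hand-off with a raw TCP server (the file names master.js/child.js and port 8000 are assumptions, not from the original):

// master.js - accept raw TCP connections and immediately pass each socket to a child,
// before any higher-level (TLS/webSocket) state is established on it
const net = require('net');
const { fork } = require('child_process');

const child = fork('child.js');

const server = net.createServer({ pauseOnConnect: true }, (socket) => {
  child.send('socket', socket);
});
server.listen(8000);

// child.js - receive the socket and use it like any locally accepted connection
process.on('message', (m, socket) => {
  if (m === 'socket' && socket) {
    socket.resume();                       // it arrived paused (pauseOnConnect)
    socket.end('handled by the child process\n');
  }
});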

Should I open a new connection for each node process? How do I tell the client that it should connect to a specific node process X? (Because the server might handle incoming connection requests on its own)

This is probably the simplest means of getting a socket.io connection to the new process. If you make sure that your new process is listening on a unique port number and that it supports CORS, then you can take the socket.io connection you already have between the master process and the client and send a message on it that tells the client where to reconnect (which port number). The client then listens for that message and makes a new connection to that destination, as sketched below.
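
A rough sketch of that hand-off (the 'reconnect-to' event name, the ports, and the host are made up for illustration; the game-port lookup is hypothetical):

// master process - tell the client which port its game process listens on
const io = require('socket.io')(3000);
io.on('connection', (socket) => {
  const gamePort = 3001;                           // would come from whatever game the client chose
  socket.emit('reconnect-to', { port: gamePort });
});

// client - listen for the redirect and open a second socket.io connection to the game process
socket.on('reconnect-to', ({ port }) => {
  const gameSocket = io('http://example.com:' + port);
  gameSocket.on('connect', () => { /* talk to the game process directly */ });
});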

What are the drawbacks if I just use one connection (in the master process) and forward the user's messages to the respective node processes and the processes' messages back to the user? (I feel that it costs a lot of CPU to copy rather big objects when sending messages between the processes)

The drawbacks are as you surmise: your master process has to spend CPU time being the middleman, forwarding packets both ways. Whether this extra work is significant to you depends entirely on the context and has to be determined by measurement.
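
If you go that route, the relay layer in the master might look roughly like this (the 'game-message' event name is made up, and game stands for a child process handle obtained via fork()); note that every payload is serialized over IPC in both directions, which is where the copying cost comes from:

// master process - relay between client socket.io connections and a forked game process (sketch)
const sockets = new Map();                         // socket.id -> client socket

game.on('message', (msg) => {                      // game child -> client direction
  const s = sockets.get(msg.id);
  if (s) s.emit('game-message', msg.data);
});

io.on('connection', (socket) => {
  sockets.set(socket.id, socket);
  socket.on('game-message', (data) => {            // client -> game child direction
    game.send({ id: socket.id, data });            // the whole object is copied across the IPC channel
  });
  socket.on('disconnect', () => sockets.delete(socket.id));
});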


Here's some more info I discovered. It appears that if an incoming socket.io connection that arrives on the master is immediately shipped off to a cluster child before the connection establishes its initial socket.io state, then this concept could work for socket.io connections too.

Here's an article on sending a connection to another server with implementation code. This appears to be done immediately at connection time so it should work for an incoming socket.io connection that is destined for a specific cluster. The idea here is that there's sticky assignment to a specific cluster process and all incoming connections of any kind that reach the master are immediately transferred over to the cluster child before they establish any state.
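
The sticky-assignment idea can be sketched roughly like this (the hash-by-IP worker selection, the worker count, and port 8000 are assumptions for illustration; the article's actual implementation may differ):

// master: accept raw connections, pick a worker by client IP, and hand over the paused socket
const net = require('net');
const cluster = require('cluster');

if (cluster.isMaster) {
  const workers = [];
  for (let i = 0; i < 4; i++) workers.push(cluster.fork());

  net.createServer({ pauseOnConnect: true }, (connection) => {
    // the same client IP always maps to the same worker, so socket.io state stays in one process
    const index = ipHash(connection.remoteAddress) % workers.length;
    workers[index].send('sticky:connection', connection);
  }).listen(8000);
} else {
  // worker: run the real http + socket.io server and feed it the handed-over connections
  const server = require('http').createServer();
  // const io = require('socket.io')(server);
  server.listen(0, 'localhost');                   // real traffic arrives via the master, not this port
  process.on('message', (msg, connection) => {
    if (msg === 'sticky:connection' && connection) {
      server.emit('connection', connection);
      connection.resume();
    }
  });
}

function ipHash(ip) {
  // trivial hash for illustration only
  return String(ip || '').split('').reduce((acc, ch) => acc + ch.charCodeAt(0), 0);
}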

jfriend00 answered Oct 03 '22