We are using Node.js + Socket.IO with the transport type set to polling, because we have to pass a token in the headers to authenticate the client, so I cannot avoid the polling transport.
We are now running nginx in front of 4 socket application instances.
I am running into two problems because of this:
Problem #1: When the polling call finishes and the connection upgrades to the websocket transport, I get a 400 Bad Request. I found out this is because the upgrade request lands on another socket server, which rejects the websocket transport.
Problem #2: These connections keep getting re-triggered rapidly, even once the websocket connection is successful.


Problem #2 occurs only when we run multiple instances of the socket server. With a single server it works fine and the connection doesn't terminate.
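For reference, we connect from the client roughly like this (a sketch only, assuming socket.io-client 2.x where extra headers can only be sent on the polling transport; the URL, header name, and token are placeholders):

    const io = require('socket.io-client');

    // Start with polling so the auth token can be sent as an HTTP header,
    // then let the connection upgrade to websocket.
    const socket = io('https://io.yourhost.com', {
      transports: ['polling', 'websocket'],
      transportOptions: {
        polling: {
          extraHeaders: {
            Authorization: 'Bearer <token>'   // placeholder token
          }
        }
      }
    });

    socket.on('connect', () => {
      console.log('connected via', socket.io.engine.transport.name);
    });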
When using NGINX as a load balancer / reverse proxy in front of a multi-instance websocket application, you have to configure Nginx so that, once a client connects to an instance, all subsequent requests from that client are proxied to the same instance, to avoid unwanted disconnections. In other words, you want to implement sticky sessions.
This is well documented in the Socket.io official documentation.
http {
  server {
    listen 3000;
    server_name io.yourhost.com;

    location / {
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header Host $host;
      proxy_pass http://nodes;

      # enable WebSockets
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
    }
  }

  upstream nodes {
    # enable sticky session with either "hash" (uses the complete IP address)
    hash $remote_addr consistent;
    # or "ip_hash" (uses the first three octets of the client IPv4 address, or the entire IPv6 address)
    # ip_hash;
    # or "sticky" (needs commercial subscription)
    # sticky cookie srv_id expires=1h domain=.example.com path=/;

    server app01:3000;
    server app02:3000;
    server app03:3000;
  }
}
The key line is hash $remote_addr consistent;, declared inside the upstream block.
Note that here there are 3 different socket instances deployed on hosts app01, app02, and app03 (all on port 3000). If you want to run all of your instances on the same host, you should run them on different ports (for example: app01:3001, app02:3002, app03:3003).
Moreover, note that if you have multiple socket server instances with several clients connected, you want client1 connected to ServerA to be able to "see" and communicate with client2 connected to ServerB. To achieve this, ServerA and ServerB need to communicate, or at least share information. Socket.IO can handle this for you with little effort, using a Redis instance and the Redis adapter module. Check this part of the socket.io documentation.
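As a rough illustration of the wiring (a minimal sketch, assuming the socket.io-redis package with Socket.IO 2.x and a Redis server reachable at localhost:6379; newer Socket.IO releases use the @socket.io/redis-adapter package instead):

    const io = require('socket.io')(3000);          // this instance's port
    const redisAdapter = require('socket.io-redis');

    // Every instance points at the same Redis server, so events emitted
    // on one instance reach clients connected to the other instances too.
    io.adapter(redisAdapter({ host: 'localhost', port: 6379 }));

    io.on('connection', (socket) => {
      // io.emit() now broadcasts across all instances, not just this one.
      socket.on('message', (msg) => io.emit('message', msg));
    });

Run the same code on each instance (each with its own port), and the sticky-session Nginx configuration above takes care of routing every client back to the instance it first connected to.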
Final note: both links I shared are from the same socket.io documentation page, but they point to different sections of it. I strongly suggest you read the whole page to get a complete overview of the architecture.