I have several instances of socket.io with authentication running under HAProxy, and I need to force the authentication request and the socket connection to go to the same instance. I've set up HAProxy based on this answer to an SO question, with some modifications, as follows:
global
    maxconn 4096 # Total Max Connections. This is dependent on ulimit
    nbproc 2

defaults
    mode http

frontend all 0.0.0.0:80
    timeout client 86400000
    default_backend www_backend
    acl is_websocket hdr(Upgrade) -i WebSocket
    acl is_websocket hdr_beg(Host) -i ws
    use_backend socket_backend if is_websocket

backend www_backend
    balance url_param sessionId
    option forwardfor # This sets X-Forwarded-For
    timeout server 30000
    timeout connect 4000
    server server1 localhost:8081 weight 1 maxconn 1024 check
    server server2 localhost:8082 weight 1 maxconn 1024 check
    server server3 localhost:8083 weight 1 maxconn 1024 check

backend socket_backend
    balance url_param sessionId
    option forwardfor # This sets X-Forwarded-For
    timeout queue 5000
    timeout server 86400000
    timeout connect 86400000
    server server1 localhost:8081 weight 1 maxconn 1024 check
    server server2 localhost:8082 weight 1 maxconn 1024 check
    server server3 localhost:8083 weight 1 maxconn 1024 check
I've tried both url_param (where sessionId is a query-string parameter passed in both the authentication call and the websocket connection) and source as the balance option, but it seems HAProxy only honors these options for HTTP requests and ignores them for the actual websocket connection. The result is that the auth request and the socket connection sometimes end up on different servers, which is unacceptable for our application.
Is there some way to achieve this behavior?
WebSocket connections are inherently sticky. If the client requests a connection upgrade to WebSockets, the target that returns an HTTP 101 status code to accept the connection upgrade is the target used in the WebSockets connection. After the WebSockets upgrade is complete, cookie-based stickiness is not used.
Sticky sessions for Socket.IO: a simple and performant way to use Socket.IO within a cluster. Unlike other packages such as sticky-session, the routing is based on the session ID (the sid query parameter). See also: sticky-session (routing based on connection.remoteAddress).
As we saw in the performance section of this article, a Socket.IO server can sustain 10,000 concurrent connections.
Scalability problems with Socket.IO: Node.js (and therefore Socket.IO) is single-threaded, meaning it cannot take advantage of multi-core processors. To use multiple cores, we need to run a separate Node.js instance for each core. However, these instances would be completely independent and would not share any data.
I use cookie-based balancing, like this:

backend socketio
    mode http
    cookie SIO insert
    server sock1 127.0.0.1:8001 cookie 001
    server sock2 127.0.0.1:8002 cookie 002
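Applied to the setup in the question, a minimal sketch might look like the following (the cookie name SIO and the per-server values s1..s3 are arbitrary labels I've chosen; the original timeouts are omitted for brevity). The key point is that both backends use the same cookie name and the same per-server values, so the websocket upgrade, which carries the cookie set during the authentication request, is routed to the server that handled authentication. This works because browsers send cookies on the upgrade request like on any other HTTP request:

backend www_backend
    mode http
    # insert a stickiness cookie on the first response
    cookie SIO insert indirect nocache
    server server1 localhost:8081 cookie s1 weight 1 maxconn 1024 check
    server server2 localhost:8082 cookie s2 weight 1 maxconn 1024 check
    server server3 localhost:8083 cookie s3 weight 1 maxconn 1024 check

backend socket_backend
    mode http
    # same cookie name and server values, so the upgrade request
    # is pinned to the same server that set the cookie
    cookie SIO insert indirect nocache
    server server1 localhost:8081 cookie s1 weight 1 maxconn 1024 check
    server server2 localhost:8082 cookie s2 weight 1 maxconn 1024 check
    server server3 localhost:8083 cookie s3 weight 1 maxconn 1024 check

The indirect and nocache keywords tell HAProxy to strip the cookie before forwarding the request to the server and to avoid letting shared caches store responses that set it; both are optional here.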
To balance TCP connections, you may have some success with a stickiness table, using the stick match or stick on directives and explicitly setting tcp mode.
Here is an example:
# forward SMTP users to the same server they just used for POP in the
# last 30 minutes
backend pop
    mode tcp
    balance roundrobin
    stick store-request src
    stick-table type ip size 200k expire 30m
    server s1 192.168.1.1:110
    server s2 192.168.1.2:110

backend smtp
    mode tcp
    balance roundrobin
    stick match src table pop
    server s1 192.168.1.1:25
    server s2 192.168.1.2:25
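Adapting that idea to the question's setup, here is a hedged sketch: key the table on the sessionId query parameter rather than the source address, so the authentication request stores which server was chosen, and the websocket handshake (which carries the same sessionId) is matched against it. This assumes an HAProxy version that supports the urlp sample fetch in stick rules, and it relies on both backends declaring the same servers in the same order, since the stored entry references a server in www_backend:

backend www_backend
    mode http
    balance roundrobin
    # remember which server handled each sessionId for 30 minutes
    stick store-request urlp(sessionId)
    stick-table type string len 64 size 200k expire 30m
    server server1 localhost:8081 check
    server server2 localhost:8082 check
    server server3 localhost:8083 check

backend socket_backend
    mode http
    balance roundrobin
    # route the websocket handshake to the server stored by the auth request
    stick match urlp(sessionId) table www_backend
    server server1 localhost:8081 check
    server server2 localhost:8082 check
    server server3 localhost:8083 check

The long websocket timeouts from the original socket_backend would still apply; they're omitted here for brevity.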
Full documentation is available in the HAProxy configuration manual (see the stick-table, stick store-request, and stick match sections).