Cookie based Load Balancing for WebSockets?

We are currently building an online application that uses Node.js on the server side with a WebSocket listener. It has two different parts: one serves pages and uses Node.js with Express + EJS; the other is a completely separate app that only includes the socket.io library for WebSockets. So we have run into the issue of scaling the WebSocket part.

One solution we've found is to use Redis and share socket information among servers, but because of our architecture that would also require sharing loads of other information, which would create a huge overhead on the servers.

After this intro, my question is: is it possible to use cookie-based load balancing for WebSockets? For example, every connection from a user with the cookie server=server1 would always be forwarded to server1, every connection with the cookie server=server2 would be forwarded to server2, and connections with no such cookie would be forwarded to the least busy server.

UPDATE: As one 'answer' says, yes, I know this exists; I just did not remember that it is called sticky sessions. But the question remains: will that work for WebSockets? Are there any possible complications?
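To make the idea concrete, here is roughly what I have in mind, sketched as an HAProxy-style configuration for recent HAProxy versions (all backend names, addresses and ports are placeholders, not our real setup):

frontend ws_in
   bind *:80
   # route requests that already carry a "server" cookie
   acl has_s1 req.cook(server) -m str server1
   acl has_s2 req.cook(server) -m str server2
   use_backend bk_server1 if has_s1
   use_backend bk_server2 if has_s2
   default_backend bk_any          # no cookie yet

backend bk_server1
   server server1 10.0.0.1:8080

backend bk_server2
   server server2 10.0.0.2:8080

backend bk_any
   balance leastconn               # pick the least busy server for new clients
   server server1 10.0.0.1:8080
   server server2 10.0.0.2:8080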

Alexey Kamenskiy asked Jul 25 '12

1 Answer

We had a similar problem show up in our Node.js production stack. We have two servers using WebSockets that work for normal use cases, but occasionally the load balancer would bounce these connections between the two servers, which caused problems. (We have session code in place on the backend that should have handled this, but it did not do so properly.)

We tried enabling Sticky Sessions on the Barracuda load balancer in front of these servers, but found that it blocked WebSocket traffic because of how it operates. I have not researched exactly why, as little information is available online, but it appears to be due to how the balancer strips the headers off an HTTP request, grabs the cookie, and forwards the request to the correct backend server. Since a WebSocket connection starts as HTTP and is then upgraded, the load balancer did not notice the difference and tried to do the same HTTP processing, which caused the WebSocket connection to fail and disconnected the user.
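For context, a WebSocket connection begins as an ordinary HTTP GET carrying Upgrade headers, roughly like the sketch below (path and values are placeholders). Once the server answers with 101 Switching Protocols, the same TCP connection becomes a long-lived tunnel, so any proxy in the path has to pass these headers through untouched and then stop treating the stream as HTTP.

 GET /chat HTTP/1.1
 Host: example.com
 Upgrade: websocket
 Connection: Upgrade
 Sec-WebSocket-Key: <random base64 nonce>
 Sec-WebSocket-Version: 13
 Cookie: <sticky cookie set by an earlier response>

 HTTP/1.1 101 Switching Protocols
 Upgrade: websocket
 Connection: Upgrade
 Sec-WebSocket-Accept: <hash derived from the client key>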

The following is what we currently have in place, and it is working very well. We still use the Barracuda load balancers in front of our backend servers, but we do not have Sticky Sessions enabled on them. Instead, on each backend server HAProxy sits in front of the application server; HAProxy properly supports WebSockets and can provide sticky sessions in a 'roundabout' way.


Request Flow List

  1. Incoming client request hits the primary Barracuda load balancer
  2. The load balancer forwards the request to either of the active backend servers
  3. HAProxy on that server receives the request and checks for its 'sticky cookie'
  4. Based on the cookie, HAProxy forwards the request to the correct backend application server

Request Flow Diagram

 WebSocket Request  /--> Barracuda 1 -->\   /--> Host 1 -->\   /--> App 1
------------------->                     -->                -->
                    \--> Barracuda 2 -->/   \--> Host 2 -->/   \--> App 1

Where the arrows branch and then rejoin, the request may pass through either of the two nodes at that stage of the flow.


HAProxy Configuration Details

backend app_1
   cookie ha_app_1 insert
   server host1 10.0.0.101:8001 weight 1 maxconn 1024 cookie host_1 check
   server host2 10.0.0.102:8001 weight 1 maxconn 1024 cookie host_2 check

In the above configuration:

  • cookie ha_app_1 insert tells HAProxy to insert a cookie named ha_app_1 into the response
  • cookie host_1 and cookie host_2 set the cookie value for each server (check simply enables health checks on that server)
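
The snippet above only shows the backend section. For completeness, here is a minimal sketch of how the rest of the HAProxy configuration might look, including the long timeouts that persistent WebSocket connections usually need on HAProxy 1.5 and later; the section names, ports and timeout values here are assumptions, not taken from the answer above:

defaults
   mode http
   timeout connect 5s
   timeout client  1h     # keep idle client (WebSocket) connections open
   timeout server  1h
   timeout tunnel  1h     # applies once the connection has been upgraded

frontend ws_in
   bind *:80              # the Barracuda forwards this host's traffic here
   default_backend app_1  # the backend shown above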
Myles Steinhauser answered Sep 25 '22