We use Nginx as a load balancer for our WebSocket application. Every backend server keeps session state, so every request from a client must be forwarded to the same server. We use the ip_hash directive to achieve this:
upstream app {
    ip_hash;
    server 1;
}
The problem appears when we want to add another backend server:
upstream app {
    ip_hash;
    server 1;
    server 2;
}
New connections go to server 1 and server 2, but this is not what we need in this situation: the load on server 1 continues to increase. We still need sticky sessions, but with something like the least_conn algorithm enabled too, so that our two servers receive approximately equal load.
We also considered using the nginx-sticky-module, but its documentation says that if no sticky cookie is available it falls back to Nginx's default round-robin algorithm, so it does not solve the problem either.
So the question is: can we combine sticky and least-connections logic using Nginx? Do you know of other load balancers that solve this problem?
NGINX Plus supports three session persistence methods, set with the sticky directive. Separately, for the Hash balancing method you can add the consistent parameter to the hash directive; the ketama consistent-hashing algorithm is then used, which results in less remapping when servers are added or removed.
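A minimal sketch of how this could look, assuming NGINX Plus (the sticky directive is a commercial feature; server addresses are placeholders). The sticky cookie method pins a client to the server that was initially chosen by the configured balancing method, so it can be combined with least_conn:

```nginx
upstream app {
    least_conn;                       # new clients go to the least-loaded server
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
    sticky cookie srv_id expires=1h;  # subsequent requests stick to that server
}
```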
Probably using the split_clients module could help
upstream app {
    ip_hash;
    server 127.0.0.1:8001;
}

upstream app_new {
    ip_hash;
    server 127.0.0.1:8002;
}

split_clients "${remote_addr}AAA" $upstream_app {
    50% app_new;
    *   app;
}
This will split your traffic and create the variable $upstream_app, which you can then use like:
server {
    location /some/path/ {
        proxy_pass http://$upstream_app;
    }
}
This is a workaround for the lack of a least_conn load balancer that works with sticky sessions. The "downside" is that if more servers need to be added, the split needs to be redefined, for example:
split_clients "${remote_addr}AAA" $upstream_app {
    30% app_another_server;
    30% app_new;
    *   app;
}
For testing:
for x in {1..10}; do
    curl "0:8080?token=$(LC_ALL=C; cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1)"
done
More info about this module can be found in this article (Performing A/B testing).
You can easily achieve this using HAProxy, and I suggest going through its documentation thoroughly to see how your current setup could benefit. With HAProxy, you'd have something like:
backend nodes
    # Other options above omitted for brevity
    cookie SRV_ID prefix
    server web01 127.0.0.1:9000 cookie web01 check
    server web02 127.0.0.1:9001 cookie web02 check
    server web03 127.0.0.1:9002 cookie web03 check
This simply means that the proxy tracks requests to and from the servers by using a cookie.
However, if you don't want to use HAProxy, I'd suggest changing your session implementation to use an in-memory store such as Redis or Memcached. That way, you can use least_conn or any other algorithm without worrying about sessions.
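A minimal sketch of that shared-session idea, with the assumption made explicit: a plain dict stands in for Redis here (a real deployment would use, e.g., redis-py's get/set against a Redis server), and the class and method names are hypothetical:

```python
import json

class SessionStore:
    """Stand-in for a shared session store. In production the dict
    would be replaced by Redis, so every backend sees the same data."""

    def __init__(self):
        self._db = {}  # dict stands in for the Redis keyspace

    def save(self, session_id: str, data: dict) -> None:
        # Serialize to a string, as Redis stores strings/bytes.
        self._db[session_id] = json.dumps(data)

    def load(self, session_id: str):
        raw = self._db.get(session_id)
        return json.loads(raw) if raw is not None else None
```

Once the session lives in a shared store instead of backend memory, any backend can handle any request, so least_conn (or any other algorithm) becomes safe to use without stickiness.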