 

Nginx, load balancing using sticky and least connections algorithms simultaneously

We use Nginx as a load balancer for our websocket application. Every backend server keeps session information, so every request from a client must be forwarded to the same server. We use the ip_hash directive to achieve this:

upstream app {
    ip_hash;
    server 1;
}

The problem appears when we want to add another backend server:

upstream app {
    ip_hash;
    server 1;
    server 2;
}

New connections go to server 1 and server 2, but this is not what we need in this situation: the load on server 1 continues to increase. We still need sticky sessions, but with the least_conn algorithm enabled too, so that our two servers receive approximately equal load.

We also considered using the nginx-sticky-module, but the documentation says that if no sticky cookie is available it falls back to Nginx's default round-robin algorithm, so it does not solve the problem either.

So the question is: can we combine sticky and least-connections logic using Nginx? Do you know of other load balancers that solve this problem?

Alex Emelin, asked Oct 16 '14

People also ask

What algorithm does NGINX use?

For the Hash method, include the consistent parameter in the hash directive; NGINX Plus then uses the ketama hashing algorithm, which results in less remapping.
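As a rough illustration (the addresses here are placeholders, not from the question), a hash-based upstream with the consistent parameter looks like:

upstream backend {
    # ketama consistent hashing: adding or removing a server remaps few keys
    hash $remote_addr consistent;
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
}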

Does NGINX support sticky sessions?

NGINX Plus supports three session persistence methods. The methods are set with the sticky directive.
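For instance, the cookie-based method looks roughly like this (NGINX Plus only; the cookie name, lifetime and addresses are placeholders):

upstream app {
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
    # pin each client to one backend via a "srv_id" cookie issued by NGINX Plus
    sticky cookie srv_id expires=1h path=/;
}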

How does NGINX do load balancing?

Load balancing with nginx uses a round-robin algorithm by default if no other method is defined, as in the first example above. With the round-robin scheme, each server is selected in turn according to the order you set them in the load-balancer.conf file.
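For example, an upstream with no balancing directive is served round-robin (addresses are placeholders):

upstream app {
    # no balancing directive: requests rotate through the servers in order
    server 127.0.0.1:8001;
    server 127.0.0.1:8002 weight=2;  # optional weight skews the rotation
}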

Which algorithm is best for load balancing?

Round-robin load balancing is the simplest and most commonly used load balancing algorithm. Client requests are distributed to the application servers in simple rotation.


2 Answers

Using the split_clients module could probably help:

upstream app {
    ip_hash;
    server 127.0.0.1:8001;
}

upstream app_new {
    ip_hash;
    server 127.0.0.1:8002;
}

split_clients "${remote_addr}AAA" $upstream_app {
    50% app_new;
    *   app;
}

This will split your traffic and create the variable $upstream_app, which you can then use like:

server {
    location /some/path/ {
        proxy_pass http://$upstream_app;
    }
}

This is a workaround to get both sticky sessions and an approximately even spread of load (what least_conn would give you). The "downside" is that if more servers need to be added, a new upstream block needs to be created and the split percentages adjusted, for example:

split_clients "${remote_addr}AAA" $upstream_app {
    30% app_another_server;
    30% app_new;
    *   app;
}
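The extra upstream referenced above would be defined like the existing ones (the address here is a placeholder):

upstream app_another_server {
    ip_hash;
    server 127.0.0.1:8003;
}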

For testing:

# send 10 requests with random tokens and check which backend serves each one
for x in {1..10}; do \
  curl "0:8080?token=$(LC_ALL=C; cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1)"; done

More info about this module can be found in this article (Performing A/B testing).

nbari, answered Sep 20 '22

You can easily achieve this using HAProxy, and I suggest going through it thoroughly to see how your current setup could benefit.

With HAProxy, you'd have something like:

backend nodes
    # Other options above omitted for brevity
    cookie SRV_ID prefix
    # each server needs its own cookie value
    server web01 127.0.0.1:9000 check cookie web01
    server web02 127.0.0.1:9001 check cookie web02
    server web03 127.0.0.1:9002 check cookie web03

This simply means that the proxy tracks requests to and from the servers by using a cookie.
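For completeness, a minimal (hypothetical) frontend pointing at that backend could look like:

frontend http-in
    bind *:8080
    default_backend nodes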

However, if you don't want to use HAProxy, I'd suggest you change your session implementation to use an in-memory store such as Redis or Memcached. This way, you can use least_conn or any other algorithm without worrying about sessions.
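With sessions stored externally, the upstream from the question can simply switch to least_conn (addresses are placeholders):

upstream app {
    least_conn;
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
}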

Chibueze Opata, answered Sep 19 '22