I am trying to use nginx for load balancing. I have to use ip_hash because I work with WebSockets. Here is my configuration:
#user  nobody;
worker_processes  3;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    upstream my_http_servers {
        ip_hash;
        server 127.0.0.1:3001;
        server 127.0.0.1:3004;
        server 127.0.0.1:3003;
    }

    server {
        listen       3000;
        server_name  localhost;

        #charset koi8-r;
        #access_log  logs/host.access.log  main;

        location / {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_pass http://my_http_servers;

            # enable WebSockets
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }
}
Now I have all 3 servers and nginx running locally on machine1 (IP: 192.168.10.2).
I also have a frontend application that calls this backend. My frontend runs on 192.168.10.2:4200.
When I open http://192.168.10.2:4200 from machine1, the request goes to, say, server1.
From machine2, which is connected to the same WiFi (IP: 192.168.10.23), I also call http://192.168.10.2:4200, but it still goes to server1.
ip_hash is not balancing the load correctly, and I am not sure what I am doing wrong. I understand that ip_hash makes connections sticky, so all requests from machine1 should go to server1, but shouldn't requests from machine2 go to some other server?
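One way to confirm which backend actually served a request is to expose nginx's $upstream_addr variable in a response header. This is only a debugging sketch layered onto the location block above; the header name X-Upstream is an arbitrary choice:

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://my_http_servers;

        # debugging aid: echo the chosen backend (e.g. 127.0.0.1:3001)
        add_header X-Upstream $upstream_addr always;

        # enable WebSockets
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

The header is then visible in the browser's network tab or in curl -i output against http://192.168.10.2:3000, so each request can be traced to one of the three servers.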
Edit:
I even tried using hash $remote_addr; instead of ip_hash, but all requests still go to the same single server. This is my configuration using hash:
worker_processes  3;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    sendfile        on;
    keepalive_timeout  65;

    upstream my_http_servers {
        hash $remote_addr;
        server 127.0.0.1:3001;
        server 127.0.0.1:3002;
        server 127.0.0.1:3003;
    }

    server {
        listen       3000;
        server_name  localhost;

        location / {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_pass http://my_http_servers;

            # enable WebSockets
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }
}
Answer:
According to the docs:
The first three octets of the client IPv4 address, or the entire IPv6 address, are used as a hashing key.
So, as an example, all addresses of the form 192.168.1.* will be mapped to the same server.
If your server is running on your office network and both machines you tested with are also connected to that network, it probably won't work, because office networks are usually configured so that all devices get IP addresses sharing the same first three octets. In your case, 192.168.10.2 and 192.168.10.23 both reduce to the key 192.168.10, so ip_hash sends both machines to the same backend.
If both machines instead reach the server from the same office network over the internet, they probably share the same external IP, so they will again be mapped to the same server.
And even if you test from two IPs whose first three octets differ, there is still roughly a 1-in-3 chance of hitting the same server, since two unrelated keys land on the same one of three upstreams about a third of the time.
But if you use the "hash" directive instead of "ip_hash", you can combine several request variables into the hash calculation. Example:
hash '$remote_addr $cookie_zzz $http_user_agent';
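In context, "hash" simply replaces ip_hash inside the upstream block from the question. A minimal sketch, where $http_user_agent is just one example variable (and the cookie name zzz above is arbitrary):

    upstream my_http_servers {
        # key on the full client address plus the User-Agent header
        hash '$remote_addr $http_user_agent';
        server 127.0.0.1:3001;
        server 127.0.0.1:3002;
        server 127.0.0.1:3003;
    }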
When you use remote IP addresses with the "hash" directive, they are treated as ordinary string variables, so the full address (not just the first three octets) goes into the hash:
hash '$remote_addr';