I'm trying to load balance an API server using nginx and Docker's native DNS.
I was hoping nginx would round-robin API calls across all available servers, but even when I specify Docker's DNS server as the resolver, nginx forwards every request to only one server.
Relevant section from docker-compose.yml:

proxy:
  restart: always
  build: ./src/nginx/.
  ports:
    - "8080:8080"
  links:
    - api:servers.api
nginx.conf:

worker_processes 2;
events { worker_connections 1024; }

http {
    sendfile on;

    server {
        listen 8080;

        location / {
            resolver_timeout 30s;
            resolver 127.0.0.11 ipv6=off valid=10s;
            set $backend http://servers.api:80;
            proxy_pass $backend;
            proxy_redirect off;
        }
    }
}
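As a quick sanity check (a sketch, assuming the dig utility is installed in the proxy image), Docker's embedded DNS at 127.0.0.11 can be queried from inside the proxy container to see which addresses the alias actually resolves to:

docker-compose exec proxy dig @127.0.0.11 servers.api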
An nginx round-robin load balancer works if I manually specify each server, but I don't want to do that since it can't scale automatically:
worker_processes 2;
events { worker_connections 1024; }

http {
    sendfile on;

    upstream api_servers {
        server project_api_1:80;
        server project_api_2:80;
        server project_api_3:80;
    }

    server {
        listen 8080;

        location / {
            proxy_pass http://api_servers;
            proxy_redirect off;
        }
    }
}
How can I configure nginx so that it detects newly added containers and includes them in the round-robin?
Docker's DNS is responsible for the round robin in this case. Don't use the links option in your compose file; it's not necessary. Look, I'm using this example:
docker-compose.yml:

version: '3'
services:
  api:
    image: my-api-image
  client:
    image: ubuntu:latest
So I start my application with docker-compose up -d api and then scale it: docker-compose scale api=10. Now, inside the client (docker-compose run client bash):
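The same sequence, collected as a runnable block (commands exactly as above):

# start the api service, then scale it to 10 replicas
docker-compose up -d api
docker-compose scale api=10

# open a shell in a throwaway client container for testing
docker-compose run client bash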
root@ce3857690292:/# dig api
...
;; QUESTION SECTION:
;api. IN A
;; ANSWER SECTION:
api. 600 IN A 172.19.0.6
api. 600 IN A 172.19.0.9
api. 600 IN A 172.19.0.7
api. 600 IN A 172.19.0.8
api. 600 IN A 172.19.0.11
api. 600 IN A 172.19.0.2
api. 600 IN A 172.19.0.10
api. 600 IN A 172.19.0.3
api. 600 IN A 172.19.0.5
api. 600 IN A 172.19.0.4
With curl you can see the round robin:
root@1719c10f864a:/# curl -vI api
* Rebuilt URL to: api/
* Trying 172.19.0.6...
* Connected to api (172.19.0.6) port 80 (#0)
...
root@1719c10f864a:/# curl -vI api
* Rebuilt URL to: api/
* Trying 172.19.0.7...
* Connected to api (172.19.0.7) port 80 (#0)
...
root@1719c10f864a:/# curl -vI api
* Rebuilt URL to: api/
* Trying 172.19.0.8...
* Connected to api (172.19.0.8) port 80 (#0)
In your case you need to replace the client service in my docker-compose file with your nginx service and use your api service as the upstream (without links).
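Putting that together with the files from the question, a minimal sketch could look like the following (the image name comes from my example and the build path from the question; the key changes are dropping links and pointing nginx at the api service name directly):

docker-compose.yml:

version: '3'
services:
  api:
    image: my-api-image
  proxy:
    build: ./src/nginx/.
    ports:
      - "8080:8080"

nginx.conf:

worker_processes 2;
events { worker_connections 1024; }

http {
    sendfile on;

    server {
        listen 8080;

        location / {
            # Docker's embedded DNS; re-resolve every 10s so new containers are picked up
            resolver 127.0.0.11 ipv6=off valid=10s;

            # using a variable in proxy_pass makes nginx resolve "api" at runtime
            # instead of once at startup
            set $backend http://api:80;
            proxy_pass $backend;
            proxy_redirect off;
        }
    }
}

After that, docker-compose scale api=10 should work without touching the nginx config.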