If I want to set up nginx with my Docker containers, one option is to define the nginx instance in my docker-compose.yml and link the nginx container to all application containers.
The drawback of this approach, however, is that the docker-compose.yml becomes server-level, since only one nginx container can expose ports 80/443 to the internet.
I'm interested in being able to define several docker-compose.yml files on the same server, but still easily expose the public-facing containers in each compose file via a single server-specific nginx container.
I feel this should be pretty easy, but I haven't been able to find a good resource or example for this.
The docker-compose.yml file allows you to configure and document all your application's service dependencies (other services, caches, databases, queues, etc.). Using the docker-compose CLI command, you can create and start one or more containers for each dependency with a single command (docker-compose up).
Use multiple Compose files when you want to change your app for different environments (e.g., dev, staging, and production) or when you want to run admin tasks against a Compose application. This also gives you a way to share common configuration.
You can combine them all by running docker-compose -f a.yml -f b.yml -f c.yml up.
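As a minimal sketch of how that combination works (the file and service names here are hypothetical), a base file plus an environment-specific override could look like this, with later files overriding and extending earlier ones:

docker-compose.yml (base):

services:
  web:
    image: myapp:latest
    ports:
      - "8080:8080"

docker-compose.prod.yml (production override):

services:
  web:
    restart: always
    environment:
      - DEBUG=0

# later files override and extend the earlier ones
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d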
First, you need to create a network for nginx and the proxied containers:
docker network create nginx_network
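If you want to double-check that the network exists (and, later, which containers have joined it), you can run:

docker network ls
docker network inspect nginx_network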
Next, configure the nginx container in a compose file like this:
services:
  nginx:
    image: your_nginx_image
    ports:
      - "80:80"
      - "443:443"
    networks:
      - nginx_network

networks:
  nginx_network:
    external: true
After that, you can run the proxied containers:
services:
  webapp1:
    image: ...
    container_name: mywebapp1
    networks:
      - nginx_network       # proxy and app must be in same network
      - webapp1_db_network  # you can use additional networks for some stuff
  database:
    image: ...
    networks:
      - webapp1_db_network

networks:
  nginx_network:
    external: true
  webapp1_db_network: ~     # this network won't be accessible from outside
Also, to make this work you need to configure your nginx properly:
server {
    listen 80;
    server_name your_app.example.com;

    # Docker DNS
    resolver 127.0.0.11;

    location / {
        # hack to prevent nginx from resolving the container's host at startup
        set $docker_host "mywebapp1";
        proxy_pass http://$docker_host:8080;
    }
}
You need to tell nginx to use Docker's DNS, so it will be able to access containers by their names.
But note that if you start the nginx container before the others, nginx will try to resolve the other containers' hostnames at startup and fail, because those containers are not running yet. The workaround is to put the hostname into a variable: with this hack, nginx won't try to resolve the host until it receives a request.
With this combination you can have nginx always up, while starting and stopping proxied applications independently.
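As a rough sketch of that workflow (the nginx/ and webapp1/ directory names are just assumptions for the example):

# one-time setup
docker network create nginx_network

# start the proxy and leave it running
docker-compose -f nginx/docker-compose.yml up -d

# start and stop applications independently; nginx stays up
docker-compose -f webapp1/docker-compose.yml up -d
docker-compose -f webapp1/docker-compose.yml down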
Update:
If you want a more dynamic solution, you can modify the nginx config as follows:
server {
    listen 80;
    resolver 127.0.0.11;

    # define server_name with regexp which will read subdomain into variable
    server_name ~^(?<webapp>.+)\.example\.com;

    location / {
        # use variable from regexp to pass request to desired container
        proxy_pass http://$webapp:8080;
    }
}
With this configuration, a request to webapp1.example.com will be passed to the container "webapp1", webapp2.example.com to "webapp2", and so on. You only need to add the DNS records and run the app containers with the right names.
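For example, a hypothetical second app only needs to join nginx_network under the right name and listen on port 8080; no nginx change is required:

services:
  webapp2:
    image: ...
    container_name: webapp2   # must match the subdomain
    networks:
      - nginx_network

networks:
  nginx_network:
    external: true

# quick test without real DNS records: fake the Host header
curl -H "Host: webapp2.example.com" http://localhost/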
The accepted answer is great, but since I am in the trenches with this right now, I'm going to expand upon it with my debugging steps, in the hope that it helps someone (my future self?).
Docker Compose projects often use nginx as a reverse proxy to route HTTP traffic to the other Docker services. In my case, nginx was a service in projectfolder/docker-compose.yml that was connected to two Docker networks.
One was the default network created when I ran docker-compose up on projectfolder/docker-compose.yml. It is named projectfolder_default, and services connect to it by default UNLESS a service has its own networks property listing another network; in that case, make sure you add - default to that list. When I ran docker network ls I saw projectfolder_default in the list, and when I ran docker network inspect projectfolder_default I saw the nginx container, so everything was good.
The other was a network called my_custom_network that I set up myself. I had a startup script that created it if it did not exist, using https://stackoverflow.com/a/53052379/13815107. I needed it in order to talk to the web service in otherproject/docker-compose.yml. I had correctly added my_custom_network to:
- the nginx service's networks list in projectfolder/docker-compose.yml
- the web service's networks list in otherproject/docker-compose.yml
The network showed up and had the right containers when I ran docker network ls and docker network inspect my_custom_network.
However, I assumed that the proxy_pass to http://web in my server.conf would map to the Docker service web.projectfolder_default. I was mistaken. To test this, I opened a shell on the nginx container (docker exec -it nginx sh). When I ran ping web (you may need apt-get update and apt-get install iputils-ping first), it succeeded, but it printed an address on my_custom_network, which is how I figured out the mistake.
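A condensed version of that debugging session (the container name "nginx" and the Debian-based image are assumptions) looks like this:

# open a shell inside the running nginx container
docker exec -it nginx sh

# inside the container: install ping if the image doesn't ship it
apt-get update && apt-get install -y iputils-ping

# check what the name 'web' actually resolves to
ping -c 1 web

# from the host: list the containers and aliases attached to each network
docker network inspect projectfolder_default
docker network inspect my_custom_network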
Update: I tried using http://web.my_custom_network in server.conf.template and it routed great, but my web app (Django-based) choked on the underscores in the URL. I renamed web to web2 in otherproject/docker-compose.yml and then used something like docker stop otherproject_web and docker rm otherproject_web to get rid of the bad one.
projectfolder/docker-compose.yml
services:
  # http://web did NOT map to this service!! Use http://web.projectfolder_default or change the names
  web:
    ...
  nginx:
    ...
    links:
      - web
    networks:
      - default
      - my_custom_network
  ...

networks:
  my_custom_network:
    external: true
otherproject/docker-compose.yml
services:
  # http://web connected to this service instead. You could use http://web.my_custom_network to call it out instead
  web:
    ...
    networks:
      - default
      - my_custom_network
  ...

networks:
  my_custom_network:
    external: true
projectfolder/.../nginx/server.conf.template (next to Dockerfile) ...
server {
    ...
    location /auth {
        internal;
        # This routed to the wrong 'web'
        proxy_pass http://web:9001;
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
    }
    location / {
        alias /data/dist/;
    }
    location /robots.txt {
        alias /robots.txt;
    }
    # Project Folder backend
    location ~ ^/(api|login|logout)/ {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_connect_timeout 300s;
        proxy_read_timeout 300s;
        # This routed to the wrong 'web'
        proxy_pass http://web:9001;
    }
    # Other project UI
    location /other-project {
        alias /data/other-project-client/dist/;
    }
    # Other project Django server
    location ~ ^/other-project/(rest)/ {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_connect_timeout 300s;
        proxy_read_timeout 300s;
        # nginx will route this, but some frameworks like Django won't work (they don't like underscores in hostnames)
        # I would rename the service to web2 in the yml and use http://web2
        proxy_pass http://web.my_custom_network:8000;
    }
}
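For reference, this is roughly what the last block would look like after renaming the second project's service to web2 (assuming it still listens on port 8000):

# Other project Django server, after the rename
location ~ ^/other-project/(rest)/ {
    ...
    proxy_pass http://web2:8000;
}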