I am trying to set up an nginx reverse proxy for multiple containers.
I have two containers running Node apps, one listening on port 8085 and the other on 8086, and I want to access them via
node.app1.com
node.app2.com
So I used jwilder/nginx-proxy:latest, which sits in front of both containers and acts as a reverse proxy. Here is my compose.yml file:
version: "3"
services:
  node-proxy:
    build: ./node-proxy
    container_name: node-proxy
    restart: always
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
    ports:
      - "80:80"
      - "443:443"
  node-app1:
    build: ./app1
    container_name: node-app1
    restart: always
    environment:
      VIRTUAL_HOST: node.app1.com
    depends_on:
      - node-proxy
  node-app2:
    build: ./app2
    container_name: node-app2
    restart: always
    environment:
      VIRTUAL_HOST: node.app2.com
    depends_on:
      - node-proxy
./node-proxy/Dockerfile:
FROM jwilder/nginx-proxy:latest
./app1/app1.js:
var http = require("http");
http.createServer(function (request, response) {
response.writeHead(200, {'Content-Type': 'text/plain'});
response.end('Hello World 1\n');
}).listen(8085);
./app1/Dockerfile:
FROM node:6.11
WORKDIR /app2
COPY app1.js .
EXPOSE 8085
CMD node app1.js
./app2/app2.js:
var http = require("http");
http.createServer(function (request, response) {
response.writeHead(200, {'Content-Type': 'text/plain'});
response.end('Hello World 2\n');
}).listen(8086);
./app2/Dockerfile:
FROM node:6.11
WORKDIR /app2
COPY app2.js .
EXPOSE 8086
CMD node app2.js
So when I run
docker-compose up
all my containers are up and running, but when I open node.app1.com the browser says unknown host.
To check whether the request is reaching the proxy, I tried calling http://localhost from the browser and it returned a 503.
I also checked the nginx config inside the container with
docker exec -it node-proxy bash
cat /etc/nginx/conf.d/default.conf
and the virtual hosts are there, but I think the request for node.app1.com never reaches the proxy. I can't see what I've missed; can someone help me out with this?
Thanks for your time
When you set links or depends_on to other services, docker-compose makes those services reachable by their container_name on the same Docker network by default.
In your case, I would suggest adding links, as @Mathias's answer recommends:
version: "3"
services:
  node-app1:
    build: ./app1
    container_name: node-app1
    restart: always
    expose:
      - "8085"
    environment:
      VIRTUAL_HOST: node.app1.com
  node-app2:
    build: ./app2
    container_name: node-app2
    restart: always
    expose:
      - "8086"
    environment:
      VIRTUAL_HOST: node.app2.com
  node-proxy:
    build: ./node-proxy
    container_name: node-proxy
    restart: always
    links:
      - node-app1
      - node-app2
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
    ports:
      - "80:80"
      - "443:443"
Updated:
I noticed that the generated Nginx config has an upstream directive holding the backend hosts. You should be able to curl node-app1 like this:
$ curl -H 'Host: node.app1.com' localhost
Hello World 1
You can also modify the last line of the /etc/hosts file in your node-proxy container:
172.20.0.4 [docker-network-alias] node.app1.com node.app2.com
Then you should be able to visit http://node.app1.com directly from inside your node-proxy container.
Here's a tutorial on setting up nginx virtual hosts on Ubuntu 16.04.
Comment I:
In my understanding, nginx-proxy is meant to proxy requests to back-end services, which do not have to register a hostname in the /etc/hosts file. So we fire a request with a Host header matching the virtual hostname in the Nginx upstream block.
nginx-proxy does this part for you when you set the VIRTUAL_HOST environment variable on each app container. But that does not mean we can directly visit node.app1.com in a browser and expect the request to be proxied and answered by the node-app1 container.
Back to the request-forwarding part: the request arrives at localhost on port 80/443, where Nginx is listening. Nginx then checks the Host header to select the matching server block. That is why you cannot directly visit http://node.app1.com in your browser: that hostname is never actually registered in /etc/hosts (or DNS), so it will never resolve to any server, app, or our nginx-proxy.
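As an illustration only (not the exact file nginx-proxy generates; the IP address here is made up), the generated config follows roughly this shape, which is why the Host header drives the routing:

```nginx
# Hypothetical sketch of a per-VIRTUAL_HOST block generated by nginx-proxy
upstream node.app1.com {
    # container IP and port discovered through the Docker socket
    server 172.20.0.2:8085;
}
server {
    listen 80;
    server_name node.app1.com;   # matched against the request's Host header
    location / {
        proxy_pass http://node.app1.com;
    }
}
```

A request that reaches port 80 with `Host: node.app1.com` falls into this server block and is forwarded to the app container; any other hostname never matches it.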
If we want to visit that hostname from a browser, the extra /etc/hosts setting is needed.
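For example, to test from a browser running on the Docker host itself (assuming, as in your compose file, that the proxy publishes port 80 on localhost), you can point the virtual hostnames at the loopback address in the host machine's /etc/hosts:

```
# /etc/hosts on the host machine (not inside a container)
127.0.0.1   node.app1.com node.app2.com
```

The browser then resolves node.app1.com to 127.0.0.1, sends the request to the proxy on port 80 with `Host: node.app1.com`, and nginx-proxy forwards it to the right container.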
The nginx-proxy project supplies some template settings, so you might be able to read the IPs of the app containers and their VIRTUAL_HOST values and append them to the /etc/hosts file. But done that way, you would be visiting the node app servers directly instead of going through nginx-proxy.
Without production-level concerns, I would suggest appending the app domains to the last line of the /etc/hosts file set by nginx-proxy; then it should work as you expect. Otherwise, the work of dynamically binding hostnames from the nginx-proxy templates is necessary.
Look at the ports of your app / Dockerfile:
./app1/app1.js
}).listen(8085);
and
./app1/Dockerfile
Expose 8086
They are mismatched.
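A corrected ./app1/Dockerfile keeps EXPOSE in sync with the port the app actually listens on (8085 here). This matters because when a container exposes a single port, nginx-proxy uses it as the backend port:

```dockerfile
FROM node:6.11
WORKDIR /app1
COPY app1.js .
# must match the port in app1.js: .listen(8085)
EXPOSE 8085
CMD ["node", "app1.js"]
```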
The part I was missing is that jwilder/nginx-proxy watches Docker (through the mounted socket) to discover the containers that need to be proxied.
Original post:
I guess your problem is that the reverse proxy container cannot reach the apps. Therefore, remove the depends_on from node-app1 and node-app2 and add to node-proxy:
links:
  - node-app1
  - node-app2
The reverse proxy requires both apps to be started, not the other way around. Also, use links instead of depends_on.
From the docs:
depends_on
Express dependency between services, which has two effects:
docker-compose up will start services in dependency order. In the following example, db and redis will be started before web.
docker-compose up SERVICE will automatically include SERVICE’s dependencies. In the following example, docker-compose up web will also create and start db and redis.
links
Containers for the linked service will be reachable at a hostname identical to the alias, or the service name if no alias was specified.
Links also express dependency between services in the same way as depends_on, so they determine the order of service startup.
I'm also not sure how you get the IP addresses of these containers into your proxy config. As the documentation says, you can use the alias or the service name instead (in your case, node-app1 and node-app2).
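If you were writing the proxy config by hand instead of relying on the one nginx-proxy generates, a sketch using the compose service name as the upstream host (assuming the containers share a Docker network, where Docker's embedded DNS resolves service names) could look like:

```nginx
upstream app1 {
    # the service name resolves via Docker's embedded DNS on a shared network
    server node-app1:8085;
}
server {
    listen 80;
    server_name node.app1.com;
    location / {
        proxy_pass http://app1;
    }
}
```

This avoids hard-coding container IPs, which change whenever containers are recreated.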