 

Running multiple docker-compose files with nginx reverse proxy

I asked a question here and got part of my problem solved, but I was advised to create another question because it started to get a bit lengthy in the comments.

I'm trying to use Docker to run multiple PHP, MySQL & Apache based apps on my Mac, all of which would use different docker-compose.yml files (more details in the post I linked). I have quite a few repositories, some of which communicate with one another, and not all of them are on the same PHP version. Because of this, I don't think it's wise to cram 20+ separate repositories into one single docker-compose.yml file. I'd like to have a separate docker-compose.yml file for each repository, and I want to be able to use an /etc/hosts entry for each app so that I don't have to specify the port. For example, I would access two different repositories at http://dockertest.com and http://dockertest2.com (using /etc/hosts entries), rather than having to specify the port like http://dockertest.com:8080 and http://dockertest.com:8081.

Using the accepted answer from my other post I was able to get one app running at a time (one docker-compose.yml file), but if I try to launch another with docker-compose up -d it results in an error because port 80 is already taken. How can I run multiple Docker apps at the same time, each with its own docker-compose.yml file, without having to specify the port in the URL?

Here's a docker-compose.yml file for the app I made. In my /etc/hosts I have 127.0.0.1 dockertest.com

version: "3.3"
services:
  php:
    build: './php/'
    networks:
      - backend
    volumes:
      - ./public_html/:/var/www/html/
  apache:
    build: './apache/'
    depends_on:
      - php
      - mysql
    networks:
      - frontend
      - backend
    volumes:
      - ./public_html/:/var/www/html/
    environment:
      - VIRTUAL_HOST=dockertest.com
  mysql:
    image: mysql:5.6.40
    networks:
      - backend
    environment:
      - MYSQL_ROOT_PASSWORD=rootpassword
  nginx-proxy:
    image: jwilder/nginx-proxy
    networks:
      - backend
    ports:
      - 80:80
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
networks:
  frontend:
  backend:
asked Apr 01 '19 by user1104854


People also ask

Can I run 2 Docker compose files?

Use multiple Docker Compose files when you want to change your app for different environments (e.g., dev, staging, and production) or when you want to run admin tasks against a Compose application.
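As a quick illustration (the file names here are just examples, not from the question), Compose merges the files passed with -f in order, with later files overriding earlier ones:

docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d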

Can you run multiple Docker containers at once?

With Docker Compose, you can configure and start multiple containers with a single YAML file. This is really helpful if you are working on a technology stack that combines multiple technologies.

Can we use nginx as a reverse proxy?

The benefits of using Nginx as a reverse proxy include: Clients access all backend resources through a single web address. The reverse proxy can serve static content, which reduces the load on application servers such as Express, Tomcat or WebSphere.
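For context, the kind of configuration a reverse proxy serves looks roughly like the hand-written sketch below; jwilder/nginx-proxy generates its own configuration automatically from the running containers, so this is only an illustration (the upstream name "apache" is assumed from the compose file above):

server {
    listen 80;
    server_name dockertest.com;

    location / {
        # forward requests for this domain to the backend container
        proxy_pass http://apache;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}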


1 Answer

I would suggest extracting the nginx-proxy into a separate docker-compose.yml and creating a repository for the "reverse proxy" configuration, containing the following:

A file with the extra entries to add to /etc/hosts:

127.0.0.1 dockertest.com
127.0.0.1 anothertest.com
127.0.0.1 third-domain.net

And a docker-compose.yml which contains only the reverse proxy:

version: "3.3"
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - 80:80
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro

Next, as you already mentioned, create a docker-compose.yml for each of your repositories that act as web endpoints. You will need to add the VIRTUAL_HOST env var to the services that serve your applications (e.g. Apache).

The nginx-proxy container can run permanently, as it has a small footprint. This way, whenever you start a new container with the VIRTUAL_HOST env var, the nginx-proxy configuration will be updated automatically to include the new local domain. (You will still have to update /etc/hosts with the new entry.)
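For illustration, a possible day-to-day workflow, assuming each application lives in its own directory next to the reverse-proxy repository (the directory names nginx-proxy and service1 are just examples):

# start the reverse proxy once; it keeps running in the background
cd nginx-proxy
docker-compose up -d

# bring up (or restart) an application from its own repository;
# nginx-proxy detects the VIRTUAL_HOST and reconfigures itself
cd ../service1
docker-compose up -d

# one-time step per new domain: add it to /etc/hosts (requires sudo)
echo "127.0.0.1 dockertest.com" | sudo tee -a /etc/hosts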


If you decide to use networks, your web endpoint containers will have to be in the same network as nginx-proxy, so your docker-compose files will have to be modified along these lines:

# nginx-proxy/docker-compose.yml
version: "3.3"
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - 80:80
    networks:
      - reverse-proxy
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
networks:
  reverse-proxy:

# service1/docker-compose.yml
version: "3.3"
services:
  php1:
    ...
    networks:
      - backend1
  apache1:
    ...
    networks:
      - nginx-proxy_reverse-proxy
      - backend1
    environment:
      - VIRTUAL_HOST=dockertest.com
  mysql1:
    ...
    networks:
      - backend1
networks:
  backend1:
  nginx-proxy_reverse-proxy:
    external: true

# service2/docker-compose.yml
version: "3.3"
services:
  php2:
    ...
    networks:
      - backend2
  apache2:
    ...
    networks:
      - nginx-proxy_reverse-proxy
      - backend2
    environment:
      - VIRTUAL_HOST=anothertest.com
  mysql2:
    ...
    networks:
      - backend2
networks:
  backend2:
  nginx-proxy_reverse-proxy:
    external: true

The reverse-proxy network that is created in nginx-proxy/docker-compose.yml is referred to as nginx-proxy_reverse-proxy in the other docker-compose files because, whenever you define a network in a compose file, its final name is prefixed with the project (folder) name: {{folder name}}_{{network name}}.
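If you would rather not depend on the folder-name prefix, Compose file format 3.5 and newer also lets you give the network a fixed name explicitly. A variant sketch (not part of the original answer):

# nginx-proxy/docker-compose.yml (variant with a fixed network name)
version: "3.5"
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - 80:80
    networks:
      - reverse-proxy
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
networks:
  reverse-proxy:
    name: reverse-proxy

The application compose files can then declare the external network simply as reverse-proxy instead of nginx-proxy_reverse-proxy.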


If you want to have a look at a solution that relies on a browser proxy extension instead of /etc/hosts, check out mitm-proxy-nginx-companion.

answered Nov 15 '22 by Artem Titkov