 

Docker for local development with multiple environment

I'm looking to use Docker to emulate a minimal version of our current cloud environment. We have about 10 services (each with its own MySQL 8 database, Redis, PHP-FPM and nginx). Currently each repository has its own docker-compose.yml, but the services can't talk to each other, so if I want to test a feature where one service needs to call another, I'm out of luck.

My first approach was to create a Dockerfile per service (and run them all together with a new docker-compose.yml), based on Debian, but I didn't get very far. I was able to install nginx, PHP-FPM and the dependencies, but when I got to the databases things got weird, and I had a feeling this isn't the right way of doing it.

Is there a way to have one docker-compose.yml "include" each service's docker-compose.yml? Is there a better approach to this? Or should I just stick with the Dockerfiles and run them all on the same network using docker-compose?

Pedro Alca asked Oct 31 '25 08:10
1 Answer

TL;DR

You can configure docker-compose to use external networks so that services can communicate across projects, or (depending on your project) use the -f command-line option / the COMPOSE_FILE environment variable to specify the paths of the compose files and bring all of the services up inside the same network.


Using external networks

Given the below tree with project a and b:

.
├── a
│   └── docker-compose.yml
└── b
    └── docker-compose.yml

Project a's docker-compose sets a name for the default network:

version: '3.7'
services:
  nginx:
    image: 'nginx'
    container_name: 'nginx_a'
    expose:
    - '80'
networks:
  default:
    name: 'net_a'

And project b is configured to use its own named network net_b plus the pre-existing net_a external network:

version: '3.7'
services:
  nginx:
    image: 'nginx'
    container_name: 'nginx_b'
    expose:
    - '80'
    networks:
    - 'net_a'
    - 'default'
networks:
  default:
    name: 'net_b'
  net_a:
    external: true
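Startup order matters here: project a creates net_a, so it has to be up before project b can join that network as an external one. A sketch of the commands, using the container names from the compose files above (docker-compose v1 CLI syntax; with Compose v2 the command is `docker compose`):

```shell
# Start project a first: it creates the network net_a.
(cd a && docker-compose up -d)
# Then project b, whose nginx_b joins both net_b and the pre-existing net_a.
(cd b && docker-compose up -d)

# Verify connectivity from a throwaway container attached to net_a
# (curlimages/curl is used because the stock nginx image may not ship curl):
docker run --rm --network net_a curlimages/curl -sI http://nginx_a
```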

... exec'ing into the nginx_b container we can reach the nginx_a container:

[screenshot: external network service communication]

Note: this is a minimalist example. The external network must exist before you try to bring up an environment that is configured to use it. Rather than modifying the existing projects' docker-compose.yml files, I'd suggest using overrides.
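For an existing project whose compose file you'd rather not touch, an override file next to it can attach the service to the external network instead. A sketch, assuming the project's service is named nginx (docker-compose merges docker-compose.override.yml with the base file automatically):

```yaml
# b/docker-compose.override.yml -- merged with b/docker-compose.yml on `up`
version: '3.7'
services:
  nginx:
    networks:
    # listing 'default' explicitly keeps the service on the project's own
    # network; once a service declares networks, only the listed ones apply
    - 'default'
    - 'net_a'
networks:
  net_a:
    external: true
```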

The configuration gives the nginx_b container a foot inside both networks:

[screenshot: compose service IPs]

Using the -f command-line option

Using the -f command-line option acts as an override mechanism. It will not work with the above compose files as-is, because both define a service named nginx (docker-compose would merge the second nginx service definition into the first).

Using the modified docker-compose.yml for project a:

version: '3.7'
services:
  nginx_a:
    container_name: 'nginx_a'
    image: 'nginx'
    expose:
    - '80'

... and for project b:

version: '3.7'
services:
  nginx_b:
    image: 'nginx'
    container_name: 'nginx_b'
    expose:
    - '80'

... we can bring both of the environments up inside the same network: docker-compose -f a/docker-compose.yml -f b/docker-compose.yml up -d:

[screenshot: -f command-line option]
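The same thing can be expressed with the COMPOSE_FILE environment variable mentioned in the TL;DR, which takes colon-separated paths (the separator is configurable via COMPOSE_PATH_SEPARATOR, and defaults to a semicolon on Windows):

```shell
# Colon-separated paths; no -f flags needed on each invocation.
export COMPOSE_FILE=a/docker-compose.yml:b/docker-compose.yml
docker-compose up -d
```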

masseyb answered Nov 03 '25 00:11