So I have the following problem: I'm using docker-compose to build and start two containers. I did this multiple times with different docker-compose.yml files (image and container names differ) and it worked fine, with three such setups running in parallel. Within each setup, one container exposes a specific port and the other runs an application that connects to a specific endpoint, so the containers are largely similar, but not identical.
But now I created three additional compose configurations and tried to run them in parallel, like I'm already doing with the other three. The problem is that with docker-compose, the first of the new setups is built and started fine, but bringing up the second one stops the freshly created containers and recreates them.
I tried docker-compose build --no-cache followed by docker-compose up -d, but I still ended up with the same problem, even though the resulting images were different (different IDs). Before that I had tried just docker-compose up -d --build for the first and second (new) setup, and the containers would be recreated as mentioned; looking at the images, they would get the same ID (but different names).
So I thought Docker had a caching problem. That's why I ended up deleting all my containers and images and starting from scratch with the --no-cache option as mentioned above. That didn't work either.
Here are two docker-compose.yml files that work:
version: '2'
services:
  ruby:
    security_opt:
      - seccomp:unconfined
    build:
      context: .
      dockerfile: Dockerfile_ruby
      args:
        - http_proxy=http://someIP:3128
        - https_proxy=http://someIP:3128
    image: ruby_foo_ge01
    container_name: ruby_container_ge01
    volumes:
      - /home/foo/log/GE01/:/usr/src/app/log/
  ssl:
    security_opt:
      - seccomp:unconfined
    build:
      context: .
      dockerfile: Dockerfile_ssl
      args:
        - http_proxy=http://someIP:3128
        - https_proxy=http://someIP:3128
    image: ssl_ge01
    container_name: ssl_container_ge01
    volumes:
      - /home/foo/log/GE01/nginx/:/var/log/nginx/
    ports:
      - "3003:443"
    links:
      - ruby
and the other one:
version: '2'
services:
  ruby:
    security_opt:
      - seccomp:unconfined
    build:
      context: .
      dockerfile: Dockerfile_ruby
      args:
        - http_proxy=http://someIP:3128
        - https_proxy=http://someIP:3128
    image: ruby_foo
    container_name: ruby_container
    volumes:
      - /home/foo/log/:/usr/src/app/log/
  ssl:
    security_opt:
      - seccomp:unconfined
    build:
      context: .
      dockerfile: Dockerfile_ssl
      args:
        - http_proxy=http://someIP:3128
        - https_proxy=http://someIP:3128
    image: ssl_gt01
    container_name: ssl_container
    volumes:
      - /home/foo/log/nginx/:/var/log/nginx/
    ports:
      - "3001:443"
    links:
      - ruby
Running these two, plus one other quite similar setup, with docker-compose up -d --build is no problem.
And here are the two .yml files of the containers that fail:
version: '2'
services:
  ruby:
    security_opt:
      - seccomp:unconfined
    build:
      context: .
      dockerfile: Dockerfile_ruby
      args:
        - http_proxy=http://someIP:3128
        - https_proxy=http://someIP:3128
    image: ruby_foo_websock_gt01
    container_name: ruby_containerWSgt01
    volumes:
      - /home/foo/websockGT01/log/:/usr/src/app/log/
  ssl:
    security_opt:
      - seccomp:unconfined
    build:
      context: .
      dockerfile: Dockerfile_ssl
      args:
        - http_proxy=http://someIP:3128
        - https_proxy=http://someIP:3128
    image: ssl_websock_gt01
    container_name: ssl_containerWSgt01
    volumes:
      - /home/foo/websockGT01/log/nginx/:/var/log/nginx/
    ports:
      - "3010:443"
    links:
      - ruby
And the second one:
version: '2'
services:
  ruby:
    security_opt:
      - seccomp:unconfined
    build:
      context: .
      dockerfile: Dockerfile_ruby
      args:
        - http_proxy=http://someIP:3128
        - https_proxy=http://someIP:3128
    image: ruby_foo_websock_ge01
    container_name: ruby_containerWSge01
    volumes:
      - /home/foo/websockGE01/log/:/usr/src/app/log/
  ssl:
    security_opt:
      - seccomp:unconfined
    build:
      context: .
      dockerfile: Dockerfile_ssl
      args:
        - http_proxy=http://someIP:3128
        - https_proxy=http://someIP:3128
    image: ssl_websock_ge01
    container_name: ssl_containerWSge01
    volumes:
      - /home/foo/websockGE01/log/nginx/:/var/log/nginx/
    ports:
      - "3030:443"
    links:
      - ruby
As you can see, there is no big difference between the working .yml files and the failing ones (or did I miss something?). The image and container names change, as do the exposed port and the volume path. Each file has its own working directory, where the application code for that instance is also stored. The Dockerfiles used are the same in each working directory.
TL;DR: Why is docker-compose not starting a new container, but instead stopping a running one and recreating it? Is there a maximum number of running containers? Have I done something wrong in my .yml files? As mentioned at the beginning, --no-cache does not help.
Kind regards and sorry for that wall of text
Short(ish) answer
It looks as though you're using the same project name some of the time when you run docker-compose up; since you use the same service names (i.e. ruby and ssl) between docker-compose.yml files, Docker Compose treats the different configurations as modifications of the same service, rather than considering them to be completely separate services.
My guess is that the name of the parent directory of some of the docker-compose.yml files is the same, so if you want to run these containers at the same time you have a few options:

1. Specify a different project name each time you run docker-compose up, e.g. docker-compose -p project1 up -d, docker-compose -p project2 up -d
2. Rename the services in the different docker-compose.yml files

Longer answer
From a quick test it appears that Docker Compose uses the project name and service name to identify a specific service. The project name can be set with the -p flag when running Docker Compose commands; otherwise it uses the value of the COMPOSE_PROJECT_NAME environment variable if set, and if neither of those is specified it defaults to the name of the parent directory of the docker-compose.yml file (see https://docs.docker.com/compose/reference/overview).
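As a concrete sketch of that resolution order (the compose file path below is an assumption based on the question's volume mounts, and ws_gt01 is an invented override value):

```shell
#!/bin/sh
# Resolution order sketch: -p flag > COMPOSE_PROJECT_NAME > parent directory name.
compose_file="/home/foo/websockGT01/docker-compose.yml"   # hypothetical location

# With no -p flag and no COMPOSE_PROJECT_NAME, Compose falls back to
# the name of the directory containing the docker-compose.yml:
default_project=$(basename "$(dirname "$compose_file")")
echo "$default_project"    # -> websockGT01

# Setting COMPOSE_PROJECT_NAME overrides the directory-based default:
COMPOSE_PROJECT_NAME=ws_gt01
project=${COMPOSE_PROJECT_NAME:-$default_project}
echo "$project"            # -> ws_gt01
```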
So, given a directory structure such as:
.
├── a
│   └── z
│       └── docker-compose.yml
├── b
│   └── z
│       └── docker-compose.yml
└── c
    └── y
        └── docker-compose.yml
Running docker-compose up -d from the a/z directory (or using docker-compose -f a/z/docker-compose.yml up -d) will start the ruby and ssl services in project z, named using the container names specified in the docker-compose.yml.
If you then run docker-compose up -d from the b/z directory, Docker Compose will see you trying to bring up the ruby and ssl services in project z again, but this time with some differences, e.g. to names and ports. It will treat this as if you had modified the original docker-compose.yml, and restart the containers with the new configuration. If you were to now run docker-compose up -d from the c/y directory, then you would get two new containers running in parallel with the first two: the ruby and ssl services running in project y.
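The walk-through above boils down to string handling on the directory paths; a quick sketch (pure shell, no Docker needed) shows why the first two projects collide:

```shell
#!/bin/sh
# Default project names for the three compose files in the tree above:
p1=$(basename "a/z")   # project for ./a/z/docker-compose.yml
p2=$(basename "b/z")   # project for ./b/z/docker-compose.yml
p3=$(basename "c/y")   # project for ./c/y/docker-compose.yml

# a/z and b/z both resolve to project "z", so the second `up` is treated
# as a reconfiguration of the first; c/y gets its own project "y".
echo "$p1 $p2 $p3"     # -> z z y
```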
So you'll need to ensure that the combination of project and service name differs across the different sets of containers you'd like to run, either by changing the service names or by setting the project differently each time.
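For the failing setups in the question, the project-name route could look like the following dry run. The directories are an assumption (that each compose file lives alongside the log directory it mounts); swap the echo for a direct call once the commands look right:

```shell
#!/bin/sh
# Dry run: derive a unique project name per compose file via -p, so the
# identically named ruby/ssl services no longer collide.
for dir in /home/foo/websockGT01 /home/foo/websockGE01; do
    project=$(basename "$dir")
    cmd="docker-compose -p $project -f $dir/docker-compose.yml up -d"
    echo "$cmd"    # printed only; run it to actually start the containers
done
```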