I'm totally new to Docker, so I appreciate your patience.
I'm looking for a way to deploy multiple containers from the same image, but I need to pass a different config file to each.
Right now, my understanding is that once you build an image, that's what gets deployed. The problem for me is that I don't see the point in building multiple images of the same application when only the config differs between the containers.
If this is the norm, then I'll have to deal with it, but if there's another way, please put me out of my misery! :)
Thanks!
When a Docker user runs an image, it becomes one or more container instances. The container's initial state can be whatever the developer wants: it might have an installed and configured web server, or nothing but a bash shell running as root.
Just publish the SSH service on different host ports. E.g., start one container with -p 2221:22 and another with -p 2222:22. Now you have two SSH containers running, one reachable on host port 2221 and the other on host port 2222.
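A quick sketch of what that looks like, assuming a hypothetical image named my-ssh-image that runs sshd:

    # Same image, two containers, different host ports mapped to container port 22
    docker run -d --name ssh1 -p 2221:22 my-ssh-image
    docker run -d --name ssh2 -p 2222:22 my-ssh-image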
Strictly speaking, no. A container is built from an image, not multiple images. However, images are built from image layers. So, you can take an image and extend it by using the image's name/path as the base and add your own commands or layers.
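You can see those layers for any local image with docker history; for example (the image name here is just an example):

    # Lists the layers the image is built from, newest first
    docker history percona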
By default, the container is assigned an IP address for every Docker network it connects to.
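For example, connecting a container to a second network gives it a second IP address (the network and container names here are illustrative):

    docker network create net1
    docker network create net2
    docker run -d --name web --network net1 nginx   # gets an IP on net1
    docker network connect net2 web                 # gets a second IP on net2
    # Print one IP address per connected network
    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' web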
I think looking at examples that are easy to understand will give you the best picture.
What you want to do is perfectly valid: an image should contain everything you need to run the application, but not the configuration itself.
To provide the configuration, you can either:
a) use volumes and mount the file during container start (note that the host path must be absolute, otherwise Docker treats it as a volume name):

    docker run -v $(pwd)/my.ini:/etc/mysql/my.ini percona

(and similarly with docker-compose). Be aware that you can repeat this as often as you like, so you can mount several configs into your container (the runtime version of the image), as sketched below. You create those configs on the host before running the container and need to ship those files along with it, which is the downside of this approach (portability).
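A minimal sketch of mounting several configs at once, assuming both files exist on the host (the application.yml path is just an example):

    # Each -v flag bind-mounts one host file into the container
    docker run \
      -v $(pwd)/my.ini:/etc/mysql/my.ini \
      -v $(pwd)/application.yml:/var/app/config/application.yml \
      percona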
b) Most of the advanced Docker images provide a so-called entrypoint script which consumes ENV variables you pass when starting the container and creates the configuration(s) for you, like https://github.com/docker-library/percona/blob/master/5.7/docker-entrypoint.sh
So when you run this image, you can do:

    docker run -e MYSQL_DATABASE=myapp percona

and this will start Percona and create the database myapp for you. This is all done by the entrypoint script.
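To make the idea concrete, here is a minimal sketch of such an entrypoint, not the actual Percona script; the config path and file format are hypothetical:

    #!/bin/sh
    # Hypothetical entrypoint: render a config file from ENV, then start the real process
    set -e

    # Write a config file based on variables passed via `docker run -e ...`
    mkdir -p /etc/myapp
    cat > /etc/myapp/config.ini <<EOF
    [database]
    name=${MYSQL_DATABASE:-default}
    EOF

    # exec replaces the shell so the app becomes PID 1 and receives signals
    exec "$@"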
Of course, you can do whatever you like with this. E.g., this configures a general Portus image: https://github.com/EugenMayer/docker-rancher-extra-catalogs/blob/master/templates/registry-slim/11/docker-compose.yml which has this entrypoint: https://github.com/EugenMayer/docker-image-portus/blob/master/build/startup.sh
So you see, the entrypoint strategy is very common and very powerful, and I would suggest going this route whenever you can.
Maybe for "completeness", the image-derive strategy, so you have you base image called "myapp" and for the installation X you create a new image
from myapp COPY my.ini /etc/mysql/my.ini COPY application.yml /var/app/config/application.yml
And call this image myapp:x - the obvious issue with this is, you end up having a lot of images, on the other side, compared to a) its much more portable.
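For illustration, building and running the derived image might look like this, assuming the Dockerfile above sits in the current directory:

    # Build the per-installation image, then run it like any other
    docker build -t myapp:x .
    docker run myapp:x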
Hope that helps
Just run the same image as many times as needed. New containers will be created, and they can then be started and stopped, each one keeping its own configuration. For your convenience, it's better to give each of your containers a name with "--name".
For instance:

    docker run --name MyContainer1 <same image id>
    docker run --name MyContainer2 <same image id>
    docker run --name MyContainer3 <same image id>
That's it.
    $ docker ps
    CONTAINER ID   IMAGE          CREATED        STATUS              NAMES
    a7e789711e62   67759a80360c   12 hours ago   Up 2 minutes        MyContainer1
    87ae9c5c3f84   67759a80360c   12 hours ago   Up About a minute   MyContainer2
    c1524520d864   67759a80360c   12 hours ago   Up About a minute   MyContainer3
After that, your containers exist permanently, and you can start and stop them like VMs:

    docker start MyContainer1
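Stopping works the same way:

    # The container keeps its filesystem and config; start it again later
    docker stop MyContainer1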
Each container runs with the same read-only (RO) image but with a read-write (RW) container-specific filesystem layer. The result is that each container can have its own files, distinct from those of every other container.
You can pass in configuration on the CLI, as an environment variable, or as a unique volume mount. It's a very standard use case for Docker.
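For example, the three options might look like this; the image name, flag, variable, and paths are all hypothetical and depend on what your application expects:

    # 1) As a CLI argument appended after the image name
    docker run myapp --config=prod.ini

    # 2) As an environment variable
    docker run -e APP_MODE=prod myapp

    # 3) As a per-container volume mount
    docker run -v $(pwd)/prod.ini:/etc/myapp/config.ini myapp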