 

Adding files to standard images using docker-compose

I'm unsure whether something obvious escapes me or whether it's just not possible, but I'm trying to compose an entire application stack with images from Docker Hub.

One of them is mysql, and the image supports adding custom configuration files through volumes and running .sql files from a mounted directory.

But I have these files on the machine where I'm running docker-compose, not on the Docker host. Is there no way to specify files from the local machine to copy into the container before it runs its entrypoint/cmd? Do I really have to create local images of everything just for this case?

asked Sep 08 '16 by Andreas Wederbrand



2 Answers

Option A: Include the files inside your image. This is less than ideal, since you are mixing configuration files into your image (which should really contain only your binaries, not your config), but it satisfies the requirement to use only docker-compose to send the files.

This option is achieved by using docker-compose to build your image, and that build will send over any files from the build directory to the remote docker engine. Your docker-compose.yml would look like:

```yaml
version: '2'

services:
  my-db-app:
    build: db/.
    image: custom-db
```

And db/Dockerfile would look like:

```dockerfile
FROM mysql:latest
COPY ./sql /sql
```

The entrypoint/cmd would remain unchanged. You would need to run docker-compose up --build if the image already exists and you have changed the .sql files.
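As a side note (an assumption about the goal, not something stated in the question): the official mysql image executes any .sql, .sql.gz, or .sh files it finds in /docker-entrypoint-initdb.d when the database is first initialized, so a variant of the Dockerfile above that copies the scripts there would make them run automatically without touching the entrypoint:

```dockerfile
# Hypothetical variant of the Dockerfile above: the official mysql image
# runs scripts from this directory on first initialization only.
FROM mysql:latest
COPY ./sql /docker-entrypoint-initdb.d
```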


Option B: Use a volume to store your data. This cannot be done directly inside of docker-compose. However, it's the preferred way to include files from outside the image in the container. You can populate the volume across the network by using the docker CLI and input redirection, along with a command like tar to pack and unpack the files being sent over stdin:

```shell
tar -cC sql . | docker run --rm -i -v sql-files:/sql \
  busybox /bin/sh -c "tar -xC /sql"
```

(Note the -i rather than -it: a TTY cannot be allocated when stdin comes from a pipe.)

Run that via a script and then have that same script bounce the db container to reload that config.
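The pack/unpack pipeline can be rehearsed without docker at all; in this sketch a second local tar stands in for the busybox container side (the sql directory contents and the dest directory are made up for the demo):

```shell
# Dry run of the tar pipeline: pack ./sql on one side of the pipe,
# unpack into ./dest on the other (a stand-in for the container's /sql).
mkdir -p sql dest
echo "CREATE TABLE t (id INT);" > sql/file_1.sql
tar -cC sql . | tar -xC dest
ls dest
```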


Option C: Use some kind of network-attached filesystem. If you can configure NFS on the host where you are running your docker CLI, you can connect to those NFS shares from the remote docker node using one of the options below:

```shell
# create a reusable volume
$ docker volume create --driver local \
    --opt type=nfs \
    --opt o=addr=192.168.1.1,rw \
    --opt device=:/path/to/dir \
    foo

# or from the docker run command
$ docker run -it --rm \
  --mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=nfs,volume-opt=o=addr=192.168.1.1,volume-opt=device=:/host/path \
  foo

# or to create a service
$ docker service create \
  --mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=nfs,volume-opt=o=addr=192.168.1.1,volume-opt=device=:/host/path \
  foo
```

Option D: With swarm mode, you can include files as configs. This allows configuration files that would normally need to be pushed to every node in the swarm to be sent on demand to the node where your service is running. It uses a docker-compose.yml file for the definition, but swarm mode isn't using docker-compose itself, so this may not fit your specific requirements. You can run a single-node swarm mode cluster, so this option is available even if you only have one node. It does require that each of your .sql files be added as a separate config. The docker-compose.yml would look like:

```yaml
version: '3.4'

configs:
  sql_file_1:
    file: ./file_1.sql

services:
  my-db-app:
    image: my-db-app:latest
    configs:
      - source: sql_file_1
        target: /sql/file_1.sql
        mode: 0444
```

Then instead of a docker-compose up, you'd run a docker stack deploy -c docker-compose.yml my-db-stack.

answered Oct 09 '22 by BMitch


If you cannot use volumes (e.g., you want a stateless docker-compose.yml and are deploying to a remote machine), you can have the config file written by the container's command.

Example for an nginx config with the official image:

```yaml
version: "3.7"

services:
  nginx:
    image: nginx:alpine
    ports:
      - 80:80
    environment:
      NGINX_CONFIG: |
        server {
          server_name "~^www\.(.*)$$" ;
          return 301 $$scheme://$$1$$request_uri ;
        }
        server {
          server_name example.com
          ...
        }
    command:
      /bin/sh -c "echo \"$$NGINX_CONFIG\" > /etc/nginx/conf.d/redir.conf; nginx -g \"daemon off;\""
```
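The write-config-then-exec pattern in that command can be tried locally without docker. Compose rewrites $$ to a single $ before the shell ever runs, so by the time /bin/sh executes, the variable is expanded exactly once and the nginx placeholders inside it are not expanded again. A minimal stand-in (the file name and the variable contents here are invented for the demo, and $host replaces the regex capture from the compose file):

```shell
# Mimics the compose command: write the config held in an env var to a
# file, then inspect it. The nginx variables ($scheme etc.) stay literal
# because echo does not re-expand the contents of $NGINX_CONFIG.
export NGINX_CONFIG='server { return 301 $scheme://$host$request_uri ; }'
/bin/sh -c 'echo "$NGINX_CONFIG" > redir.conf; cat redir.conf'
```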

The environment variable could also be saved in a .env file; alternatively, you can use Compose's extend feature or load it from the shell environment (after fetching it from anywhere else):

https://docs.docker.com/compose/compose-file/#env_file
https://docs.docker.com/compose/compose-file/#variable-substitution

To get the original command of a container (use .Config.Entrypoint instead if you want the entrypoint):

```shell
docker container inspect [container] | jq --raw-output .[0].Config.Cmd
```
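The jq path can be sanity-checked against a minimal inspect-shaped document, a hand-written stand-in for real docker container inspect output (the Cmd and Entrypoint values are invented for the demo):

```shell
# Fake one-element inspect array with a Config.Cmd, then extract it the
# same way as above; prints the first element of Cmd.
echo '[{"Config":{"Cmd":["mysqld"],"Entrypoint":["docker-entrypoint.sh"]}}]' \
  | jq --raw-output '.[0].Config.Cmd[0]'
```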

To investigate which file to modify, this usually works:

```shell
docker exec --interactive --tty [container] sh
```
answered Oct 09 '22 by Bobík