
Docker Compose: when to use image over build

The Docker Compose documentation and its example use cases were great for figuring out the various ways you can split different working environments (development, production, etc.).

web:
  image: example/my_web_app:latest
  links:
    - db
    - cache

db:
  image: postgres:latest

cache:
  image: redis:latest

However, it wasn't very clear to me when to use image instead of build.

This is the only description that goes with their only image-based example (example/my_web_app:latest):

Another common use case is running adhoc or administrative tasks against one or more services in a Compose app. This example demonstrates running a database backup.

The rest of their example cases use build: .
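
For reference, the build-based examples in the same documentation pair one locally built service with image-based dependencies, roughly like this (a sketch in the same v1 Compose format as above, not a verbatim copy of the docs):

web:
  build: .
  links:
    - db
    - cache

db:
  image: postgres:latest

cache:
  image: redis:latest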

I understand that using an image instead of building one gives you better performance when bringing a container up for the first time, since the image is already a prepared build. However, I can foresee several issues in doing so:

  • [development] Developers might need to change the Dockerfile configuration (and they'll need to test it somehow before pushing any changes).
  • [development] Source code files will change (but I guess you can fix that easily by sharing volumes).
  • [production] You might not always want to be at the :latest version (or do you?).
  • [any] By using images (and the :latest tag), you don't have control over the file versions you are using. Instead, every time you run docker-compose up it will update to the latest working version.

Some of the previous points might not be completely true. Feel free to dismantle them.

asked Feb 10 '17 by zurfyx

2 Answers

Typically you would want to use build . in the following scenarios:

  • development
  • automated testing

This is normally done when you are developing or testing and the code is not production ready, e.g. tests fail, the code does not compile, or it has other errors.

Normally you would only create an image when it is ready to ship for deployment. At that point you would create the image, version it via its tag, and push it to your own DTR (Docker Trusted Registry) or Docker Hub.
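
A minimal sketch of that workflow with the plain Docker CLI (the version number and registry host are placeholders):

# build the image from the project's Dockerfile and give it a version tag
docker build -t example/my_web_app:1.4.0 .

# optionally retag it for a private registry (DTR) instead of Docker Hub
docker tag example/my_web_app:1.4.0 registry.example.com/example/my_web_app:1.4.0

# push the tagged image so other environments can pull it
docker push registry.example.com/example/my_web_app:1.4.0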

When working with versions in Docker Compose you are not bound to :latest; you can specify any version you want to ensure the proper version is running in any given environment. For example, in production you may want to create a compose file called docker-production.yaml that is configured like so:

web:
  image: "example/my_web_app:${TAG}"
  links:
    - db
    - cache
db:
  image: postgres:9.5.2
cache:
  image: redis:3.0.7

Where ${TAG} is an environment variable that is substituted in at runtime, e.g. docker-compose -f docker-production.yaml up -d. You can read more about variable substitution here.
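
For example, one way to supply ${TAG} at deploy time (the version number here is only an illustration; Compose can also pick up variables from a .env file next to the compose file):

# set the tag in the shell environment for a single run
TAG=1.4.0 docker-compose -f docker-production.yaml up -d

# or keep it in a .env file, which Compose reads automatically
echo "TAG=1.4.0" > .env
docker-compose -f docker-production.yaml up -d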

The power of Compose is that you can create compose files with variable substitution that are launched automatically by your build system, so you are no longer limited to :latest or even a hardcoded version.

Note:

  1. How teams run their build, ship, and deploy processes varies greatly as they figure out what works best for them and their product, so the above build . scenarios may not be accurate for all cases, but they are accurate for how my company uses Compose.
  2. This assumes build . in a docker-compose context and not a docker build context.
answered by GHETTO.CHiLD


As @GHETTO.CHiLD said, it depends on your needs and your workflow. We don't actually perform manual builds. I'm going to explain how we manage this and why. It fits perfectly into our flow, but it might not be suitable in other scenarios.

  • We don't build images manually. The CI does it (GitLab CI in our case)
  • We have 2 types of images, development/testing and production.
  • There is a docker-compose.yml for development that eases the management of the environment. Developers just run docker-compose up and it pulls the image from the registry and mounts the project directory inside the container.

    version: "2"
    
    services:
      web:
        build: 
          context: ../../
          dockerfile: dockerfiles/dev/Dockerfile
        image: registry.my.domain/my_image:dev
        volumes:
          - ../../:/opt/app
        working_dir: /opt/app
    
  • If they make changes to the Dockerfile (for example, they need a new library), they can build the image on their machine (docker-compose build), but the image is not pushed to the registry.

  • When they're happy, they push the new code (which includes the Dockerfile) and the CI builds the new dev image and runs the tests (see the CI sketch after this list).
  • The CI builds the images on the same host every time, so it can take advantage of caching. If the Dockerfile has no changes, the build takes less than a second.
  • When a new tag is created, the CI builds a production image with the $TAG as the image tag.
  • For production, we use an orchestrator, not a Compose YAML. We don't want to store sensitive data that might be in docker-compose.yml in the project repository. To upgrade, we just pull the new tag from the registry (we could do this automatically, but I'm not confident enough yet to deploy to production without human testing first :D).
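
For illustration, here is a rough sketch of what the CI side of this flow could look like in a .gitlab-ci.yml (the job names, production Dockerfile path, test entrypoint, and registry login are assumptions, not our exact setup):

stages:
  - build
  - test
  - release

# build (or rebuild from the layer cache) the dev image on every push
build-dev:
  stage: build
  script:
    - docker build -f dockerfiles/dev/Dockerfile -t registry.my.domain/my_image:dev .
    - docker push registry.my.domain/my_image:dev

# run the test suite inside the freshly built dev image
test:
  stage: test
  script:
    - docker run --rm registry.my.domain/my_image:dev ./run_tests.sh  # hypothetical test entrypoint

# when a git tag is created, build and push a production image tagged with it
release:
  stage: release
  only:
    - tags
  script:
    - docker build -f dockerfiles/prod/Dockerfile -t registry.my.domain/my_image:$CI_COMMIT_TAG .
    - docker push registry.my.domain/my_image:$CI_COMMIT_TAG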

Of course you can build images every time for development, but some projects can take a long time to build. For example, a Python 3 + pandas image can take 25 minutes to build, so it's frustrating if you have to switch between projects often. On the other hand, pulling an image takes less than a minute.

We use this approach because GitLab gives us the CI, the registry, and the runners to build images and run tests. You can do it without GitLab, but you will need to integrate all the components on your own. The flow isn't perfect and has some drawbacks, but they are minor in our scenario.

answered by charli