I have a Dockerfile with multiple targets. For example:
FROM x as frontend
...
FROM y as backend
...
FROM z as runtime
...
COPY --from=frontend ...
COPY --from=backend ...
In order to build and tag the final image, I use:
docker build -t my-project .
To build and tag intermediary targets, I provide the --target argument:
docker build -t my-project-backend --target backend .
But is it possible to build a final image and tag all the intermediary images as well? In other words, the same as:
docker build -t my-project-frontend --target frontend .
docker build -t my-project-backend --target backend .
docker build -t my-project .
But with a single command?
I think a bit of explanation is required. If you use buildkit (export DOCKER_BUILDKIT=1), then all independent targets are built in parallel, so it's simply faster than building them one by one.
And I need to tag every target to push them to a docker registry, as well as the final image.
Currently I'm building my images in CI without buildkit and I'm trying to speed up the process a bit.
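For reference, the push step after tagging currently looks roughly like this (assuming the tags above already include the registry prefix where needed):
docker push my-project-frontend
docker push my-project-backend
docker push my-project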
I did some searching but it seems that the docker CLI currently just does not offer any straightforward way to do this. The closest thing is the idea I proposed in my comment: build the main image and tag all intermediate images afterwards.
Take this Dockerfile as an example:
FROM alpine AS frontend
RUN sleep 15 && touch /frontend
FROM alpine AS backend
RUN sleep 15 && touch /backend
FROM alpine AS runtime
COPY --from=frontend /frontend /frontend
COPY --from=backend /backend /backend
(the sleeps are only there to make the speedup by caching obvious)
Building this with:
export DOCKER_BUILDKIT=1 # enable buildkit for parallel builds
docker build -t my-project .
docker build -t my-project-backend --target backend .
docker build -t my-project-frontend --target frontend .
will:
- build runtime by first building all required intermediate images, e.g. frontend and backend, and tag only the main image with my-project
- build backend tagged as my-project-backend, but using the cache from the previous build
- build frontend tagged as my-project-frontend, again using the cache from the previous build
Every image here will only be built once - but ultimately this is the very same thing you already did as stated in your question, just in a different order.
If you really want to be able to do this in a single command you could use docker-compose
to build the "multiple images":
version: "3.8"
services:
my-project:
image: my-project
build: .
backend:
image: my-project-backend
build:
context: .
target: backend
frontend:
image: my-project-frontend
build:
context: .
target: frontend
export DOCKER_BUILDKIT=1 # enable buildkit for parallel builds
export COMPOSE_DOCKER_CLI_BUILD=1 # use docker cli for building
docker-compose build
Here docker-compose will basically run the same docker build commands as above for you.
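If you also need to push the images afterwards, docker-compose can do that too - a small sketch, assuming the image: names above already contain your registry prefix:
export DOCKER_BUILDKIT=1
export COMPOSE_DOCKER_CLI_BUILD=1
docker-compose build   # builds my-project, my-project-backend and my-project-frontend in one go
docker-compose push    # pushes every service that has an image: name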
In both cases, though, you should be aware that although the cached layers massively speed up the build, there is still a new build taking place, which will each time ADD files to the image and only use the cache if the contents are the same again - which for large files or a slow network will be a noticeable slowdown.
Another workaround I found in this forum thread was to add a LABEL to the image and use docker image ls --filter to get the image IDs after the build.
But testing this, it seems docker image ls won't show intermediate images when using buildkit. Also, this approach would require more commands / a dedicated script - which would again be more work than your current approach.
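For completeness, a rough sketch of that LABEL/filter workaround (the stage label name is just illustrative, and as noted it does not work reliably with buildkit):
docker build --target backend --label stage=backend .
ID=$(docker image ls --filter "label=stage=backend" -q | head -n 1)   # grab the newest matching image ID
docker tag "$ID" my-project-backend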
The closest you'll get to this right now is with Docker's buildx bake command. It allows you to define an HCL file with syntax like:
group "default" {
targets = ["app", "frontend", "backend"]
}
target "app" {
dockerfile = "Dockerfile"
tags = ["docker.io/username/app"]
}
target "frontend" {
dockerfile = "Dockerfile"
target = "frontend"
tags = ["docker.io/username/frontend"]
}
target "backend" {
dockerfile = "Dockerfile"
target = "backend"
tags = ["docker.io/username/backend"]
}
And then you would build with docker buildx bake -f bake.hcl
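If you also want to push all of these to a registry in the same step, bake supports that as well - assuming the file above is saved as bake.hcl and the tags point at your registry:
docker buildx bake -f bake.hcl --push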
That said, what you are doing is almost certainly a mistake. A multi-stage build is designed to separate the build environment from the runtime environment, not to create multiple distinct images. In other words, you're using a hammer when you need a screwdriver: yes, it will work, but the result is suboptimal.
The preferred and much simpler solution is to create a separate Dockerfile for each image you want to build. If your images have a common base, then consider moving that out to its own image, and referencing that in your FROM step.
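As a rough sketch of that layout (all names here are placeholders): the shared base is built and pushed once, and each per-image Dockerfile then starts FROM it instead of repeating the common steps:
# Dockerfile.base holds the shared setup, published as username/common-base
docker build -t username/common-base -f Dockerfile.base .
docker push username/common-base
# Dockerfile.app, Dockerfile.frontend and Dockerfile.backend then begin with: FROM username/common-base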
To build multiple images in Docker as a developer, it's common to use a docker-compose.yml file that defines all three images, and then docker-compose up --build will start the entire stack after building each of the images, with a single command. E.g. the compose file may look like:
version: "2"
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile.app
    image: username/app
    # ...
  frontend:
    build:
      context: .
      dockerfile: Dockerfile.frontend
    image: username/frontend
    # ...
  backend:
    build:
      context: .
      dockerfile: Dockerfile.backend
    image: username/backend
    # ...
And for deploying to production, this would be separate CI/CD pipelines for each image to perform the needed unit tests, build, and then fan-in to a deployment step that runs the entire stack with the specified releases of each image.