Let's say I'm using the Node.js 10.8.0 `node:10.8.0-jessie` Docker image as the base image in my application's Dockerfile. The application has been running stably in production and has not been updated for a while (several months).

The `node:10.8.0-jessie` image is based on the `buildpack-deps:jessie` image, which itself is based on `buildpack-deps:jessie-scm`. That in turn is based on `buildpack-deps:jessie-curl`, whose base image is `debian:jessie`.
System and security updates for Debian Jessie are released regularly. In a classic hosted environment I would update my host with `sudo apt-get update && sudo apt-get upgrade` and I'd be fine.
But how do I ensure that my Node.js application running in the container gets the latest Debian Jessie updates and patches while staying on `node:10.8.0-jessie`?
Running `sudo apt-get update && sudo apt-get upgrade` in my application's Dockerfile in CI, regularly building a new image for my application, and re-deploying the container doesn't seem like the correct way.
Since it all starts with the `debian:jessie` image, I would expect it to be updated regularly, and all dependent images along with it. Then I would rebuild my application image by pulling the `node:10.8.0-jessie` image again (building with `--no-cache`) and re-deploy it.
My questions are: is this assumption correct? Is there any official Docker documentation about this workflow, which seems essential to me?
How do I get notified about `debian:jessie` and, in turn, `node:10.8.0-jessie` image patch releases?
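One approach to the notification question is simply to poll: pull the pinned tag on a schedule and check whether its image ID changed. A minimal cron-friendly sketch, assuming a local Docker daemon (the echo is where you would hook in your rebuild trigger):

```shell
#!/bin/sh
# Detect upstream rebuilds of a pinned tag by comparing image IDs
# before and after a pull.
IMAGE=node:10.8.0-jessie

before=$(docker image inspect --format '{{.Id}}' "$IMAGE" 2>/dev/null)
docker pull -q "$IMAGE" >/dev/null
after=$(docker image inspect --format '{{.Id}}' "$IMAGE")

if [ "$before" != "$after" ]; then
  echo "$IMAGE was updated upstream; trigger a rebuild"
fi
```

There are also purpose-built tools in this space (for example Diun and watchtower) that watch registries and notify or redeploy on image updates, which may be preferable to a hand-rolled script.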
The preferred workflow is to pull an updated base image, or rebuild your base image if it's locally built, and then rebuild your child images. The only package commands you run should be install, not upgrade, if at all possible. To fix a specific version of a package, add that version dependency to your install command.
This is preferred over upgrading packages in an existing image: you rebuild from a known state, and upgraded packages don't sit in an extra layer on top of the old versions, bloating the image.
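A minimal sketch of that install-and-pin pattern in a Dockerfile (the package name and version string are placeholders, not real pins):

```dockerfile
FROM node:10.8.0-jessie

# Install only what the app needs, pinned to known versions.
# No apt-get upgrade: newer versions come from rebuilding on a
# newer base image, not from mutating this one.
RUN apt-get update \
 && apt-get install -y --no-install-recommends \
      somepackage=1.2.3-1 \
 && rm -rf /var/lib/apt/lists/*
```

Cleaning up `/var/lib/apt/lists` in the same `RUN` step keeps the apt metadata out of the image layer.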
The only scenario where I might perform an upgrade is if an upstream base image is not being maintained. Preferably I'd find a different base image, or build it locally. But when neither of those is possible, I may build a local base image that is a child of the unmaintained external base image, with the first step of upgrading packages. In Dockerfiles, this would look like:
```dockerfile
# Recreate the unmaintained remote image locally from an exported
# filesystem tarball.
FROM scratch as remote-unmaintained
ADD unmaintained.tgz /

# Local base: the one place packages get upgraded, since upstream
# no longer ships updates. upgrade-cmd is a placeholder for your
# distro's upgrade command.
FROM remote-unmaintained as local-base
RUN upgrade-cmd

# Application image built on the locally maintained base.
FROM local-base as app
COPY app /
CMD /app
```