I have a new Docker image and I would ideally like to perform a smooth upgrade to it, then either forget the previously deployed version or keep only the previous version rather than every version ever deployed.
Kubernetes Pods will pull the latest image when they are restarted, provided the image is tagged :latest or the container has imagePullPolicy: Always.
However, unless the image tag changes, doing a kubectl apply or kubectl replace will not restart the Pods and hence will not trigger pulling the latest image. Tagging each build means a complicated script to always remove old tagged images (unless someone has a trick here).
Doing a kubectl rolling-update ... --image ... is possible only if there is a single container per Pod.
What works, is eventually clean, and always gets the latest image is deleting the namespace and re-creating all pods/rc/services...
How can I ask Kubernetes to use my new images nicely even if there is more than one container per Pod?
If a tag is not specified in the manifest file, Kubernetes will automatically use the image tagged latest.
After the kubectl apply command you can check whether the deployment rolled out successfully and, if it did not, the kubectl rollout undo command can roll back to the previous revision. You can also use the sleep command to wait some time before checking.
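A minimal sketch of that flow, assuming a Deployment named my-app and a manifest file deployment.yaml (both names are hypothetical); kubectl rollout status waits for the rollout to finish, so an explicit sleep is often unnecessary:
# kubectl apply -f deployment.yaml
# kubectl rollout status deployment/my-app
# kubectl rollout undo deployment/my-app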
Dirty workaround (not tested): you can scale the rc down to 0 and then back up to its original size => it is effectively a "pod" restart. Or you can use two rcs in the same service, one active (non-zero size) and one passive (size 0), and scale them up and down.
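A sketch of the scale-down/scale-up variant, assuming an rc named my-rc that normally runs 3 replicas (both values are hypothetical):
# kubectl scale rc my-rc --replicas=0
# kubectl scale rc my-rc --replicas=3
The pods are recreated on the way back up, so with imagePullPolicy: Always they will pull the current image for their tag.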
Tagging it means a complicated script to always remove old tagged images (unless someone has a trick here).
Tagging is a nice, explicit process, and Kubernetes garbage collection will delete your old images automatically. Hopefully you know that if you are using only the :latest tag, a rollback can be impossible. I recommend setting up a tag system, for example :latest_stable, :latest_dev, :2nd_latest_stable, ...
These tags are only "pointers" and your CI will move them. Then you can define and script a smart registry tag-deletion policy, e.g. all tags older than :2nd_latest_stable can be deleted safely. You know your app, so you can set up a policy that fits your needs and your release process.
Tag example - starting point is builds 1/2/3 (build id, git id, build time, ...) - build 1 is :production and :canary, and all tags are pushed:
# docker images
REPOSITORY     TAG          IMAGE ID       CREATED          VIRTUAL SIZE
image          3            a21348af4283   37 seconds ago   125.1 MB
image          2            7dda7c549d2d   50 seconds ago   125.1 MB
image          production   e53856d910b8   58 seconds ago   125.1 MB
image          canary       e53856d910b8   58 seconds ago   125.1 MB
image          1            e53856d910b8   58 seconds ago   125.1 MB
Build 2 is going to become :canary:
# docker tag -f image:2 image:canary
# docker push image:canary
# docker images
REPOSITORY     TAG          IMAGE ID       CREATED          VIRTUAL SIZE
image          3            a21348af4283   6 minutes ago    125.1 MB
image          canary       7dda7c549d2d   6 minutes ago    125.1 MB
image          2            7dda7c549d2d   6 minutes ago    125.1 MB
image          production   e53856d910b8   7 minutes ago    125.1 MB
image          1            e53856d910b8   7 minutes ago    125.1 MB
Tests OK, build 2 is stable - it will become :production:
# docker tag -f image:2 image:production
# docker push image:production
# docker images
REPOSITORY     TAG          IMAGE ID       CREATED          VIRTUAL SIZE
image          3            a21348af4283   9 minutes ago    125.1 MB
image          2            7dda7c549d2d   9 minutes ago    125.1 MB
image          canary       7dda7c549d2d   9 minutes ago    125.1 MB
image          production   7dda7c549d2d   9 minutes ago    125.1 MB
image          1            e53856d910b8   10 minutes ago   125.1 MB
Homework: it turns out build 2 is not stable after all -> set :production back to build 1 (rollback) and :canary to build 3 (to test the fix in build 3). If you were using only :latest, this rollback would be impossible.
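A sketch of that rollback, using the same hypothetical repository and tags as above:
# docker tag -f image:1 image:production
# docker push image:production
# docker tag -f image:3 image:canary
# docker push image:canary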
kubectl rolling-update / rollback will use explicit :id tags, and your cleaning script can apply the policy: all tags older than :production can be deleted.
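For example, assuming an rc named frontend (hypothetical, and still one container per pod as noted in the question), both the update and a later rollback use explicit build tags:
# kubectl rolling-update frontend --image=image:2
# kubectl rolling-update frontend --image=image:1
And a local-only sketch of the cleanup policy (illustrative; deleting tags from a remote registry depends on the registry's own API and is not shown). It untags every numeric build tag of the hypothetical image repository that is older than the build currently pointed to by :production:
PROD_ID=$(docker images -q image:production)
PROD_TAG=$(docker images --format '{{.ID}} {{.Tag}}' image | awk -v id="$PROD_ID" '$1 == id && $2 ~ /^[0-9]+$/ {print $2}' | sort -n | head -n1)
for TAG in $(docker images --format '{{.Tag}}' image | grep -E '^[0-9]+$'); do
  [ "$TAG" -lt "$PROD_TAG" ] && docker rmi "image:$TAG"
done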
Unfortunately I don't have experience with Kubernetes deployment.