I was playing around with this: http://kubernetes.io/docs/user-guide/deployments/ in my infrastructure. I have a few deployments where I need multiple replicas, but I have a couple where I only want one replica inside the deployment - however, having an easy way to change the image version is great and required.
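For context, one way to bump the image is a one-liner like the following (just a sketch, assuming the deployment and container names from the manifest below; the target tag is only an example):

kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1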
So I tried to see what would happen if I ran a broken update on a deployment with only 1 replica. If we start with the following (from the documentation above):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
If we then run kubectl create -f nginx-deployment.yaml, we see 3 healthy running replicas.
If we then change replicas: 3 to replicas: 1 in the file above and run the apply command kubectl apply -f nginx-deployment.yaml, we see 1 healthy replica.
Now, if we change image: nginx:1.7.9 to something like image: nginx:1.7.9broken and run kubectl apply -f nginx-deployment.yaml, we see something like this:
$ kubectl get rs
NAME                          DESIRED   CURRENT   AGE
nginx-deployment-2035384211   0         0         11m   <- this is the first one we created with 3 replicas
nginx-deployment-3257237551   1         1         8m    <- this is the broken one we made with 1 replica and a bad image name
nginx-deployment-3412426736   0         0         10m   <- this is the 2nd one we created with 1 replica

$ kubectl get pods
NAME                                READY     STATUS             RESTARTS   AGE
nginx-deployment-3257237551-od22j   0/1       ImagePullBackOff   0          19s
So what seems to have happened here is that the deployment has run, created a new broken pod, and destroyed the old one - something which the documentation linked above tells me should not happen.
My question is: is there some setting I can change so that, even with 1 replica, the deployment will still work as intended, i.e. if the new pod created by the deployment is bad, it will keep the old pod running? Or is there something else I should be doing when updating the images of single-replica deployments?
Note: this all seems to work fine with 2 or more replicas, and I tried setting the maxSurge value to 5 to see if that made a difference, but it did not.
I believe you want to set maxUnavailable (which defaults to 1) to 0. This should prevent Kubernetes from taking down any existing pods before bringing a healthy new one up. maxSurge only specifies how many pods beyond the desired count you are willing to see deployed during a rolling update. Since you only tried to roll out a single updated pod in your third deployment, increasing maxSurge beyond its default value of 1 did not make a difference.
See also the Rolling Update Deployment section in the documentation.
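For reference, a minimal sketch of what that could look like applied to the manifest from the question (only the strategy block is new; everything else is unchanged):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # allow one extra pod above the desired count during the update
      maxUnavailable: 0  # never take an existing pod down before its replacement is ready
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

With maxUnavailable: 0 and maxSurge: 1, the rollout first creates the replacement pod and only removes the old one once the new pod reports ready, so a bad image like nginx:1.7.9broken should leave the existing pod running.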