
Kubernetes deployment not doing rolling update

I have the following Deployment in Kubernetes:

 apiVersion: extensions/v1beta1
 kind: Deployment
 metadata:
   labels:
     run: hello-node
   name: hello-node
   namespace: default
 spec:
   replicas: 2
   selector:
     matchLabels:
       run: hello-node
   strategy:
     rollingUpdate:
       maxSurge: 2
       maxUnavailable: 0
     type: RollingUpdate
   template:
     metadata:
       creationTimestamp: null
       labels:
         run: hello-node
     spec:
       containers:
       - image: <image>:<tag>
         imagePullPolicy: Always
         name: hello-node
         livenessProbe:
           httpGet:
             path: /rest/hello
             port: 8081
           initialDelaySeconds: 15
           timeoutSeconds: 1
         ports:
         - containerPort: 8081
           protocol: TCP
         resources:
           requests:
             cpu: 400m
         terminationMessagePath: /dev/termination-log
       dnsPolicy: ClusterFirst
       restartPolicy: Always
       securityContext: {}
       terminationGracePeriodSeconds: 30

The issue is that when I update my deployment to, say, a new version of my image, Kubernetes instantly kills both pods running the old image and brings up two new pods with the new image. While the new pods are booting up, I experience an interruption of service.

Because of the rollingUpdate strategy and the livenessProbe, I expect Kubernetes to do the following:

  1. Start one pod with the new image
  2. Wait for the new pod to be healthy based on the livenessProbe
  3. Kill one pod with the old image
  4. Repeat until all pods have been migrated

Am I missing something here?

asked Nov 18 '25 by phoenix7360

1 Answer

What you need is a readinessProbe.

The default state of Liveness before the initial delay is Success, whereas the default state of Readiness before the initial delay is Failure.

If you’d like your container to be killed and restarted if a probe fails, then specify a LivenessProbe and a RestartPolicy of Always or OnFailure.

If you’d like to start sending traffic to a pod only when a probe succeeds, specify a ReadinessProbe.

See container probes for more details.
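
Applied to the Deployment above, a minimal sketch of the fix (assuming the same /rest/hello endpoint on port 8081 is also suitable as a readiness check, which you should verify for your app) is to add a readinessProbe next to the existing livenessProbe in the container spec:

       containers:
       - image: <image>:<tag>
         imagePullPolicy: Always
         name: hello-node
         livenessProbe:           # restarts the container if it stops responding
           httpGet:
             path: /rest/hello
             port: 8081
           initialDelaySeconds: 15
           timeoutSeconds: 1
         readinessProbe:          # gates traffic and rollout progress
           httpGet:
             path: /rest/hello    # assumed: same health endpoint works for readiness
             port: 8081
           initialDelaySeconds: 15
           timeoutSeconds: 1

With maxUnavailable: 0, the Deployment will not scale down an old pod until a new pod reports Ready, which produces the step-by-step rollout described in the question.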

To have the rolling update behavior you described, set maxSurge to 1 (default value). This tells the Deployment to "scale up at most one more replica at a time". See docs of maxSurge for more details.
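
For example, keeping the maxUnavailable: 0 from the manifest above, the strategy block would become:

   strategy:
     rollingUpdate:
       maxSurge: 1        # create at most one extra pod above the replica count
       maxUnavailable: 0  # never remove an old pod before its replacement is Ready
     type: RollingUpdate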

answered Nov 20 '25 by janetkuo

