statefulset unable to rollback if the pods are not in running state

I have deployed mongo stateful pods with a rolling update strategy, using the template below. The deployment is successful and the pods reach the Running state.

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  podManagementPolicy: Parallel
  replicas: 3
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo:4.0
          imagePullPolicy: Always
          command:
            - mongod
            - "--replSet"
            - rs0
            - "--bind_ip"
            - 0.0.0.0
            - "--smallfiles"
            - "--noprealloc"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
        - name: mongo-sidecar
          image: cvallance/mongo-k8s-sidecar
          env:
            - name: MONGO_SIDECAR_POD_LABELS
              value: "role=mongo,environment=test"
  updateStrategy:
    type: RollingUpdate
  # volumeClaimTemplates providing mongo-persistent-storage are not shown here
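
For reference, a template like this would be applied with something like the following (the file name is only an assumption):

oc apply -f mongo-statefulset.yaml -n mongo-replica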

I am trying to update the mongo container image using the following set command:

oc set image statefulset/mongo mongo=mongo:4.2 -n mongo-replica
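
The rollout progress can then be watched with, for example:

oc rollout status statefulset/mongo -n mongo-replica
oc get pods -n mongo-replica -w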

After the image update, the new pods go into a CrashLoopBackOff state. I expected the StatefulSet to automatically roll back to the previous running version, but the pods stay stuck in CrashLoopBackOff. I want the pods to be rolled back to the previous running version. Any suggestions here would be appreciated.

asked Nov 16 '22 by Bhavani Prasad


1 Answer

StatefulSets unfortunately don't have an automatic rollback, but you can safeguard your service using probes: with well-configured liveness and readiness probes, a changed version only takes the place of the running version once the probes report an OK status.
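
As a minimal sketch of what such probes could look like on the mongo container (the exec command and the timings here are assumptions, not taken from the question's template):

livenessProbe:
  exec:
    command:
      - mongo
      - --eval
      - "db.adminCommand('ping')"
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:
  exec:
    command:
      - mongo
      - --eval
      - "db.adminCommand('ping')"
  initialDelaySeconds: 5
  periodSeconds: 10

With the RollingUpdate strategy the controller waits for each updated pod to become Running and Ready before touching the next one, so a failing readiness probe halts the rollout at the first broken replica.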

That way only one of your 3 replicas will crash on a failure, and you can work on solving the problem, or manually roll back your changes, without losing the delivery of your service.

More detail on this is in the Kubernetes documentation: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#forced-rollback
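
As a sketch of that manual forced rollback (the stuck pod's ordinal is an assumption; check which pod is actually broken first):

# revert the template to the last known-good image
oc set image statefulset/mongo mongo=mongo:4.0 -n mongo-replica

# a pod stuck in CrashLoopBackOff is not replaced automatically,
# so delete it; it is recreated from the reverted template
oc delete pod mongo-2 -n mongo-replica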

For a good explanation of the probes, see: https://www.openshift.com/blog/liveness-and-readiness-probes

answered Dec 06 '22 by TomIharaRznde