Error while using local persistent volumes in statefulset pod

I am trying to use the local persistent volumes described in https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/ for my StatefulSet pod. But when the pod tries to claim its volume, I get the following error:

Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  4s (x243 over 20m)  default-scheduler  0/2 nodes are available: 1 node(s) didn't find available persistent volumes to bind, 1 node(s) had taints that the pod didn't tolerate.
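
For reference, the taints mentioned in that message can be listed directly; on a kubeadm-style cluster, the node that "had taints that the pod didn't tolerate" is typically the master, though that is an assumption about the setup:

kubectl describe nodes | grep -i taints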

Following are the storage classes and persistent volumes I have created:

storageclass-kafka-broker.yml

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: kafka-broker
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

storageclass-kafka-zookeeper.yml

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: kafka-zookeeper
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
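
Both classes use volumeBindingMode: WaitForFirstConsumer, which delays binding of a claim until a pod using it is scheduled, so the scheduler can take the PV's node affinity into account. The classes can be verified after creation with:

kubectl get storageclass kafka-broker kafka-zookeeper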

pv-zookeeper.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv-zookeeper
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: kafka-zookeeper
  local:
    path: /D/kubernetes-mount-path
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - my-node

pv-kafka.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 200Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: kafka-broker
  local:
    path: /D/kubernetes-mount-path
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - my-node

Following is the StatefulSet 50pzoo.yml that uses this volume:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: pzoo
  namespace: kafka
spec:
  selector:
    matchLabels:
      app: zookeeper
      storage: persistent
  serviceName: "pzoo"
  replicas: 1
  updateStrategy:
    type: OnDelete
  template:
    metadata:
      labels:
        app: zookeeper
        storage: persistent
    spec:
      terminationGracePeriodSeconds: 10
      initContainers:
      - name: init-config
        image: solsson/kafka-initutils@sha256:18bf01c2c756b550103a99b3c14f741acccea106072cd37155c6d24be4edd6e2
        command: ['/bin/bash', '/etc/kafka-configmap/init.sh']
        volumeMounts:
        - name: configmap
          mountPath: /etc/kafka-configmap
        - name: config
          mountPath: /etc/kafka
        - name: data
          mountPath: /var/lib/zookeeper/data
      containers:
      - name: zookeeper
        image: solsson/kafka:2.0.0@sha256:8bc5ccb5a63fdfb977c1e207292b72b34370d2c9fe023bdc0f8ce0d8e0da1670
        env:
        - name: KAFKA_LOG4J_OPTS
          value: -Dlog4j.configuration=file:/etc/kafka/log4j.properties
        command:
        - ./bin/zookeeper-server-start.sh
        - /etc/kafka/zookeeper.properties
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: peer
        - containerPort: 3888
          name: leader-election
        resources:
          requests:
            cpu: 10m
            memory: 100Mi
        readinessProbe:
          exec:
            command:
            - /bin/sh
            - -c
            - '[ "imok" = "$(echo ruok | nc -w 1 -q 1 127.0.0.1 2181)" ]'
        volumeMounts:
        - name: config
          mountPath: /etc/kafka
        - name: data
          mountPath: /var/lib/zookeeper/data
      volumes:
      - name: configmap
        configMap:
          name: zookeeper-config
      - name: config
        emptyDir: {}
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: kafka-zookeeper
      resources:
        requests:
          storage: 1Gi
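
Once the StatefulSet is applied, the claim it generates (named data-pzoo-0 after the volumeClaimTemplate and the pod) can be inspected with:

kubectl get pvc data-pzoo-0 --namespace kafka
kubectl describe pvc data-pzoo-0 --namespace kafka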

Following is the output of the kubectl get events command:

[root@quagga kafka-kubernetes-testing-single-node]# kubectl get events --namespace kafka
LAST SEEN   FIRST SEEN   COUNT     NAME                           KIND                    SUBOBJECT   TYPE      REASON                 SOURCE                        MESSAGE
1m          1m           1         pzoo.15517ca82c7a4675          StatefulSet                         Normal    SuccessfulCreate       statefulset-controller        create Claim data-pzoo-0 Pod pzoo-0 in StatefulSet pzoo success
1m          1m           1         pzoo.15517ca82caed9bc          StatefulSet                         Normal    SuccessfulCreate       statefulset-controller        create Pod pzoo-0 in StatefulSet pzoo successful
13s         1m           9         data-pzoo-0.15517ca82c726833   PersistentVolumeClaim               Normal    WaitForFirstConsumer   persistentvolume-controller   waiting for first consumer to be created before binding
9s          1m           22        pzoo-0.15517ca82cb90238        Pod                                 Warning   FailedScheduling       default-scheduler             0/2 nodes are available: 1 node(s) didn't find available persistent volumes to bind, 1 node(s) had taints that the pod didn't tolerate.

The output of kubectl get pv is:

[root@quagga kafka-kubernetes-testing-single-node]# kubectl get pv
NAME                         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM     STORAGECLASS      REASON    AGE
example-local-pv             200Gi      RWO            Retain           Available             kafka-broker                4m
example-local-pv-zookeeper   2Gi        RWO            Retain           Available             kafka-zookeeper             4m
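
The PVs are Available but never get bound. Their node affinity can be compared against the actual node names with, for example:

kubectl describe pv example-local-pv-zookeeper
kubectl get nodes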
asked Sep 05 '18 by rishi007bansod

2 Answers

It was a silly mistake: I had left the placeholder my-node as the node name in my PV files. Changing it to the correct node name (as sketched below) solved my issue.
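
A minimal sketch of the fix (worker-1 is a stand-in here; use a name actually reported by kubectl get nodes):

kubectl get nodes

  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker-1   # the real node name, not the placeholder my-node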

answered Sep 21 '22 by rishi007bansod

Thanks for sharing! I made the same mistake. I guess the k8s docs could state this a bit more clearly (though it's quite obvious), so it's a copy-paste pitfall.

And to be a bit more concrete: if you have a cluster with 3 nodes, you need to create three differently named PVs and provide the correct node name in place of 'my-node' (see kubectl get nodes). The only reference between your volumeClaimTemplate and your PV is the name of the storage class.

I used something like "local-pv-node-X" as the PV name, so when I look at the PV section in the Kubernetes dashboard I can see directly which node each volume lives on; a sketch follows below.
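
For illustration, one of the three PVs might look like this (the node name and host path are assumptions; substitute your own):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-node-1              # one PV per node, named after it
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: kafka-zookeeper  # the only link to the volumeClaimTemplate
  local:
    path: /mnt/disks/zookeeper       # assumed path; must exist on that node
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node-1                   # a real name from kubectl get nodes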

You might update your listing with a hint about 'my-node' ;-)

answered Sep 24 '22 by StefanSchubert