
Kubernetes Pod Warning: 1 node(s) had volume node affinity conflict

I am trying to set up a Kubernetes cluster. I have a Persistent Volume, a Persistent Volume Claim, and a Storage Class all set up and running, but when I want to create a pod from a deployment, the pod is created but hangs in the Pending state. kubectl describe shows only this warning: "1 node(s) had volume node affinity conflict." Can somebody tell me what I am missing in my volume configuration?

apiVersion: v1
kind: PersistentVolume
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: mariadb-pv0
  name: mariadb-pv0
spec:
  volumeMode: Filesystem
  storageClassName: local-storage
  local:
    path: "/home/gtcontainer/applications/data/db/mariadb"
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 2Gi
  claimRef:
    namespace: default
    name: mariadb-claim0
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/cvl-gtv-42.corp.globaltelemetrics.eu
          operator: In
          values:
          - master
status: {}
Asked Aug 21 '18 by Krzysztof

People also ask

What is node affinity in Kubernetes?

Node affinity is a set of rules used by the scheduler to determine where a pod can be placed. The rules are defined using custom labels on nodes and label selectors specified in pods. Node affinity allows a pod to specify an affinity (or anti-affinity) towards a group of nodes it can be placed on.
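As a minimal sketch of the concept above: the pod below requires nodes carrying a hypothetical disktype=ssd label (the label key and values here are illustrative, not from the question).

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: affinity-demo
spec:
  affinity:
    nodeAffinity:
      # Hard requirement: only schedule on nodes labeled disktype=ssd
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
  containers:
  - name: app
    image: nginx
```

If no node carries a matching label, the pod stays Pending, much like the volume-driven affinity conflict in this question.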

How do you remove PVCs?

You can delete PVCs using the kubectl delete command or from the F5 Console. To delete using kubectl, specify the PVC either by file or by name.
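For example, both forms of kubectl delete look like this (reusing the mariadb-claim0 name from the question; the file name is an assumption):

```shell
# Delete the PVC by name, in the namespace it lives in
kubectl delete pvc mariadb-claim0 -n default

# Or delete it via the manifest file that created it (hypothetical file name)
kubectl delete -f mariadb-pvc.yaml
```

Note that deleting a PVC may also delete the bound PV's data, depending on the StorageClass reclaim policy.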


1 Answer

The error "volume node affinity conflict" happens when the Persistent Volume backing a pod's claim is in a different availability zone from the node the pod was scheduled to, so the pod cannot attach the volume and remains Pending. To check this, inspect the details of the Persistent Volumes. First, get your PVCs:

$ kubectl get pvc -n <namespace> 

Then get the details of the Persistent Volumes (not Volume Claims):

$  kubectl get pv 

Find the PVs that correspond to your PVCs and describe them:

$  kubectl describe pv <pv1> <pv2> 

Check the Source.VolumeID for each PV; most likely they are in different availability zones, which is why your pod gets the affinity error. To fix this, create a StorageClass pinned to a single zone and use that StorageClass in your PVC.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: region1storageclass
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  encrypted: "true" # if encryption required
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: failure-domain.beta.kubernetes.io/zone
    values:
    - eu-west-2b # this is the availability zone, will depend on your cloud provider
    # multi-az can be added, but that defeats the purpose in our scenario
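A PVC that binds through this single-zone class could then look like the sketch below (the name and sizes reuse those from the question; this is an illustration, not the answerer's exact manifest):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mariadb-claim0
  namespace: default
spec:
  # Bind through the single-zone class defined above
  storageClassName: region1storageclass
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
```

With volumeBindingMode: WaitForFirstConsumer, the PV is not provisioned until a pod using this claim is scheduled, so the volume ends up in the same zone as the pod's node.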
Answered Oct 06 '22 by Sownak Roy