I set up a new single-node k8s cluster, and the node is tainted. But the PersistentVolume cannot be created successfully when I try to deploy a simple PostgreSQL. The details are below.
The StorageClass is copied from the official page: https://kubernetes.io/docs/concepts/storage/storage-classes/#local
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
The StatefulSet is:
kind: StatefulSet
apiVersion: apps/v1beta1
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  ...
  volumeClaimTemplates:
    - metadata:
        name: postgres-data
      spec:
        storageClassName: local-storage
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
About the running StorageClass:
$ kubectl describe storageclasses.storage.k8s.io
Name: local-storage
IsDefaultClass: No
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"local-storage"},"provisioner":"kubernetes.io/no-provisioner","volumeBindingMode":"WaitForFirstConsumer"}
Provisioner: kubernetes.io/no-provisioner
Parameters: <none>
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: WaitForFirstConsumer
Events: <none>
About the running PersistentVolumeClaim:
$ kubectl describe pvc
Name: postgres-data-postgres-0
Namespace: default
StorageClass: local-storage
Status: Pending
Volume:
Labels: app=postgres
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal WaitForFirstConsumer <invalid> (x2 over <invalid>) persistentvolume-controller waiting for first consumer to be created before binding
K8s versions:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-02-28T13:37:52Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:31:33Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Using the WaitForFirstConsumer VolumeBindingMode, the creation and binding of a Persistent Volume for a PVC is delayed until a pod that uses the PVC is scheduled and created. This way, volumes are created to meet the scheduling constraints that are enforced by topology requirements.
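For example, with the manifests above, binding only happens once some pod actually mounts the claim. A minimal sketch of such a consumer (the pod name and image are placeholders; the claim name comes from the kubectl describe pvc output above):
apiVersion: v1
kind: Pod
metadata:
  name: pvc-consumer              # hypothetical name
spec:
  containers:
    - name: app
      image: busybox              # any image; only here to mount the volume
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: postgres-data-postgres-0   # the pending PVC shown above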
To create a new default StorageClass, create a manifest that includes the storageclass.kubernetes.io/is-default-class: "true" annotation. For example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
...
A persistent volume is a piece of storage in a cluster that an administrator has provisioned. It is a resource in the cluster, just as a node is a cluster resource. A persistent volume is a volume plug-in that has a lifecycle independent of any individual pod that uses the persistent volume.
volumeClaimTemplates is a list of claims that pods are allowed to reference. The StatefulSet controller is responsible for mapping network identities to claims in a way that maintains the identity of a pod. Every claim in this list must have at least one matching (by name) volumeMount in one container in the template.
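In the StatefulSet above (its pod template is elided with ...), that means the template needs a volumeMount whose name matches the claim template. A sketch, assuming the standard PostgreSQL data directory and a hypothetical image tag:
  template:
    spec:
      containers:
        - name: postgres
          image: postgres:11                    # hypothetical image tag
          volumeMounts:
            - name: postgres-data               # must match the volumeClaimTemplates name
              mountPath: /var/lib/postgresql/data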
This issue mainly happens with WaitForFirstConsumer when you define nodeName in the Deployment/Pod specification. Make sure you don't set nodeName to hard-bind the pod to a node; the issue should be resolved once you remove nodeName.
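For instance, a hard binding like the following fragment (node1 is a placeholder) should be removed from the pod spec so the scheduler can perform the volume binding:
spec:
  nodeName: node1   # remove this; it bypasses the scheduler, so volume binding never happens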
Special types such as secret, gitRepo, emptyDir, and hostPath are attached to pods directly and can store data only while such a pod is alive, while cloud volumes, NFS, and PersistentVolume are independent of pods and will store data until the volume itself is deleted.
The cluster admin, instead of creating PersistentVolumes, can deploy a PersistentVolume provisioner and define one or more StorageClass objects to let users choose what type of PersistentVolume they want.
When a cluster user needs to use persistent storage in one of their pods, they first create a PersistentVolumeClaim manifest, specifying the minimum size and the access mode they require.
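For example, a standalone claim of that shape, reusing the values from this question (the metadata name is only illustrative):
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-data            # illustrative name
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce              # the access mode the pod requires
  resources:
    requests:
      storage: 1Gi               # the minimum size requested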
The app is waiting for the Pod, while the Pod is waiting for a PersistentVolume through its PersistentVolumeClaim. However, with a local StorageClass the PersistentVolume has to be prepared by the user beforehand. My previous YAMLs lacked a PersistentVolume like this:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-data
  labels:
    type: local
spec:
  storageClassName: local-storage
  capacity:
    storage: 1Gi
  local:
    path: /data/postgres
  persistentVolumeReclaimPolicy: Retain
  accessModes:
    - ReadWriteOnce
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: app
              operator: In
              values:
                - postgres
The local path /data/postgres must be prepared before use; Kubernetes will not create it automatically.
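On the node, that just means creating the directory up front, for example:
$ sudo mkdir -p /data/postgres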
I just ran into this myself and was completely thrown for a loop until I realized that the StorageClass's VolumeBindingMode was set to WaitForFirstConsumer instead of my intended value of Immediate. This value is immutable, so you will have to:
Get the storage class yaml:
kubectl get storageclasses.storage.k8s.io gp2 -o yaml > gp2.yaml
or you can just copy the example from the docs (make sure the metadata names match). Here is what I have configured:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - debug
volumeBindingMode: Immediate
And delete the old StorageClass before recreating it with the new volumeBindingMode set to Immediate.
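Assuming the edited manifest was saved as gp2.yaml in step 1, that is:
$ kubectl delete storageclass gp2
$ kubectl apply -f gp2.yaml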
Note: The EKS cluster may need permissions to create cloud resources like EBS or EFS. Assuming EBS, you should be good with arn:aws:iam::aws:policy/AmazonEKSClusterPolicy.
After doing this you should have no problem creating and using dynamically provisioned PVs.
For me the problem was mismatched accessModes fields in the PV and PVC. The PVC was requesting RWX/ReadWriteMany while the PV was offering RWO/ReadWriteOnce.
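The default kubectl get columns include the access modes, so the mismatch is easy to spot side by side:
$ kubectl get pv,pvc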
The accepted answer didn't work for me. I think it's because the app key won't be set before the StatefulSet's Pods are deployed, which prevents the PersistentVolumeClaim from matching the nodeSelector (and prevents the Pods from starting, with the error didn't find available persistent volumes to bind). To fix this deadlock, I defined one PersistentVolume per node (this may not be ideal, but it worked):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-data-node1
  labels:
    type: local
spec:
  […]
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node1
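The kubernetes.io/hostname label is set automatically on every node, so nothing has to be labeled by hand; to list the values to put in place of node1:
$ kubectl get nodes -L kubernetes.io/hostname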