I created a PersistentVolume sourced from a Google Compute Engine persistent disk that I had already formatted and provisioned with data. Kubernetes reports the PersistentVolume as Available.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: models-1-0-0
  labels:
    name: models-1-0-0
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadOnlyMany
  gcePersistentDisk:
    pdName: models-1-0-0
    fsType: ext4
    readOnly: true
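For reference, this is how I'm checking its status (the STATUS column reads Available):

kubectl get pv models-1-0-0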
I then created a PersistentVolumeClaim so that I could attach the volume to multiple pods across multiple nodes. However, Kubernetes reports the claim as Pending indefinitely.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: models-1-0-0-claim
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 200Gi
  selector:
    matchLabels:
      name: models-1-0-0
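To dig into why it stays Pending, I've been checking the events on the claim (the exact wording varies by Kubernetes version):

kubectl describe pvc models-1-0-0-claim
# The Events section at the bottom explains why no
# PersistentVolume currently matches the claim.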
Any insights? I feel there may be something wrong with the selector...
Is it even possible to preconfigure a persistent disk with data and have pods across multiple nodes all be able to read from it?
I quickly realized that a PersistentVolumeClaim defaults its storageClassName field to standard when none is specified (on GKE the DefaultStorageClass admission plugin fills it in with the cluster's default class). A PersistentVolume, however, gets no such default, so the claim ends up requiring class standard while the volume has no class at all, and the two can never bind no matter what the selector matches.
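One way out, going by the documented class-matching rules (a sketch I have not tested here), is to pin the claim to the empty class instead, so it skips dynamic provisioning and binds only to pre-provisioned volumes with no class set:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: models-1-0-0-claim
spec:
  storageClassName: ""  # empty string: match only PVs with no class; never provision dynamically
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 200Gi
  selector:
    matchLabels:
      name: models-1-0-0

What I did instead was give the PersistentVolume the standard class so both sides match.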
The following worked for me:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: models-1-0-0
  labels:
    name: models-1-0-0
spec:
  capacity:
    storage: 200Gi
  storageClassName: standard
  accessModes:
    - ReadOnlyMany
  gcePersistentDisk:
    pdName: models-1-0-0
    fsType: ext4
    readOnly: true
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: models-1-0-0-claim
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 200Gi
  selector:
    matchLabels:
      name: models-1-0-0
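And to answer the question above: yes, a disk pre-provisioned with data can be read by pods on multiple nodes, as long as every consumer mounts it read-only. A minimal sketch of a Deployment consuming the claim (the name and image are just placeholders):

kind: Deployment
apiVersion: apps/v1
metadata:
  name: models-reader
spec:
  replicas: 3  # replicas can be scheduled onto different nodes
  selector:
    matchLabels:
      app: models-reader
  template:
    metadata:
      labels:
        app: models-reader
    spec:
      containers:
        - name: reader
          image: busybox  # placeholder image
          command: ["sleep", "infinity"]
          volumeMounts:
            - name: models
              mountPath: /models
              readOnly: true
      volumes:
        - name: models
          persistentVolumeClaim:
            claimName: models-1-0-0-claim
            readOnly: true  # attach the GCE disk read-only on every node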