I have multiple volumes and one claim. How can I tell the claim which volume to bind to?
How does a PersistentVolumeClaim know which volume to bind to? Can I control this using some other parameter or metadata?
I have the following PersistentVolumeClaim and PersistentVolume:
{
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {
        "name": "default-drive-claim"
    },
    "spec": {
        "accessModes": [
            "ReadWriteOnce"
        ],
        "resources": {
            "requests": {
                "storage": "10Gi"
            }
        }
    }
}
{
    "apiVersion": "v1",
    "kind": "PersistentVolume",
    "metadata": {
        "name": "default-drive-disk",
        "labels": {
            "name": "default-drive-disk"
        }
    },
    "spec": {
        "capacity": {
            "storage": "10Gi"
        },
        "accessModes": [
            "ReadWriteOnce"
        ],
        "gcePersistentDisk": {
            "pdName": "a1-drive",
            "fsType": "ext4"
        }
    }
}
If I create the claim and the volume using:
kubectl create -f pvc.json -f pv.json
I get the following listing of the volumes and claims:
NAME                 LABELS                     CAPACITY   ACCESSMODES   STATUS   CLAIM                         REASON   AGE
default-drive-disk   name=default-drive-disk    10Gi       RWO           Bound    default/default-drive-claim            2s

NAME                  LABELS   STATUS   VOLUME               CAPACITY   ACCESSMODES   AGE
default-drive-claim   <none>   Bound    default-drive-disk   10Gi       RWO           2s
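That listing comes from something like:
kubectl get pv
kubectl get pvc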
How does the claim know to which volume to bind?
Optional: bind a PVC to a specific PV. A PVC that does not specify a PV name or selector will match any available PV. To bind a PVC to a specific PV as a cluster administrator, use pvc.spec.volumeName.
If you know exactly which PersistentVolume you want your PersistentVolumeClaim to bind to, you can specify that PV in your PVC using the volumeName field. This skips the normal matching and binding process: the PVC will only bind to a PV whose name matches the value of volumeName.
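For example, the claim from the question could be pre-bound to the default-drive-disk volume roughly like this (a sketch; only the volumeName field is added compared to the original claim):
{
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {
        "name": "default-drive-claim"
    },
    "spec": {
        "volumeName": "default-drive-disk",
        "accessModes": [
            "ReadWriteOnce"
        ],
        "resources": {
            "requests": {
                "storage": "10Gi"
            }
        }
    }
}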
Once a PV is bound to a PVC, that PV is essentially tied to the PVC's project and cannot be bound by another PVC. There is a one-to-one mapping of PVs to PVCs. However, multiple pods in the same project can use the same PVC.
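To illustrate that last point, any number of pods can mount the bound claim by name through a persistentVolumeClaim volume source. A minimal pod sketch (the pod name, container image, and mount path here are made up for illustration):
{
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "default-drive-pod"
    },
    "spec": {
        "containers": [
            {
                "name": "app",
                "image": "nginx",
                "volumeMounts": [
                    {
                        "name": "default-drive",
                        "mountPath": "/data"
                    }
                ]
            }
        ],
        "volumes": [
            {
                "name": "default-drive",
                "persistentVolumeClaim": {
                    "claimName": "default-drive-claim"
                }
            }
        ]
    }
}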
The current implementation does not allow your PersistentVolumeClaim to target specific PersistentVolumes. Claims bind to volumes based on their capabilities (access modes) and capacity.
In the works is the next iteration of PersistentVolumes, which includes a PersistentVolumeSelector on the claim. This would work exactly as a NodeSelector on a Pod does: the volume would have to match the claim's label selector in order to bind. This is the targeting you are looking for.
Please see https://github.com/kubernetes/kubernetes/pull/17056 for the proposal containing PersistentVolumeSelector.
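For reference, a label-selector-based claim along the lines of that proposal could look like the sketch below, matching the name=default-drive-disk label already set on the PV in the question (this reflects the proposed selector syntax and is not guaranteed to work on the Kubernetes version in the question):
{
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {
        "name": "default-drive-claim"
    },
    "spec": {
        "selector": {
            "matchLabels": {
                "name": "default-drive-disk"
            }
        },
        "accessModes": [
            "ReadWriteOnce"
        ],
        "resources": {
            "requests": {
                "storage": "10Gi"
            }
        }
    }
}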