This was discussed by k8s maintainers in https://github.com/kubernetes/kubernetes/issues/7438#issuecomment-97148195:
"Allowing users to ask for a specific PV breaks the separation between them."

"I don't buy that. We allow users to choose a node. It's not the common case, but it exists for a reason."

How did it end? What is the intended way to have more than one PV and PVC, like the ones in https://github.com/kubernetes/kubernetes/tree/master/examples/nfs?
We use NFS, and PersistentVolume is a handy abstraction because we can keep the server IP and the path there. But a PersistentVolumeClaim grabs any PV of sufficient size, which prevents us from reusing a specific path. We can set volumeName in a PVC spec block (see https://github.com/kubernetes/kubernetes/pull/7529), but it makes no difference.
Once a PV is bound to a PVC, that PV is essentially tied to the PVC's project (namespace) and cannot be claimed by another PVC. There is a one-to-one mapping between PVs and PVCs. However, multiple pods in the same project can use the same PVC.
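For example, any pod in the namespace can mount the bound claim by name, and a second pod may reference the same claim. A minimal sketch — the pod name, image, and claim name `myclaim` are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-server            # illustrative name
spec:
  containers:
    - name: web
      image: nginx            # illustrative image
      volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: myclaim    # the bound PVC; other pods in this namespace may reference it too
```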
A control loop in the master watches for new PVCs, finds a matching PV (if possible), and binds them together. If a PV was dynamically provisioned for a new PVC, the loop will always bind that PV to the PVC.
Create a PVC without a static PV: you can create a PVC based on a StorageClass specification. If you omit the storage class, the default StorageClass is used.
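A minimal sketch of such a claim — with storageClassName omitted, the cluster's default StorageClass (if one is configured) provisions a PV dynamically; the claim name is illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dynamic-claim   # illustrative name
spec:
  # no storageClassName: the cluster's default StorageClass is used
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```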
There is a way to pre-bind PVs to PVCs today; here is an example showing how:
```
$ kubectl create -f pv.yaml
persistentvolume "pv0003" created
```

where pv.yaml contains:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  storageClassName: ""
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: default
    name: myclaim
  nfs:
    path: /tmp
    server: 172.17.0.2
```
and the matching claim:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```
```
$ kubectl get pvc
NAME      STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
myclaim   Bound     pv0003    5Gi        RWO           4s

$ ./cluster/kubectl.sh get pv
NAME      CAPACITY   ACCESSMODES   STATUS    CLAIM             REASON    AGE
pv0003    5Gi        RWO           Bound     default/myclaim             57s
```
It can be done using the volumeName field. For example:

```yaml
apiVersion: "v1"
kind: "PersistentVolumeClaim"
metadata:
  name: "claimapp80"
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "10Gi"
  volumeName: "app080"
```

will claim the specific PV app080.
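Once bound, the claim is consumed from a pod like any other PVC. A sketch — the pod name, image, and mount path are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app80                  # illustrative name
spec:
  containers:
    - name: app
      image: nginx             # illustrative image
      volumeMounts:
        - mountPath: /data
          name: storage
  volumes:
    - name: storage
      persistentVolumeClaim:
        claimName: claimapp80  # the PVC that pins PV app080 via volumeName
```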