This may be a dumb question but I haven't found much online and want to clarify this.
Given two deployments A and B, both with different container images:
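Roughly this kind of setup (a sketch from memory; the names, images, and mount paths are placeholders, not my actual manifests):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: a
  template:
    metadata:
      labels:
        app: a
    spec:
      containers:
      - name: a
        image: image-a:latest        # placeholder image
        volumeMounts:
        - name: shared
          mountPath: /data
      volumes:
      - name: shared
        persistentVolumeClaim:
          claimName: shared-data     # the same PVC referenced by both deployments
---
# deployment-b would be identical apart from its name, labels, and image
# (e.g. image-b:latest); the key point is that it references the same
# claimName: shared-data in its pod template.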
Can I confirm that the above would actually be possible? I.e. two different pods connected to the same volume via the same PVC, so that they are both reading from the same volume.
Hope that makes sense...
The mapping between a PersistentVolume and a PersistentVolumeClaim is always one to one. Even when you delete the claim, the PersistentVolume remains if persistentVolumeReclaimPolicy is set to Retain, and it will not be reused by any other claim.
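For illustration, a minimal PV sketch with that policy (the name, size, and NFS details are placeholders):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # the PV is kept (Released) after its claim is deleted
  nfs:
    path: /opt/data5
    server: nfs1.rhs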
A persistent volume (PV) is the "physical" volume on the host machine that stores your persistent data. A persistent volume claim (PVC) is a request for the platform to provide a PV for you; PVCs are requests for those storage resources and also act as claim checks to them. You attach PVs to your pods via a PVC.
ReadWriteOnce: the volume can be mounted as read-write by a single node. ReadWriteOnce still allows multiple pods to access the volume when the pods are running on the same node. ReadOnlyMany: the volume can be mounted as read-only by many nodes.
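The access mode is requested on the claim (and declared on the PV). As a rough sketch, with placeholder names and size:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim
spec:
  accessModes:
  - ReadWriteOnce      # or ReadOnlyMany / ReadWriteMany, depending on how the volume should be shared
  resources:
    requests:
      storage: 1Gi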
TL;DR You can share PV and PVC within the same project/namespace for shared volumes (NFS, Gluster, etc.). You can also access your shared volume from multiple projects/namespaces, but it will require project-dedicated PVs and PVCs, as a PV is bound to a single project/namespace and a PVC is project/namespace scoped.
Below I've tried to illustrate the current behavior and how PVs and PVCs are scoped within OpenShift. These are simple examples using NFS as the persistent storage layer.
The accessModes at this point are just labels; they have no real functionality in terms of controlling access to the PV. Below are some examples to show this.
The PV is global in the sense that it can be seen/accessed by any project/namespace; however, once it is bound to a project, it can then only be accessed by containers from that same project/namespace.
The PVC is project/namespace specific (so if you have multiple projects you need a new PV and PVC for each project to connect to the shared NFS volume - you cannot reuse the PV from the first project).
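For reference, the pods in the examples below mount the volume through the claim roughly like this (the image, command, and mount path are assumptions, not the exact pod definitions used):

apiVersion: v1
kind: Pod
metadata:
  name: nfs-bb-pod2-pvc
spec:
  containers:
  - name: bb
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: nfsvol
      mountPath: /mnt/nfs
  volumes:
  - name: nfsvol
    persistentVolumeClaim:
      claimName: nfs-claim     # PVC in the same project/namespace as the pod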
Example 1:
I have 2 distinct pods running in "default" project/namespace, both accessing the same PV and NFS exported share. Both mount and run fine.
[root@k8dev nfs_error]# oc get pv
NAME      LABELS    CAPACITY   ACCESSMODES   STATUS    CLAIM               REASON    AGE
pv-nfs    <none>    1Gi        RWO           Bound     default/nfs-claim             3m
[root@k8dev nfs_error]# oc get pods <--- running from DEFAULT project, no issues connecting to PV
NAME              READY     STATUS    RESTARTS   AGE
nfs-bb-pod2-pvc   1/1       Running   0          11m
nfs-bb-pod3-pvc   1/1       Running   0          10m
Example 2:
I have 2 distinct pods running in the "default" project/namespace and attempt to create another pod using the same PV, but from a new project called testproject, to access the same NFS export. The third pod from the new testproject will not be able to bind to the PV, as it is already bound by the default project.
[root@k8dev nfs_error]# oc get pv
NAME      LABELS    CAPACITY   ACCESSMODES   STATUS    CLAIM               REASON    AGE
pv-nfs    <none>    1Gi        RWO           Bound     default/nfs-claim             3m
[root@k8dev nfs_error]# oc get pods <--- running from DEFAULT project, no issues connecting to PV
NAME              READY     STATUS    RESTARTS   AGE
nfs-bb-pod2-pvc   1/1       Running   0          11m
nfs-bb-pod3-pvc   1/1       Running   0          10m
** Create a new claim against the existing PV from another project (testproject) and the PVC will fail
[root@k8dev nfs_error]# oc get pvc
NAME        LABELS    STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
nfs-claim   <none>    Pending                                      2s
** nfs-claim will never bind to the pv-nfs PV because it cannot see it from its current project scope
Example 3:
I have 2 distinct pods running in the "default" project and then create another PV, PVC, and Pod from testproject. Both projects will be able to access the same NFS exported share, but I need a PV and PVC in each of the projects.
[root@k8dev nfs_error]# oc get pv
NAME      LABELS    CAPACITY   ACCESSMODES   STATUS    CLAIM                    REASON    AGE
pv-nfs    <none>    1Gi        RWX           Bound     default/nfs-claim                  14m
pv-nfs2   <none>    1Gi        RWX           Bound     testproject/nfs-claim2             9m
[root@k8dev nfs_error]# oc get pods --all-namespaces
NAMESPACE     NAME              READY     STATUS    RESTARTS   AGE
default       nfs-bb-pod2-pvc   1/1       Running   0          11m
default       nfs-bb-pod3-pvc   1/1       Running   0          11m
testproject   nfs-bb-pod4-pvc   1/1       Running   0          15s
** Notice: I now have three pods running against the same NFS shared volume across two projects, but I needed two PVs (as each is bound to a single project) and two PVCs, one for each project, both pointing at the NFS export I am trying to access
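The second PV is just another API object pointing at the same NFS export. A sketch of what pv-nfs2 might look like (the export path and server are assumptions reused from Example 4):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs2
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /opt/data5      # same NFS export as pv-nfs
    server: nfs1.rhs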
Example 4:
If I bypass PV and PVC, I can connect to the shared NFS volume from any project by using the nfs volume plugin directly:
volumes:
- name: nfsvol
  nfs:
    path: /opt/data5
    server: nfs1.rhs
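For context, a sketch of where that fragment sits in a full pod definition (the pod name, image, and mount path are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: nfs-direct-pod
spec:
  containers:
  - name: bb
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: nfsvol
      mountPath: /mnt/nfs
  volumes:
  - name: nfsvol
    nfs:
      path: /opt/data5
      server: nfs1.rhs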
Now, volume security is another layer on top of this: using supplementalGroups (for shared storage, i.e. NFS, Gluster, etc.), admins and developers can further manage and control access to the shared NFS system.
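For example, a pod-level securityContext fragment along these lines (the GID value is an assumption; it should match the group that owns the NFS export):

spec:
  securityContext:
    supplementalGroups:
    - 5555      # assumed GID of the group owning the NFS export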
Hope that helps