Assume I currently have two PVCs with the ReadWriteOnce access mode, claimed by Pod #1 and Pod #2 respectively. Both pods are running on Node #1.
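For reference, a ReadWriteOnce claim like the ones described might look like the following sketch (the name, size, and storage class are illustrative, not from the question):

```yaml
# Hypothetical PVC; name and requested size are illustrative.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pod-1
spec:
  accessModes:
    - ReadWriteOnce        # mountable read-write by a single node at a time
  resources:
    requests:
      storage: 10Gi
```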
Next, Pod #2 is updated with a newer Docker image. At the same time, however, Pod #3 is created and scheduled to Node #1. Since Node #1 is now full, Kubernetes can only schedule the new Pod #2 to Node #2.
Since AWS EBS and Google Persistent Disk volumes can only be mounted on a single node at a time, would Pod #2 become unable to connect to its previously claimed PVC?
If so, how can I avoid this issue?
Yes, that is a downside of PV/PVC with the current storage offerings of AWS and GCE.
To avoid it, you would have to use a storage infrastructure that does not have this limitation, such as Ceph, Gluster, or ScaleIO (among others). These solutions abstract the storage away from individual disks and provide a storage layer that is no longer node-dependent.
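As a sketch of what that looks like, a PersistentVolume backed by such a networked filesystem (GlusterFS here; the endpoints name and path are made up) can advertise ReadWriteMany, so the claim is not tied to a single node:

```yaml
# Hypothetical GlusterFS-backed PV; the endpoints object and path are illustrative.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany        # can be mounted on multiple nodes simultaneously
  glusterfs:
    endpoints: glusterfs-cluster
    path: shared-volume
    readOnly: false
```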
This shouldn't be a problem. When Pod #2 is scheduled to Node #2, Kubernetes automatically detaches the volume from Node #1 and attaches it to Node #2 so Pod #2 can use it there. Note that this happens only after the old Pod #2 has terminated and released the volume; with ReadWriteOnce, the volume cannot be attached to both nodes at once.
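If you want to verify this behavior yourself, you can watch the scheduling and the attach/detach events (the pod name below is illustrative):

```shell
# Watch which node the replacement Pod #2 lands on
kubectl get pods -o wide --watch

# Inspect the events for the pod, including volume attachment
kubectl describe pod pod-2

# Cluster-wide events show SuccessfulAttachVolume / SuccessfulDetachVolume
kubectl get events --sort-by=.metadata.creationTimestamp
```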