I am trying to create a Kubernetes pod with a single container which has two external volumes mounted on it. My .yml pod file is:
apiVersion: v1
kind: Pod
metadata:
  name: my-project
  labels:
    name: my-project
spec:
  containers:
    - image: my-username/my-project
      name: my-project
      ports:
        - containerPort: 80
          name: nginx-http
        - containerPort: 443
          name: nginx-ssl-https
      imagePullPolicy: Always
      volumeMounts:
        - mountPath: /home/projects/my-project/media/upload
          name: pd-data
        - mountPath: /home/projects/my-project/backups
          name: pd2-data
  imagePullSecrets:
    - name: vpregistrykey
  volumes:
    - name: pd-data
      persistentVolumeClaim:
        claimName: pd-claim
    - name: pd2-data
      persistentVolumeClaim:
        claimName: pd2-claim
I am using Persistent Volumes and Persistent Volume Claims, as follows:
PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pd-disk
  labels:
    name: pd-disk
spec:
  capacity:
    storage: 250Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: "pd-disk"
    fsType: "ext4"
PVC
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pd-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 250Gi
I initially created my disks using the command:
$ gcloud compute disks create --size 250GB pd-disk
The second disk, PV, and PVC are defined the same way. Everything seems to work OK when I create the pod; no errors are thrown. Now comes the weird part: one of the paths is mounted correctly (and is therefore persistent), but the other one is erased every time I restart the pod...
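For reference, here is roughly what the second pair looks like; I am assuming the second disk is named pd2-disk, mirroring the first one, and pd2-claim is the claim name referenced in the pod spec above:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pd2-disk
  labels:
    name: pd2-disk
spec:
  capacity:
    storage: 250Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: "pd2-disk"   # assumed disk name, created with the same gcloud command as the first
    fsType: "ext4"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pd2-claim        # matches the claimName referenced in the pod spec
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 250Gi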
I have tried re-creating everything from scratch, but nothing changes. Also, from the pod description, both volumes seem to be correctly mounted:
$ kubectl describe pod my-project
Name: my-project
...
Volumes:
  pd-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pd-claim
    ReadOnly:   false
  pd2-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pd2-claim
    ReadOnly:   false
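To double-check the bindings on the PV side, these standard kubectl commands show which claim each PV is bound to (commands only; output omitted):

$ kubectl get pv
$ kubectl get pvc
# Inspect one PV in full to see the GCE disk (spec.gcePersistentDisk.pdName)
# and the claim (spec.claimRef) it is actually tied to.
$ kubectl get pv pd-disk -o yaml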
Any help is appreciated. Thanks.
A Volume in Kubernetes represents a directory of data that is accessible to the containers in a Pod. Data written to a container's own filesystem is lost when the container crashes or restarts, but data written to a volume survives, so the replacement container picks it up in the state it was in before the crash.
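As a minimal, generic illustration of that behaviour (not the pod from the question), an emptyDir volume outlives individual container restarts, although it is removed when the pod itself is deleted:

apiVersion: v1
kind: Pod
metadata:
  name: volume-demo            # hypothetical name
spec:
  containers:
    - name: app
      image: busybox
      # Appends a timestamp on every (re)start, so restarts are visible in the file.
      command: ["sh", "-c", "date >> /cache/restarts.log; sleep 3600"]
      volumeMounts:
        - mountPath: /cache
          name: cache-volume
  volumes:
    - name: cache-volume
      emptyDir: {}             # survives container crashes/restarts; deleted with the pod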
Currently, a PVC is bound to a single PV based only on two attributes: the requested capacity and the access mode. Once a PV is bound to a PVC, that PV is tied to the PVC's namespace and cannot be bound by another PVC. PVs and PVCs map one-to-one; however, multiple pods in the same namespace can use the same PVC.
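Because binding matches only on capacity and access mode, two identical claims can end up bound to either of two identical PVs. If a claim must land on a particular disk, one option is to pin it to a PV by name via spec.volumeName; a minimal sketch, reusing the names from the question:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pd2-claim
spec:
  # Pre-bind this claim to a specific PV so the binder does not
  # pick the other 250Gi ReadWriteOnce volume instead.
  volumeName: pd2-disk
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 250Gi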
The Kubernetes documentation states:
Volumes can not mount onto other volumes or have hard links to other volumes
I had the same issue, and in my case the problem was that both volume mounts had overlapping mountPaths, i.e. both started with /var/.
After fixing that, they mounted without issues.
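For illustration only (hypothetical paths, not the ones from my deployment), this is the kind of layout the answer above warns about, and a fix:

# Problematic: the second mountPath is nested under the first,
# so one mount can shadow the other depending on mount ordering.
volumeMounts:
  - mountPath: /var/data
    name: first-volume
  - mountPath: /var/data/backups
    name: second-volume

# Fixed: give each volume its own non-overlapping path.
volumeMounts:
  - mountPath: /var/data
    name: first-volume
  - mountPath: /var/backups
    name: second-volume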