I am using a PVC with the ReadWriteOnce access mode, consumed by a logstash Deployment that runs a stateful application. Each pod in the Deployment tries to bind to the same PersistentVolumeClaim. With replicas > 1 this fails: since the access mode is ReadWriteOnce, only the first pod can bind successfully. How do I specify that each pod should be bound to a separate PV?
I don't want to define 3 separate YAMLs, one per logstash replica/instance.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash
spec:
  replicas: 3
  selector:
    matchLabels:
      app: logstash
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
      - image: "logstash-image"
        imagePullPolicy: IfNotPresent
        name: logstash
        volumeMounts:
        - mountPath: /data
          name: logstash-data
      restartPolicy: Always
      volumes:
      - name: logstash-data
        persistentVolumeClaim:
          claimName: logstash-vol
I need a way to mount a different PV into each pod replica.
The mapping between a PersistentVolume and a PersistentVolumeClaim is always one-to-one. Even when you delete the claim, the PersistentVolume remains if persistentVolumeReclaimPolicy is set to Retain, and it will not be reused by any other claim.
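For illustration, a minimal PV definition with the Retain reclaim policy could look like the sketch below; the name, capacity, and hostPath backing store are placeholders, not values from your cluster:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: logstash-pv-0                    # placeholder name
spec:
  capacity:
    storage: 5G
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain  # PV is kept after its claim is deleted
  hostPath:
    path: /mnt/data/logstash-0           # placeholder backing store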
With Deployments you cannot do this properly. You should use a StatefulSet with a PVC template to achieve your goal. The relevant part of your StatefulSet YAML could look like this:
...
volumeClaimTemplates:
- metadata:
    name: pv-data
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 5G
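Putting this together with the Deployment above, a complete StatefulSet could look like the following sketch; the serviceName (StatefulSets require a headless Service, assumed here to be named "logstash") and the storage size are assumptions to adjust for your cluster:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: logstash
spec:
  serviceName: logstash          # assumes a headless Service named "logstash" exists
  replicas: 3
  selector:
    matchLabels:
      app: logstash
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
      - name: logstash
        image: "logstash-image"
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /data
          name: pv-data          # must match the volumeClaimTemplate name
  volumeClaimTemplates:
  - metadata:
      name: pv-data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 5G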
Assuming you have 3 replicas, you will see the pods created one by one, sequentially, and a PVC is requested during each pod's creation.
Each PVC is named
<volumeClaimTemplate name>-<StatefulSet name>-<ordinal>
(the pod name is the StatefulSet name plus the ordinal), so with the template above you will end up with this list of newly created PVCs:
pv-data-logstash-0
pv-data-logstash-1
pv-data-logstash-2
A StatefulSet makes the names (and, in fact, the whole identity) of your pods stable, numbering them from 0 up to the replica count. That is why every pod matches its own PVC and, in turn, its own PV.
Note: this relies on dynamic provisioning. You should be familiar with configuring the Kubernetes control plane components (such as the controller-manager) to support it: you will need a configured persistent storage provisioner and an understanding of the retain policy for your data, but that is a separate question...
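As a sketch, dynamic provisioning is typically enabled by a StorageClass; the provisioner below (ebs.csi.aws.com, the AWS EBS CSI driver) is just an example and depends entirely on your environment:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: logstash-storage            # placeholder name
provisioner: ebs.csi.aws.com        # example provisioner; use the one for your cluster
reclaimPolicy: Retain               # keep the PV (and data) after the PVC is deleted
volumeBindingMode: WaitForFirstConsumer

You would then reference it from the volumeClaimTemplate via spec.storageClassName.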