I'm creating a StatefulSet with 2 pods on a single-node cluster, and I would like each pod to be able to mount a base folder on the host, plus a dedicated subfolder beneath it:
Base folder mount: /mnt/disks/ssd
Pod#1 - /mnt/disks/ssd/pod-1
Pod#2 - /mnt/disks/ssd/pod-2
I've only managed to mount the first pod to the base folder; the second pod cannot mount, as the volume is already claimed.
This is the volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - ubuntukuber
This is the usage in the StatefulSet:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: app
  namespace: test-ns
spec:
  serviceName: app
  replicas: 2
  ....
  ....
      volumeMounts:
      - name: data
        mountPath: /var/lib/app/data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "local-storage"
      resources:
        requests:
          storage: 2Gi
So, I would basically like each replica to use its own subfolder. How can this be achieved?
== EDIT ==
I've made some progress: I'm able to mount several replicas onto the same mount, using the following YAMLs (the app I'm trying this on is RabbitMQ, so I'll leave the app name as is):
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-local
  namespace: test-rabbitmq
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 6Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/disks"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: hostpath-pvc
  namespace: test-rabbitmq
spec:
  storageClassName: local
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
  selector:
    matchLabels:
      type: local
---
In the StatefulSet, I'm declaring this volume:
volumes:
- name: rabbitmq-data
  persistentVolumeClaim:
    claimName: hostpath-pvc
And mounting "rabbitmq-data".
Both pods mount the same folder but do not create subfolders. This is not a terrible situation, since RabbitMQ creates its own subfolders by default, but I'll still try to make each pod use its own subfolder; one possible approach is sketched below.
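One way to get per-replica subfolders (a sketch only, not tested here): newer Kubernetes versions support volumeMounts.subPathExpr, which expands an environment variable into the mount's subpath, so each pod writes under a folder named after itself. The image tag and env var name below are illustrative:

# Sketch: per-replica subfolders via subPathExpr (requires a Kubernetes
# version where volumeMounts.subPathExpr is available). POD_NAME comes
# from the Downward API, so pod app-0 writes under <volume>/app-0,
# app-1 under <volume>/app-1, and so on.
spec:
  containers:
  - name: rabbitmq            # container name based on the question's setup
    image: rabbitmq:3         # illustrative image tag
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: rabbitmq-data
      mountPath: /var/lib/rabbitmq
      subPathExpr: $(POD_NAME)   # expands to this pod's own subfolder
  volumes:
  - name: rabbitmq-data
    persistentVolumeClaim:
      claimName: hostpath-pvc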
You can mount that on multiple pods.
The mapping between a PersistentVolume and a PersistentVolumeClaim is always one to one. Even when you delete the claim, the PersistentVolume remains if persistentVolumeReclaimPolicy is set to Retain, and it will not be reused by any other claims.
A Node can have multiple pods, and the Kubernetes control plane automatically handles scheduling the pods across the Nodes in the cluster.
In other words, more than one pod can use the same PVC. For example, you could have as many pods as you want of a WordPress deployment. This deployment can reference a PVC within the pod's spec. When a user uploads a photo, it will be available to the rest of the pods because all the pods are using the shared volume.
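As an illustrative sketch of that pattern (the claim name "shared-uploads" is hypothetical, and with accessMode ReadWriteOnce all pods must land on the same node; multi-node setups need ReadWriteMany-capable storage):

# Sketch: several replicas of one Deployment sharing a single PVC,
# as in the WordPress example above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 3
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - name: wordpress
        image: wordpress:5      # illustrative image tag
        volumeMounts:
        - name: uploads
          mountPath: /var/www/html/wp-content/uploads
      volumes:
      - name: uploads
        persistentVolumeClaim:
          claimName: shared-uploads   # every replica references the same claim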
I am able to achieve the above scenario; what you need is a "claimRef" in your PV to bind your PVC. Please have a look at the following PV and StatefulSet JSON:
PV-0.json
{
  "kind": "PersistentVolume",
  "apiVersion": "v1",
  "metadata": {
    "name": "pv-data-vol-0",
    "labels": {
      "type": "local"
    }
  },
  "spec": {
    "capacity": {
      "storage": "10Gi"
    },
    "accessModes": [
      "ReadWriteOnce"
    ],
    "storageClassName": "local-storage",
    "local": {
      "path": "/prafull/data/pv-0"
    },
    "claimRef": {
      "namespace": "default",
      "name": "data-test-sf-0"
    },
    "nodeAffinity": {
      "required": {
        "nodeSelectorTerms": [
          {
            "matchExpressions": [
              {
                "key": "kubernetes.io/hostname",
                "operator": "In",
                "values": [
                  "ip-10-0-1-46.ec2.internal"
                ]
              }
            ]
          }
        ]
      }
    }
  }
}
PV-1.json
{
  "kind": "PersistentVolume",
  "apiVersion": "v1",
  "metadata": {
    "name": "pv-data-vol-1",
    "labels": {
      "type": "local"
    }
  },
  "spec": {
    "capacity": {
      "storage": "10Gi"
    },
    "accessModes": [
      "ReadWriteOnce"
    ],
    "storageClassName": "local-storage",
    "local": {
      "path": "/prafull/data/pv-1"
    },
    "claimRef": {
      "namespace": "default",
      "name": "data-test-sf-1"
    },
    "nodeAffinity": {
      "required": {
        "nodeSelectorTerms": [
          {
            "matchExpressions": [
              {
                "key": "kubernetes.io/hostname",
                "operator": "In",
                "values": [
                  "ip-10-0-1-46.ec2.internal"
                ]
              }
            ]
          }
        ]
      }
    }
  }
}
Statefulset.json
{
  "kind": "StatefulSet",
  "apiVersion": "apps/v1beta1",
  "metadata": {
    "name": "test-sf",
    "labels": {
      "state": "test-sf"
    }
  },
  "spec": {
    "replicas": 2,
    "template": {
      "metadata": {
        "labels": {
          "app": "test-sf"
        },
        "annotations": {
          "pod.alpha.kubernetes.io/initialized": "true"
        }
      }
      ...
      ...
    },
    "volumeClaimTemplates": [
      {
        "metadata": {
          "name": "data"
        },
        "spec": {
          "accessModes": [
            "ReadWriteOnce"
          ],
          "storageClassName": "local-storage",
          "resources": {
            "requests": {
              "storage": "10Gi"
            }
          }
        }
      }
    ]
  }
}
Two pods will be created, test-sf-0 and test-sf-1, which in turn will create two PVCs, data-test-sf-0 and data-test-sf-1, bound to PV-0 and PV-1 respectively. Hence test-sf-0 will write to the location specified in PV-0, and test-sf-1 will write to the location specified in PV-1. Hope this helps.
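For reference, here is the same claimRef idea expressed in YAML, matching the style of the question's manifests (a sketch using the same example values as PV-0.json above):

# Sketch: YAML equivalent of PV-0.json. claimRef pre-binds this PV to the
# PVC that the StatefulSet's volumeClaimTemplate will create for pod 0
# (claim name = <template name>-<statefulset name>-<ordinal>).
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-data-vol-0
  labels:
    type: local
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: local-storage
  local:
    path: /prafull/data/pv-0
  claimRef:
    namespace: default
    name: data-test-sf-0
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - ip-10-0-1-46.ec2.internal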