When I force my Pod to run on a new Node, the persistent volume data (filesystem) is left behind. How can I move it along with my Pod?
I am deploying portainer with the following yamls:
---
# Source: portainer/templates/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: portainer
---
# Source: portainer/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: portainer-sa-clusteradmin
  namespace: portainer
  labels:
    app.kubernetes.io/name: portainer
    app.kubernetes.io/instance: portainer
---
# Source: portainer/templates/pvc.yaml
kind: "PersistentVolume"
apiVersion: "v1"
metadata:
  name: "portainer-pv"
  namespace: "portainer"
  labels:
    app.kubernetes.io/name: portainer
    app.kubernetes.io/instance: portainer
spec:
  capacity:
    storage: "10Gi"
  volumeMode: Filesystem
  accessModes:
    - 'ReadWriteOnce' # Only 1 pod can access at the same time
  persistentVolumeReclaimPolicy: "Retain"
  hostPath:
    path: "/opt/kubernetes/volumes/portainer"
---
# Source: portainer/templates/pvc.yaml
kind: "PersistentVolumeClaim"
apiVersion: "v1"
metadata:
  name: portainer-pv-claim
  namespace: portainer
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "10Gi"
---
# Source: portainer/templates/rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: portainer
  labels:
    app.kubernetes.io/name: portainer
    app.kubernetes.io/instance: portainer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    namespace: portainer
    name: portainer-sa-clusteradmin
---
# Source: portainer/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: portainer
  namespace: portainer
  labels:
    io.portainer.kubernetes.application.stack: portainer
    app.kubernetes.io/name: portainer
    app.kubernetes.io/instance: portainer
spec:
  type: NodePort
  ports:
    - port: 9000
      targetPort: 9000
      protocol: TCP
      name: http
      nodePort: 30777
  selector:
    app.kubernetes.io/name: portainer
    app.kubernetes.io/instance: portainer
---
# Source: portainer/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: portainer
  namespace: portainer
  labels:
    io.portainer.kubernetes.application.stack: portainer
    app.kubernetes.io/name: portainer
    app.kubernetes.io/instance: portainer
spec:
  replicas: 1
  strategy:
    type: "Recreate"
  selector:
    matchLabels:
      app.kubernetes.io/name: portainer
      app.kubernetes.io/instance: portainer
  template:
    metadata:
      labels:
        app.kubernetes.io/name: portainer
        app.kubernetes.io/instance: portainer
    spec:
      nodeSelector:
        {}
      serviceAccountName: portainer-sa-clusteradmin
      volumes:
        - name: "data"
          persistentVolumeClaim:
            claimName: portainer-pv-claim
      containers:
        - name: portainer
          image: "portainer/portainer:2.13.1"
          imagePullPolicy: Always
          volumeMounts:
            - name: data
              mountPath: /data # Mount inside the container
          ports:
            - name: http
              containerPort: 9000
              protocol: TCP
          resources:
            {}
On first deployment everything works, but when I tested migrating my Pod to another Node, it just started a fresh Portainer Pod without the retained persistent volume data.
I was expecting the persistent-volume data to move with it to the new Node, but it didn't.
What I did to migrate my pod was:
kubectl cordon {nodeName}
kubectl delete pod {podName} -n portainer
Then my Pod was moved to a new Node, but the persistent volume data got left behind.
How can I make the (filesystem) persistent volume data migrate along with my Pods in case such an event, a Pod migration to a new Node, happens?
Edit:
I also tried, as suggested, to use the 'local' type of PersistentVolume:
kind: "PersistentVolume"
apiVersion: "v1"
metadata:
name: portainer
namespace: portainer
labels:
app.kubernetes.io/name: portainer
app.kubernetes.io/instance: portainer
spec:
capacity:
storage: "10Gi"
volumeMode: Filesystem
accessModes:
- 'ReadWriteOnce' # Only 1 pod can access at the same time
persistentVolumeReclaimPolicy: "Retain"
local:
path: "/opt/kubernetes/volumes/portainer"
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: node-role.kubernetes.io/worker
operator: In
values:
- "true"
But the results were the same.
The PV you created uses the hostPath option.
hostPath - HostPath volume (for single node testing only; WILL NOT WORK in a multi-node cluster; consider using local volume instead)
https://kubernetes.io/docs/concepts/storage/persistent-volumes/
You need to create the PV with a different volume type. The link above lists the available PV types; based on your requirement, you can choose one, for example a network-backed volume as sketched below.
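For instance, a network-backed volume such as NFS is not tied to any single node's disk, so the data stays reachable when the Pod is rescheduled to another Node. A minimal sketch is below; the NFS server address, export path, and PV name are placeholders for your own NFS setup, and the server must be reachable from every node.
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: portainer-pv-nfs
  labels:
    app.kubernetes.io/name: portainer
spec:
  capacity:
    storage: "10Gi"
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany            # NFS supports access from multiple nodes
  persistentVolumeReclaimPolicy: Retain
  nfs:                         # data lives on the NFS server, not on a node
    server: 192.168.1.100      # placeholder: your NFS server address
    path: /exports/portainer   # placeholder: your exported directory
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: portainer-pv-claim
  namespace: portainer
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""         # bind to the pre-created PV, skip dynamic provisioning
  volumeName: portainer-pv-nfs
  resources:
    requests:
      storage: "10Gi"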
The problem is the Access Mode defined for your PersistentVolume and PersistentVolumeClaim objects.
The ReadWriteOnce mode is not limited to a single Pod at a time; it actually allows multiple Pods at the same time, but only one node may mount the volume:
ReadWriteOnce
the volume can be mounted as read-write by a single node. ReadWriteOnce access mode still can allow multiple pods to access the volume when the pods are running on the same node.
Hence the loss of the data when the Pod is recreated on another node.
The access mode needed in this situation is ReadWriteMany:
ReadWriteMany
the volume can be mounted as read-write by many nodes.
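As a minimal sketch of that change, both the PersistentVolume and the PersistentVolumeClaim would declare the mode as shown below; note this assumes the backing storage actually supports multi-node access (for example NFS or another network volume), which hostPath does not.
spec:
  accessModes:
    - ReadWriteMany   # set on both the PV and the PVC; backing storage must support multi-node access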
If your cluster is hosted in Google Kubernetes Engine (GKE), the PersistentVolumeClaim will fail because GKE does not support ReadWriteMany natively. In that case, the option is to use Cloud Filestore as described in this question.
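For reference, a Filestore-backed claim on GKE could look roughly like the sketch below. The standard-rwx StorageClass name assumes the Filestore CSI driver is enabled on the cluster (the actual class name may differ in your setup), and Filestore enforces a much larger minimum capacity than the 10Gi used above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: portainer-pv-claim
  namespace: portainer
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: standard-rwx   # assumption: Filestore CSI driver enabled on the cluster
  resources:
    requests:
      storage: 1Ti                 # Filestore minimum capacity for the standard tier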