I am trying to deploy a Helm chart which uses PersistentVolumeClaim and StorageClass to dynamically provision the required storage. This works as expected, but I can't find any configuration which allows a workflow like:
helm delete xxx
# Make some changes and repackage chart
helm install --replace xxx
I don't want to run the release constantly, and I want to reuse the storage in future deployments.
Setting the storage class to reclaimPolicy: Retain keeps the disks, but Helm will delete the PVCs and orphan them. Annotating the PVCs so that Helm does not delete them fixes this problem, but then running install causes the error:
Error: release xxx failed: persistentvolumeclaims "xxx-xxx-storage" already exists
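For reference, this is roughly what the StorageClass and the annotated PVC look like (the names, provisioner and size are placeholders, not the exact chart contents):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: xxx-storage-class          # placeholder name
provisioner: kubernetes.io/gce-pd  # placeholder; whatever provisioner the cluster uses
reclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: xxx-xxx-storage
  annotations:
    "helm.sh/resource-policy": keep
spec:
  storageClassName: xxx-storage-class
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi                 # placeholder size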
I think I have misunderstood something fundamental to managing releases in helm. Perhaps the volumes should not be created in the chart at all.
Reclaiming a persistent volume manually: delete the PV (the associated storage asset in the external infrastructure, such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume, still exists after the PV is deleted), clean up the data on the associated storage asset, and then delete the associated storage asset.
PersistentVolume resources can be provisioned dynamically through PersistentVolumeClaims, or they can be explicitly created by a cluster administrator. Caution: using the fsGroup setting with large PersistentVolumes can cause mounts to fail.
Once a PV is bound to a PVC, that PV is essentially tied to the PVC's project and cannot be bound to by another PVC. There is a one-to-one mapping of PVs and PVCs. However, multiple pods in the same project can use the same PVC.
A PersistentVolumeClaim just creates a mapping between your actual PersistentVolume and your pod.
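As an illustration (the names below are placeholders, not from the chart in the question), a pod consumes that mapping by referencing the claim by name:

apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod
spec:
  containers:
  - name: my-app
    image: nginx:stable            # placeholder image
    volumeMounts:
    - name: data
      mountPath: /data             # where the volume appears in the container
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-app-claim      # placeholder PVC name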
Using "helm.sh/resource-policy": keep
annotation for PV is not the best idea, because of that remark in a documentation:
The annotation "helm.sh/resource-policy": keep instructs Tiller to skip this resource during a helm delete operation. However, this resource becomes orphaned. Helm will no longer manage it in any way. This can lead to problems if using helm install --replace on a release that has already been deleted, but has kept resources.
If you create the PV manually, then after you delete your release Helm will remove only the PVC; the PV will be marked as "Available" and will be reused on the next deployment. You don't actually need to keep your PVC in the cluster to keep your data. But to make it always use the same PV, you need to use labels and selectors.
To keep and reuse volumes you can:
1. Create a PersistentVolume with a label, e.g. for_app=my-app, and set the "Retain" reclaim policy for that volume like this:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: myappvolume
  labels:
    for_app: my-app
spec:
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  # plus a volume source describing the existing disk,
  # e.g. a gcePersistentDisk, awsElasticBlockStore or csi block
2. Create a PersistentVolumeClaim that selects this volume by the label for_app=my-app:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myappvolumeclaim
  namespace: my-app
spec:
  selector:
    matchLabels:
      for_app: my-app
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
So now your application will use the same volume each time it starts.
But keep in mind that you may need to use selectors on the other apps in the same namespace as well, to prevent them from binding to your PV.
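Putting it together, and assuming the PersistentVolume manifest above is saved as my-app-pv.yaml (the file name is just an example), the workflow from the question becomes roughly:

# one-off: create the labelled PV outside of Helm
kubectl apply -f my-app-pv.yaml

# normal release lifecycle; the PVC in the chart selects the PV by its label
helm delete xxx
# make some changes and repackage chart
helm install --replace xxx   # the new PVC should bind to the same retained PV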