How can you reuse dynamically provisioned PersistentVolumes with Helm on GKE?

I am trying to deploy a Helm chart that uses a PersistentVolumeClaim and a StorageClass to dynamically provision the required storage. This works as expected, but I can't find any configuration that allows a workflow like

helm delete xxx

# Make some changes and repackage chart

helm install --replace xxx

I don't want to run the release constantly, and I want to reuse the storage in deployments in the future.

Setting the storage class to reclaimPolicy: Retain keeps the disks, but Helm will delete the PVC and orphan the disks. Annotating the PVCs so that Helm does not delete them fixes this problem, but then running install causes the error

Error: release xxx failed: persistentvolumeclaims "xxx-xxx-storage" already exists
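
For reference, the storage class I am describing looks roughly like this (a sketch; the class name and disk type are illustrative, not the exact chart contents):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: retained-storage        # illustrative name
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
reclaimPolicy: Retain           # keeps the underlying GCE disk after the PVC is deleted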

I think I have misunderstood something fundamental to managing releases in helm. Perhaps the volumes should not be created in the chart at all.

asked Mar 18 '18 by user3125280

People also ask

How do you reclaim a persistent volume?

Reclaiming a persistent volume manually:

  1. Delete the PV. The associated storage asset in the external infrastructure, such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume, still exists after the PV is deleted.
  2. Clean up the data on the associated storage asset.
  3. Delete the associated storage asset.

How are PersistentVolumes provisioned?

PersistentVolume resources can be provisioned dynamically through PersistentVolumeClaims, or they can be explicitly created by a cluster administrator. Caution: Using the fsGroup setting with large PersistentVolumes can cause mounts to fail. For more information, see Troubleshooting Volume mount failures.

Can multiple pods use the same PVC?

Once a PV is bound to a PVC, that PV is essentially tied to the PVC's project and cannot be bound to by another PVC. There is a one-to-one mapping of PVs and PVCs. However, multiple pods in the same project can use the same PVC.
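
As a minimal illustration of that last point (pod names, image, and claim name are placeholders), two pods in the same namespace can mount the same claim:

apiVersion: v1
kind: Pod
metadata:
  name: reader-a                # placeholder pod name
spec:
  containers:
    - name: app
      image: nginx              # placeholder image
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: shared-claim # both pods reference the same PVC
---
apiVersion: v1
kind: Pod
metadata:
  name: reader-b                # placeholder pod name
spec:
  containers:
    - name: app
      image: nginx              # placeholder image
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: shared-claim # same claim as reader-a

With a ReadWriteOnce volume this only works if both pods land on the same node; ReadWriteMany is needed for pods spread across nodes.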


1 Answer

A PersistentVolumeClaim just creates a mapping between your actual PersistentVolume and your pod.

Using "helm.sh/resource-policy": keep annotation for PV is not the best idea, because of that remark in a documentation:

The annotation "helm.sh/resource-policy": keep instructs Tiller to skip this resource during a helm delete operation. However, this resource becomes orphaned. Helm will no longer manage it in any way. This can lead to problems if using helm install --replace on a release that has already been deleted, but has kept resources.
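
For clarity, this is roughly what the annotation discussed above looks like on a PVC (a sketch; the claim name is just a placeholder):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-storage               # placeholder name
  annotations:
    "helm.sh/resource-policy": keep   # Helm/Tiller skips this resource on helm delete
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi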

If you create the PV manually, then when you delete your release Helm will remove the PVC, the PV will be marked as "Available", and on the next deployment it will be reused. So you don't actually need to keep your PVC in the cluster to keep your data. But to make it always use the same PV, you need to use labels and selectors.

To keep and reuse volumes, you can:

  1. Create a PersistentVolume with a label, for example for_app=my-app, and set the "Retain" reclaim policy for that volume like this (PersistentVolumes are cluster-scoped, so no namespace is needed; the gcePersistentDisk source below is an assumed example, since a PV also needs a volume source pointing at an existing disk):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: myappvolume
  labels:
    for_app: my-app
spec:
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:        # assumed example: reference an existing GCE persistent disk
    pdName: my-app-disk     # placeholder disk name
    fsType: ext4
  2. Modify your PersistentVolumeClaim configuration in Helm. You need to add a selector so that it only uses PersistentVolumes with the label for_app=my-app (and, assuming a default StorageClass exists on GKE, an empty storageClassName so the claim is not dynamically provisioned).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myappvolumeclaim
  namespace: my-app
spec:
  storageClassName: ""        # empty class disables dynamic provisioning; the claim binds to the pre-created PV
  selector:
    matchLabels:
      for_app: my-app
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

So now your application will use the same volume each time it starts.
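
For completeness, the workload then refers to that claim by name, along these lines (a sketch; the Deployment name, image, and mount path are assumptions):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx                      # placeholder image
          volumeMounts:
            - name: data
              mountPath: /data              # assumed mount path
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: myappvolumeclaim     # the claim defined above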

But please keep in mind that you may need to use selectors for other apps in the same namespace to prevent them from using your PV.

answered Oct 25 '22 by Anton Kostenko