I am attempting to install PostgreSQL via Helm using the latest stable chart, but the persistent volume isn't being set up properly. I am installing it in Minikube, and for some reason the hostPath mount does not seem to work.
I get the following error on the deployment, the pod, and the replica set:
PersistentVolumeClaim is not bound: "postgres-postgresql"
Error: lstat /tmp/hostpath-provisioner/pvc-c713429d-e2a3-11e7-9ca9-080027231d54: no such file or directory
Error syncing pod
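For what it's worth, the directory from the lstat error can be checked from inside the Minikube VM like this (assuming the default hostpath-provisioner location):

# List the provisioner's directory inside the Minikube VM
minikube ssh -- ls -la /tmp/hostpath-provisioner/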
When I look at the persistent volume, it appears to be bound properly. In case it helps, here is my persistent volume (JSON output):
{
  "kind": "PersistentVolume",
  "apiVersion": "v1",
  "metadata": {
    "name": "pvc-c713429d-e2a3-11e7-9ca9-080027231d54",
    "selfLink": "/api/v1/persistentvolumes/pvc-c713429d-e2a3-11e7-9ca9-080027231d54",
    "uid": "c71850e1-e2a3-11e7-9ca9-080027231d54",
    "resourceVersion": "396568",
    "creationTimestamp": "2017-12-16T20:57:50Z",
    "annotations": {
      "hostPathProvisionerIdentity": "8979806c-dfba-11e7-862f-080027231d54",
      "pv.kubernetes.io/provisioned-by": "k8s.io/minikube-hostpath"
    }
  },
  "spec": {
    "capacity": {
      "storage": "8Gi"
    },
    "hostPath": {
      "path": "/tmp/hostpath-provisioner/pvc-c713429d-e2a3-11e7-9ca9-080027231d54",
      "type": ""
    },
    "accessModes": [
      "ReadWriteOnce"
    ],
    "claimRef": {
      "kind": "PersistentVolumeClaim",
      "namespace": "default",
      "name": "postgres-postgresql",
      "uid": "c713429d-e2a3-11e7-9ca9-080027231d54",
      "apiVersion": "v1",
      "resourceVersion": "396550"
    },
    "persistentVolumeReclaimPolicy": "Delete",
    "storageClassName": "standard"
  },
  "status": {
    "phase": "Bound"
  }
}
And the persistent volume claim (JSON output):
{
  "kind": "PersistentVolumeClaim",
  "apiVersion": "v1",
  "metadata": {
    "name": "postgres-postgresql",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/persistentvolumeclaims/postgres-postgresql",
    "uid": "c713429d-e2a3-11e7-9ca9-080027231d54",
    "resourceVersion": "396588",
    "creationTimestamp": "2017-12-16T20:57:50Z",
    "labels": {
      "app": "postgres-postgresql",
      "chart": "postgresql-0.8.3",
      "heritage": "Tiller",
      "release": "postgres"
    },
    "annotations": {
      "control-plane.alpha.kubernetes.io/leader": "{\"holderIdentity\":\"897980a2-dfba-11e7-862f-080027231d54\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2017-12-16T20:57:50Z\",\"renewTime\":\"2017-12-16T20:57:52Z\",\"leaderTransitions\":0}",
      "pv.kubernetes.io/bind-completed": "yes",
      "pv.kubernetes.io/bound-by-controller": "yes",
      "volume.beta.kubernetes.io/storage-provisioner": "k8s.io/minikube-hostpath"
    }
  },
  "spec": {
    "accessModes": [
      "ReadWriteOnce"
    ],
    "resources": {
      "requests": {
        "storage": "8Gi"
      }
    },
    "volumeName": "pvc-c713429d-e2a3-11e7-9ca9-080027231d54",
    "storageClassName": "standard"
  },
  "status": {
    "phase": "Bound",
    "accessModes": [
      "ReadWriteOnce"
    ],
    "capacity": {
      "storage": "8Gi"
    }
  }
}
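(Both dumps above came from kubectl, along these lines:)

# Dump the PV and PVC as JSON
kubectl get pv pvc-c713429d-e2a3-11e7-9ca9-080027231d54 -o json
kubectl get pvc postgres-postgresql -o json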
Any assistance would be appreciated.
You may be running into this issue: https://github.com/kubernetes/minikube/issues/2256
The problem is a bug in the hostPath volume provisioner: it hits an error when the 'subPath' field is present in the Deployment resource (even if the field has an empty value).
Here's a workaround that worked for me - unpack the postgresql chart and comment out the following line in deployment.yaml:
# subPath: {{ .Values.persistence.subPath }}
Then redeploy the modified chart. If you're reliant on the 'subPath' field, this workaround won't work for you.
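Put together, the steps look roughly like this (a sketch assuming Helm 2 and the stable chart repo; 'postgres' is the release name from the question):

# Fetch and unpack the chart locally
helm fetch stable/postgresql --untar

# Edit postgresql/templates/deployment.yaml and comment out the subPath line:
#     # subPath: {{ .Values.persistence.subPath }}

# Remove the broken release, then install the patched chart
helm delete --purge postgres
helm install --name postgres ./postgresql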
Note: This issue is also present on Kubernetes on Docker-for-Mac (which is where I've encountered it).