I am using Helm to deploy charts on my Kubernetes cluster, and since yesterday I can't deploy a new chart or upgrade an existing one.
Indeed, every time I use Helm I get an error message telling me that it is not possible to install or upgrade resources.
If I run helm install --name foo . -f values.yaml --namespace foo-namespace
I get this output:
Error: release foo failed: the server could not find the requested resource
If I run helm upgrade --install foo . -f values.yaml --namespace foo-namespace
or helm upgrade foo . -f values.yaml --namespace foo-namespace
I get this error:
Error: UPGRADE FAILED: "foo" has no deployed releases
I don't really understand why.
This is my helm version:
Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
On my Kubernetes cluster I have Tiller deployed with the same version. When I run kubectl describe pods tiller-deploy-84b... -n kube-system:
Name: tiller-deploy-84b8...
Namespace: kube-system
Priority: 0
PriorityClassName: <none>
Node: k8s-worker-1/167.114.249.216
Start Time: Tue, 26 Feb 2019 10:50:21 +0100
Labels: app=helm
name=tiller
pod-template-hash=84b...
Annotations: <none>
Status: Running
IP: <IP_NUMBER>
Controlled By: ReplicaSet/tiller-deploy-84b8...
Containers:
tiller:
Container ID: docker://0302f9957d5d83db22...
Image: gcr.io/kubernetes-helm/tiller:v2.12.3
Image ID: docker-pullable://gcr.io/kubernetes-helm/tiller@sha256:cab750b402d24d...
Ports: 44134/TCP, 44135/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Tue, 26 Feb 2019 10:50:28 +0100
Ready: True
Restart Count: 0
Liveness: http-get http://:44135/liveness delay=1s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:44135/readiness delay=1s timeout=1s period=10s #success=1 #failure=3
Environment:
TILLER_NAMESPACE: kube-system
TILLER_HISTORY_MAX: 0
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from helm-token-... (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
helm-token-...:
Type: Secret (a volume populated by a Secret)
SecretName: helm-token-...
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 26m default-scheduler Successfully assigned kube-system/tiller-deploy-84b86cbc59-kxjqv to worker-1
Normal Pulling 26m kubelet, k8s-worker-1 pulling image "gcr.io/kubernetes-helm/tiller:v2.12.3"
Normal Pulled 26m kubelet, k8s-worker-1 Successfully pulled image "gcr.io/kubernetes-helm/tiller:v2.12.3"
Normal Created 26m kubelet, k8s-worker-1 Created container
Normal Started 26m kubelet, k8s-worker-1 Started container
Has anyone faced the same issue?
Update:
This is the folder structure of my chart, named foo:
> templates/
> deployment.yaml
> ingress.yaml
> service.yaml
> .helmignore
> Chart.yaml
> values.yaml
I have already tried to delete the failed release using the delete command helm del --purge foo, but the same errors occur.
To be more precise, the chart foo is in fact a custom chart using my own private registry. The imagePullSecrets are set up as usual.
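For reference, the registry pull secret is typically created in the release namespace along these lines (the secret name regcred, the registry URL, and the credentials are placeholders, not taken from the question):
$ kubectl create secret docker-registry regcred \
    --docker-server=registry.example.com \
    --docker-username=<user> \
    --docker-password=<password> \
    --namespace foo-namespace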
I have also run these two commands:
helm upgrade foo . -f values.yaml --namespace foo-namespace --force
helm upgrade --install foo . -f values.yaml --namespace foo-namespace --force
and I still get an error:
UPGRADE FAILED
ROLLING BACK
Error: failed to create resource: the server could not find the requested resource
Error: UPGRADE FAILED: failed to create resource: the server could not find the requested resource
Notice that foo-namespace already exists, so the error does not come from the namespace name or the namespace itself. Indeed, if I run helm list, I can see that the foo release is in a FAILED status.
Correcting it is quite easy: you just need to update the latest secret related to your release. It has a label called status; change its value to deployed, then rerun your helm upgrade --install command and it will work!
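Since the cluster in the question runs Helm 2 with Tiller, where releases are stored as ConfigMaps by default (see the answer below), the equivalent of this fix is patching the STATUS label on the release's latest ConfigMap. A rough sketch, with foo.v1 as an example record name:
$ kubectl get cm -n kube-system -l NAME=foo,OWNER=TILLER --show-labels
$ kubectl label cm foo.v1 STATUS=DEPLOYED --overwrite -n kube-system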
If you need to uninstall a deployed release, run the delete command from the Helm command line. It removes all the Kubernetes resources associated with the chart and deletes the release.
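For Helm 2, for example (the --purge flag also removes the release history kept by Tiller):
$ helm delete --purge foo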
To perform a release upgrade using the CLI, run helm upgrade <release name> <chart directory> -f my-values.yaml; this applies the configuration specified in the customized my-values.yaml file. After a successful upgrade, Helm prints a confirmation message.
After helm init, you should be able to run kubectl get pods --namespace kube-system and see Tiller running. Once Tiller is installed, running helm version should show you both the client and server versions. (If it shows only the client version, Helm cannot yet connect to the server.)
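For example, to run that check (the label selector matches the Tiller pod labels shown in the describe output earlier; output omitted):
$ kubectl get pods --namespace kube-system -l app=helm,name=tiller
$ helm version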
Tiller stores all releases as ConfigMaps in Tiller's namespace (kube-system in your case). Try to find the broken release and delete its ConfigMap using these commands:
$ kubectl get cm --all-namespaces -l OWNER=TILLER
NAMESPACE NAME DATA AGE
kube-system nginx-ingress.v1 1 22h
$ kubectl delete cm nginx-ingress.v1 -n kube-system
Next, delete all of the release's objects (deployments, services, ingresses, etc.) manually and reinstall the release using Helm again.
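For the foo release from the question, that could look roughly like this; the release=foo selector assumes the chart's templates set the conventional release label, so adjust it to whatever labels your templates actually use:
$ kubectl delete deployment,service,ingress -l release=foo -n foo-namespace
$ helm install --name foo . -f values.yaml --namespace foo-namespace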
If that didn't help, you may try downloading a newer release of Helm (v2.14.3 at the moment) and updating/reinstalling Tiller.
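After replacing the Helm client binary, Tiller can be upgraded in place and the matching versions verified:
$ helm init --upgrade
$ helm version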