I had the jhub release deployed successfully in my cluster. I then changed the config to pull another Docker image, as described in the documentation.
This time, while running the same command as before:
# Suggested values: advanced users of Kubernetes and Helm should feel
# free to use different values.
RELEASE=jhub
NAMESPACE=jhub
helm upgrade --install $RELEASE jupyterhub/jupyterhub \
  --namespace $NAMESPACE \
  --version=0.8.2 \
  --values jupyter-hub-config.yaml
where the jupyter-hub-config.yaml file is:
proxy:
  secretToken: "<a secret token>"
singleuser:
  image:
    # Get the latest image tag at:
    # https://hub.docker.com/r/jupyter/datascience-notebook/tags/
    # Inspect the Dockerfile at:
    # https://github.com/jupyter/docker-stacks/tree/master/datascience-notebook/Dockerfile
    name: jupyter/datascience-notebook
    tag: 177037d09156
I get the following problem:
UPGRADE FAILED
ROLLING BACK
Error: "jhub" has no deployed releases
Error: UPGRADE FAILED: "jhub" has no deployed releases
I then deleted the namespace via kubectl delete ns/jhub and the release via helm delete --purge jhub. I tried the command again, in vain: I got the same error.
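For what it's worth, here is a sketch of how the cleanup can be double-checked before retrying (assuming Helm 2 with Tiller); helm list --all also shows releases stuck in a FAILED state, which is what usually triggers the "has no deployed releases" error:
# List every release, including FAILED/DELETED ones
helm list --all
# Confirm the namespace is really gone
kubectl get ns jhub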
I read a few GitHub issues and found that either the YAML file was invalid or the --force flag worked. However, neither of these applies in my case.
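For completeness, a dry run can be used to rule out an invalid values file without touching the cluster; this is a sketch assuming Helm 2:
helm upgrade --install $RELEASE jupyterhub/jupyterhub \
  --namespace $NAMESPACE \
  --version=0.8.2 \
  --values jupyter-hub-config.yaml \
  --dry-run --debug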
I expect to get this release working and would also like to learn how to edit existing releases.
Note: As you will find in the aforementioned documentation, a PVC is created.
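By "edit existing releases" I mean roughly this workflow (a sketch, assuming Helm 2): pull the values the release was last deployed with, edit them, and upgrade with the modified file:
# Dump the currently deployed values
helm get values jhub > current-values.yaml
# edit current-values.yaml, then:
helm upgrade jhub jupyterhub/jupyterhub \
  --namespace jhub \
  --version=0.8.2 \
  --values current-values.yaml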
I had the same issue when I was trying to update my config.yaml file on GKE. What actually worked for me was to redo these steps:
run curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
helm init --service-account tiller --history-max 100 --wait
[OPTIONAL] Run helm version to verify that your output is similar to the one in the documentation
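If you want to double-check that Tiller came up and matches the client, something like the following should do (a sketch, assuming helm init put Tiller into kube-system with its default labels):
# The Tiller pod should be Running in kube-system
kubectl -n kube-system get pods -l name=tiller
# Client and server versions should match
helm version --short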
Add repo
helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
helm repo update
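To confirm the chart is actually visible after the update, a quick check (assuming Helm 2's search syntax) is:
helm search jupyterhub/jupyterhub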
RELEASE=jhub
NAMESPACE=jhub
helm upgrade $RELEASE jupyterhub/jupyterhub \
  --namespace $NAMESPACE \
  --version=0.9.0 \
  --values config.yaml
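Afterwards you can watch the hub and proxy pods come up; the proxy-public service name below is the one the zero-to-jupyterhub chart uses, so adjust if your chart version names it differently:
kubectl get pods --namespace jhub
kubectl get svc proxy-public --namespace jhub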
After changes in my kubeconfig, the following solution worked for me:
helm init --tiller-namespace=<ns> --upgrade
This works with kubectl 1.10.0 and helm 2.3.0. I guess this upgrades Tiller to a version compatible with the Helm client.
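To verify that the client and the Tiller in that namespace actually agree after the upgrade (a sketch, with <ns> standing in for your Tiller namespace):
helm version --tiller-namespace=<ns>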
Don't forget to set the KUBECONFIG variable before using this command - this step by itself may solve your issue if you didn't do it after changing your kubeconfig.
export KUBECONFIG=<*.kubeconfig>
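As a quick sanity check that kubectl and Helm now point at the intended cluster (a hedged sketch using standard commands):
kubectl config current-context
kubectl cluster-info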
In my case, the cluster.server field had changed in the config, but I left the context.name and current-context fields the same as in the previous config; I'm not sure whether that matters. I faced the same issue on the first try to deploy a new release with Helm, but after the first successful deploy it is enough to change the KUBECONFIG variable. I hope this helps.