I have installed Rancher 2 and created a Kubernetes cluster of internal VMs (no AWS / gcloud).
The cluster is up and running.
I logged into one of the nodes.
1) Installed kubectl and executed kubectl cluster-info. It listed my cluster information correctly.
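(For completeness, step 1 amounted to the commands below; kubectl get nodes is just an extra sanity check added here, not part of the original steps.)
kubectl cluster-info
kubectl get nodes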
2) Installed helm
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh
root@lnmymachine # helm version
Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
3) Configured Helm, referencing the Rancher Helm Init instructions
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller \
--clusterrole cluster-admin \
--serviceaccount=kube-system:tiller
helm init --service-account tiller
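(As an extra check, the following should print tiller if the deployment picked up the service account; this assumes the default tiller-deploy deployment name that helm init creates.)
kubectl -n kube-system get deploy tiller-deploy -o jsonpath='{.spec.template.spec.serviceAccountName}'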
Tried installing Jenkins via helm
root@lnmymachine # helm ls
Error: Unauthorized
root@lnmymachine # helm install --name initial stable/jenkins
Error: the server has asked for the client to provide credentials
I browsed similar issues, and a few of them were due to multiple clusters. I have only one cluster, and kubectl reports all information correctly.
Any idea what's happening?
You must have Kubernetes installed. For the latest release of Helm, we recommend the latest stable release of Kubernetes, which in most cases is the second-latest minor release. You should also have a locally configured copy of kubectl.
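A quick way to confirm those prerequisites is with the standard kubectl commands below; the context name shown will be whatever your cluster setup generated.
kubectl version
kubectl config current-context
kubectl config get-contexts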
To remove the pods, list them with kubectl get pods and then delete them by name. To delete the Helm release, find its name with helm list and remove it with helm delete. You may also need to clean up leftover StatefulSets, since helm delete can leave them behind.
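As a rough sketch of that cleanup, reusing the release name initial from the question above (the pod and StatefulSet names are placeholders to fill in):
kubectl get pods
kubectl delete pod <pod-name>
helm list
helm delete initial          # add --purge to free the release name as well
kubectl get statefulsets
kubectl delete statefulset <statefulset-name>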
Rather than an IT admin applying each file to install via kubectl, a single Helm command can install an entire application, and Helm will pull the required dependencies and apply the manifests.
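As a small illustration of that difference (the YAML file names are made up; the chart is the same one used above):
# without Helm: apply each manifest individually
kubectl apply -f jenkins-deployment.yaml
kubectl apply -f jenkins-service.yaml
# with Helm: one command installs the chart and its dependencies
helm install --name initial stable/jenkins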
It seems there is a mistake while creating the ClusterRoleBinding: instead of --clusterrole cluster-admin, you should have --clusterrole=cluster-admin.
You can check whether this is the case by verifying that the ServiceAccount and ClusterRoleBinding were created correctly:
kubectl describe -n kube-system sa tiller
kubectl describe clusterrolebinding tiller
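If the binding turns out to be wrong or missing, a minimal sketch of recreating it with the same names as above, and then re-running helm init against the service account, would be:
kubectl delete clusterrolebinding tiller
kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller --upgrade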
It looks like this has already been fixed on the Rancher Helm Init page.