I am trying to deploy my Helm chart to a sample Kubernetes cluster. I created a sample Helm chart, added the Docker image reference, and deployed it from the terminal with helm install <my-chartname>. The microservice is accessible without any problem.
After that I created a Jenkins pipeline job and added a single stage containing the deployment step, like this:
pipeline {
    agent any
    stages {
        stage('helmchartinstall') {
            steps {
                sh 'helm install spacestudychart'
            }
        }
    }
}
And I am getting an error like the following:
[Pipeline] { (helmchartinstall)
[Pipeline] sh
+ helm install spacestudychart
Error: the server could not find the requested resource (get pods)
The same command works when I run it from the terminal.
Update
To upgrade Tiller to the latest version, I ran helm init --upgrade on the terminal, but the error still remains.
Output of "helm version" is like the following,
Client: &version.Version{SemVer:"v2.14.0", GitCommit:"05811b84a3f93603dd6c2fcfe57944dfa7ab7fd0", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.0", GitCommit:"05811b84a3f93603dd6c2fcfe57944dfa7ab7fd0", GitTreeState:"clean"}
Output of "kubectl version --short" is like the following,
Client Version: v1.14.1
Server Version: v1.13.5
When I run command "kubectl --v=5 get pods; helm install spacestudychart" , I am getting the console output like the following,
+ kubectl --v=5 get pods
I0604 07:44:46.035459 2620 cached_discovery.go:121] skipped caching discovery info due to yaml: line 10: mapping values are not allowed in this context
I0604 07:44:46.152770 2620 cached_discovery.go:121] skipped caching discovery info due to yaml: line 10: mapping values are not allowed in this context
I0604 07:44:46.152819 2620 shortcut.go:89] Error loading discovery information: yaml: line 10: mapping values are not allowed in this context
I0604 07:44:46.283598 2620 cached_discovery.go:121] skipped caching discovery info due to yaml: line 10: mapping values are not allowed in this context
I0604 07:44:46.374088 2620 cached_discovery.go:121] skipped caching discovery info due to yaml: line 10: mapping values are not allowed in this context
I0604 07:44:46.467938 2620 cached_discovery.go:121] skipped caching discovery info due to yaml: line 10: mapping values are not allowed in this context
F0604 07:44:46.468122 2620 helpers.go:114] error: yaml: line 10: mapping values are not allowed in this context
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 255
Finished: FAILURE
Do I need to upgrade the kubectl version? What exactly is the problem when running from Jenkins?
This is 100% working; I had this problem before.
First, create the jenkins user. Next, copy the config
to /home/jenkins/.kube/:
cp $HOME/.kube/config /home/jenkins/.kube/
or
cp ~/.kube/config /home/jenkins/.kube/
And after that run
chmod 777 /home/jenkins/.kube/config
Your kubectl and helm commands need your Kubernetes config file. It is like a key or password for your Kubernetes cluster, so you have to give the kubeconfig to your Jenkins user; after that, Jenkins can run Kubernetes commands.
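Putting the steps above together, a minimal sketch of the whole sequence might look like this (the jenkins home directory /home/jenkins and the chown-based permissions are assumptions; adjust them to your setup):
# assuming the jenkins user's home directory is /home/jenkins
mkdir -p /home/jenkins/.kube                     # create the .kube directory if it does not exist
cp $HOME/.kube/config /home/jenkins/.kube/       # copy the kubeconfig that works on the terminal
chown -R jenkins:jenkins /home/jenkins/.kube     # let the jenkins user own its config (less permissive than chmod 777)
sudo -u jenkins kubectl get pods                 # verify the jenkins user can now reach the cluster
Owning the file as the jenkins user is a tighter alternative to chmod 777; either way the goal is that the Jenkins process can read the kubeconfig.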
This is a very good tutorial that helped me solve it:
tutorial
UPDATE 1
You should have a jenkins user. To add one, create the jenkins user on your Ubuntu or CentOS (or other) server:
adduser jenkins
This is a good link about adduser: Adding user
UPDATE 2
You should install kubectl on the server that you use as the Jenkins server so that the kubectl command works, and after that copy the ~/.kube/config from your Kubernetes cluster to the Jenkins server on which you just installed kubectl.
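A hedged sketch of those two steps, assuming a Linux Jenkins server and a cluster master reachable as k8s-master (the host name, user, and kubectl version are assumptions):
# install kubectl on the Jenkins server (pick a version close to your cluster, e.g. v1.14.1)
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.14.1/bin/linux/amd64/kubectl
chmod +x kubectl && sudo mv kubectl /usr/local/bin/
# copy the kubeconfig from the Kubernetes master to the jenkins user's home
scp user@k8s-master:~/.kube/config /home/jenkins/.kube/config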
As per the kubectl version skew policy:
kubectl is supported within one minor version (older or newer) of kube-apiserver.
So there is no problem using a v1.14 client with a v1.13 server.
The error that you described usually happens when a previous release with the same name already exists. You can check this with helm ls --all. If that is the case, you should use helm upgrade instead.
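For example (the chart path ./spacestudychart is an assumption about where your chart directory lives):
helm ls --all                                    # list all releases, including failed or deleted ones
helm upgrade spacestudychart ./spacestudychart   # upgrade the existing release instead of installing a new one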
There is a great chance that the existing release is in a FAILED state. If so, even helm upgrade may fail. You can delete the release with helm delete spacestudychart --purge, and try to install it again with helm install.
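A sketch of that recovery path, again assuming the chart sits in a local ./spacestudychart directory:
helm ls --all | grep spacestudychart                     # confirm the release and its status
helm delete spacestudychart --purge                      # remove the release and its history completely
helm install ./spacestudychart --name spacestudychart    # reinstall with an explicit release name (Helm v2 syntax)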
Helm's Tiller stores release info as ConfigMaps, so another cause of the problem may be invalid data for a "broken" release. If you have this problem, your scenario should look like this:
$ helm ls --all
$ kubectl get cm --all-namespaces -l OWNER=TILLER
NAMESPACE NAME DATA AGE
kube-system spacestudychart.v1 1 22h
In that case, delete the ConfigMap and try to install the release again:
$ kubectl delete cm spacestudychart.v1 -n kube-system