When provisioning a Kubernetes cluster with kubeadm init, it creates a cluster that runs the kube-apiserver, etcd, kube-controller-manager and kube-scheduler processes in Docker containers.

Whenever some configuration (e.g. access tokens) for the kube-apiserver changes, I have to restart the related server. While I could usually run systemctl restart kube-apiserver.service on other installations, on this installation I have to kill the Docker container or restart the whole system to restart it.

So is there a better way to restart the kube-apiserver?
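At the moment my workaround looks roughly like this; it's just a sketch, and the grep filter is an assumption (exact container names depend on the kubeadm and Docker versions):

sudo docker ps | grep kube-apiserver    # find the container ID
sudo docker kill <container-id>         # kill it so it gets restarted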
There are cases where the kubelet stops the kube-apiserver container but does not start it again. You can force it to do so with systemctl restart kubelet.service. That should attempt to start kube-apiserver and log an error to journalctl if it fails.
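As a minimal sketch, assuming systemd manages the kubelet (the default on a kubeadm install):

sudo systemctl restart kubelet.service
sudo journalctl -u kubelet -f    # watch for errors while the static Pods come back up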
Shutting down the control plane nodes:

Stop kubelet and kube-proxy by running sudo docker stop kubelet kube-proxy.
Stop kube-scheduler and kube-controller-manager by running sudo docker stop kube-scheduler kube-controller-manager.
Stop kube-apiserver by running sudo docker stop kube-apiserver.
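If you want to verify that those containers are actually stopped afterwards, something like the following should do; the name filter is just an assumption about how the containers are named on your installation:

sudo docker ps --filter name=kube    # lists only running containers matching the filter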
You can delete the kube-apiserver Pod. It's a static Pod (in the case of a kubeadm installation) and will be recreated immediately.
If I recall correctly, the manifest directory for that installation is /etc/kubernetes/manifest, but I will check later and edit this answer. Just doing a touch on the kube-apiserver.json manifest will also recreate the Pod.
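A sketch of both suggestions, assuming kubeadm defaults: the Pod is named kube-apiserver-<node-name>, and on current kubeadm versions the manifest file is /etc/kubernetes/manifests/kube-apiserver.yaml (adjust the path and file name to whatever your installation actually uses, as noted above):

kubectl -n kube-system delete pod kube-apiserver-<node-name>    # the Pod is recreated immediately

sudo touch /etc/kubernetes/manifests/kube-apiserver.yaml    # or touch the static Pod manifest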