How can kube-apiserver be restarted? [closed]

I restarted my system today. Since then, I can no longer reach the Kubernetes GUI (dashboard) from my web browser.

When I run systemctl status kube-apiserver.service, it gives the output shown below:

kube-apiserver.service
  Loaded: not-found (Reason: No such file or directory)
  Active: inactive (dead)

How can the kube-apiserver be restarted?

Asked Aug 03 '18 by Deepak Nayak


People also ask

How do you stop a kube-apiserver pod?

Stop kube-scheduler and kube-controller-manager by running sudo docker stop kube-scheduler kube-controller-manager. Stop kube-apiserver by running sudo docker stop kube-apiserver. Stop Docker by running sudo service docker stop or sudo systemctl stop docker. Finally, shut the system down with sudo shutdown now.
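As one sequence (a sketch that assumes a Docker-based container runtime, as the commands above imply):

sudo docker stop kube-scheduler kube-controller-manager   # stop scheduler and controller-manager first
sudo docker stop kube-apiserver                           # then stop the API server
sudo systemctl stop docker                                # or: sudo service docker stop
sudo shutdown now                                         # finally shut the node down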

How do I know if kube-apiserver is running?

Usually the apiserver is deployed as a static pod. In this case you should see it listed when you run kubectl get po -n kube-system .
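A quick check (the component=kube-apiserver label is the one kubeadm sets on the static pod, as shown in the manifest further down; a plain grep works too):

kubectl get po -n kube-system -l component=kube-apiserver
# or simply:
kubectl get po -n kube-system | grep kube-apiserver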


2 Answers

Did you download and install the Kubernetes Controller Binaries directly?

1) If so, check whether the kube-apiserver.service systemd unit file exists:

cat /etc/systemd/system/kube-apiserver.service
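If the unit file exists, the server can be restarted through systemd (a sketch for a binary/systemd installation; this does not apply to kubeadm clusters):

sudo systemctl daemon-reload                      # pick up any changes to the unit file
sudo systemctl restart kube-apiserver.service     # restart the API server
sudo systemctl status kube-apiserver.service      # confirm it is active (running)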

2) If not, you probably installed Kubernetes with kubeadm.
With this setup the kube-apiserver runs as a static pod on the master node:

kubectl get pods -n kube-system
NAME                                       READY   STATUS    
coredns-f9fd979d6-jsn6w                    1/1     Running  ..
coredns-f9fd979d6-tv5j6                    1/1     Running  ..
etcd-master-k8s                            1/1     Running  ..
kube-apiserver-master-k8s                  1/1     Running  .. #<--- Here
kube-controller-manager-master-k8s         1/1     Running  ..
kube-proxy-5kzbc                           1/1     Running  ..
kube-scheduler-master-k8s                  1/1     Running  ..

And not as a systemd service.

So, because you can't restart pods in Kubernetes, you'll have to delete it:

kubectl delete pod/kube-apiserver-master-k8s -n kube-system

And a new pod will be created immediately, because the kubelet recreates the static pod from its manifest in /etc/kubernetes/manifests.
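To confirm the kubelet brought it back (the pod name is taken from the listing above; yours will match your node name):

kubectl get pod kube-apiserver-master-k8s -n kube-system -w   # watch until STATUS is Running again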


(*) When you run kubeadm init you should see the creation of the manifests for the control plane static Pods:

.
. 
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
.
.

The corresponding YAML manifests:

ubuntu@master-k8s:/etc/kubernetes/manifests$ ls -la
total 24
drwxr-xr-x 2 root root 4096 Oct 14 00:13 .
drwxr-xr-x 4 root root 4096 Sep 29 02:30 ..
-rw------- 1 root root 2099 Sep 29 02:30 etcd.yaml
-rw------- 1 root root 3863 Oct 14 00:13 kube-apiserver.yaml <----- Here
-rw------- 1 root root 3496 Sep 29 02:30 kube-controller-manager.yaml
-rw------- 1 root root 1384 Sep 29 02:30 kube-scheduler.yaml
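The kubelet watches this folder because of its staticPodPath setting (the path below is the kubeadm default; check the kubelet config on your own node):

grep staticPodPath /var/lib/kubelet/config.yaml
staticPodPath: /etc/kubernetes/manifests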

And the kube-apiserver spec:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 10.100.102.5:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=10.100.102.5
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    .
    .
    .
Answered Sep 20 '22 by RtmY


Move the kube-apiserver manifest file from the /etc/kubernetes/manifests folder to a temporary folder. The advantage of this method is that the kube-apiserver stays stopped for as long as the file is kept out of the manifests folder.

vagrant@master01:~$ ll /etc/kubernetes/manifests/
total 16
-rw------- 1 root root 3315 May 12 23:24 kube-controller-manager.yaml
-rw------- 1 root root 1384 May 12 23:24 kube-scheduler.yaml
-rw------- 1 root root 2157 May 12 23:24 etcd.yaml
-rw------- 1 root root 3792 May 20 00:08 kube-apiserver.yaml
vagrant@master01:~$ sudo mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/
vagrant@master01:~$ 
vagrant@master01:~$ ll /etc/kubernetes/manifests/
total 12
-rw------- 1 root root 3315 May 12 23:24 kube-controller-manager.yaml
-rw------- 1 root root 1384 May 12 23:24 kube-scheduler.yaml
-rw------- 1 root root 2157 May 12 23:24 etcd.yaml

The API server is down now (k is an alias for kubectl here):

vagrant@master01:~$ k get pods -n kube-system
The connection to the server 10.0.0.2:6443 was refused - did you specify the right host or port?
vagrant@master01:~$ 

vagrant@master01:~$ sudo mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/
vagrant@master01:~$ 
vagrant@master01:~$ ll /etc/kubernetes/manifests/
total 16
-rw------- 1 root root 3315 May 12 23:24 kube-controller-manager.yaml
-rw------- 1 root root 1384 May 12 23:24 kube-scheduler.yaml
-rw------- 1 root root 2157 May 12 23:24 etcd.yaml
-rw------- 1 root root 3792 May 20 00:08 kube-apiserver.yaml

The API server is up again (it may briefly show 0/1 READY, as in the listing below, until its readiness check passes):

vagrant@master01:~$ k get pods -n kube-system
NAME                               READY   STATUS    RESTARTS   AGE
coredns-558bd4d5db-269lt           1/1     Running   5          8d
coredns-558bd4d5db-967d8           1/1     Running   5          8d
etcd-master01                      1/1     Running   6          8d
kube-apiserver-master01            0/1     Running   2          24h
kube-controller-manager-master01   1/1     Running   8          8d
kube-proxy-q8mkb                   1/1     Running   5          8d
kube-proxy-x6trg                   1/1     Running   6          8d
kube-proxy-xxph9                   1/1     Running   8          8d
kube-scheduler-master01            1/1     Running   8          8d
weave-net-rh2gb                    2/2     Running   18         8d
weave-net-s2cg9                    2/2     Running   14         8d
weave-net-wksk2                    2/2     Running   11         8d
vagrant@master01:~$ 
Answered Sep 24 '22 by Amit Raj