
My worker node status is Ready,SchedulingDisabled

Tags:

kubernetes

I have a Kubernetes cluster and everything was working fine. After some time I drained my worker nodes, reset them, and joined them to the master again, but:

# kubectl get nodes
NAME      STATUS                     ROLES    AGE    VERSION
ubuntu    Ready                      master   159m   v1.14.0
ubuntu1   Ready,SchedulingDisabled   <none>   125m   v1.14.0
ubuntu2   Ready,SchedulingDisabled   <none>   96m    v1.14.0

What should I do?

asked Mar 30 '19 by yasin lachini


People also ask

Why is a worker node not ready?

Common reasons for a Kubernetes node not ready error include lack of resources on the node, a problem with the kubelet (the agent enabling the Kubernetes control plane to access and control the node), or an error related to kube-proxy (the networking agent on the node).

How do I know if node pods are running on worker?

You can also use kubectl describe node <node-name> and check the Non-terminated Pods section to see which pods are currently running on that particular node.
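As a quick illustration of checking node status programmatically, here is a minimal Python sketch that parses the text output of kubectl get nodes and lists the cordoned nodes. The sample output is taken from the question; in practice you would capture it with subprocess instead of hard-coding it.

```python
# Sample output copied from the question's `kubectl get nodes` run.
sample = """\
NAME      STATUS                     ROLES    AGE    VERSION
ubuntu    Ready                      master   159m   v1.14.0
ubuntu1   Ready,SchedulingDisabled   <none>   125m   v1.14.0
ubuntu2   Ready,SchedulingDisabled   <none>   96m    v1.14.0
"""

def cordoned_nodes(get_nodes_output: str) -> list:
    """Return names of nodes whose STATUS column includes SchedulingDisabled."""
    nodes = []
    for line in get_nodes_output.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) >= 2 and "SchedulingDisabled" in fields[1]:
            nodes.append(fields[0])
    return nodes

print(cordoned_nodes(sample))  # -> ['ubuntu1', 'ubuntu2']
```

This matches the situation in the question: both worker nodes are Ready but cordoned, so the scheduler will not place new pods on them.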

What are the status of running node in Kubernetes?

The kubectl command shows the detailed status of the Kubernetes pods deployed to run an application (PowerAI Vision, in this example). When the application is running correctly, each pod should show a value of 1/1 in the READY column and a value of Running in the STATUS column.


2 Answers

To prevent a node from scheduling new pods use:

kubectl cordon <node-name> 

This causes the node to show the status Ready,SchedulingDisabled.

To tell it to resume scheduling, use:

kubectl uncordon <node-name> 

More information can be found in the Kubernetes documentation on safely draining a node and on manual node administration.
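Putting the answer together, the full cordon/uncordon cycle looks like the sketch below. It requires a live cluster; the node name ubuntu1 is taken from the question's output.

```shell
# Stop scheduling new pods onto the node (existing pods keep running).
kubectl cordon ubuntu1

# The node now reports Ready,SchedulingDisabled.
kubectl get nodes ubuntu1

# Re-enable scheduling; the status returns to Ready.
kubectl uncordon ubuntu1
```

Note that kubectl drain cordons the node as well, which is why a node that was drained and rejoined without an uncordon stays in Ready,SchedulingDisabled.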

answered Oct 19 '22 by Amityo


I fixed it using:

kubectl uncordon <node-name> 
answered Oct 19 '22 by yasin lachini