I have a Kubernetes cluster and everything was working fine. After some time I drained my worker node, reset it, and joined it back to the master, but now:
# kubectl get nodes
NAME      STATUS                     ROLES    AGE    VERSION
ubuntu    Ready                      master   159m   v1.14.0
ubuntu1   Ready,SchedulingDisabled   <none>   125m   v1.14.0
ubuntu2   Ready,SchedulingDisabled   <none>   96m    v1.14.0
What should I do?
Common reasons for a Kubernetes node not ready error include lack of resources on the node, a problem with the kubelet (the agent enabling the Kubernetes control plane to access and control the node), or an error related to kube-proxy (the networking agent on the node).
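If the node were actually NotReady (rather than just cordoned, as in your output), checks along these lines would help narrow it down; the node name here is taken from your output, and the exact grep pattern is illustrative:

```shell
# On the affected node: check the kubelet service and its recent logs
systemctl status kubelet
journalctl -u kubelet --since "10 minutes ago"

# From any machine with kubectl access: inspect node conditions and events
# (look at the Conditions section for MemoryPressure, DiskPressure, etc.)
kubectl describe node ubuntu1

# Check that the kube-proxy pods in kube-system are running
kubectl get pods -n kube-system -o wide | grep kube-proxy
```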
You can also run kubectl describe node <node-name> and check the Non-terminated Pods section to see which pods are currently running on that particular node.
The kubectl get pods command shows the detailed status of the pods deployed in the cluster. When an application is running correctly, each of its pods should have: a value of 1/1 in the READY column, and a value of Running in the STATUS column.
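For example, to see pods across all namespaces along with the node each one is scheduled on (useful to confirm whether anything is still landing on the cordoned workers):

```shell
# -A lists pods in all namespaces; -o wide adds the NODE column
kubectl get pods -A -o wide
```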
To prevent a node from scheduling new pods use:
kubectl cordon <node-name>
This will put the node in the status Ready,SchedulingDisabled.
To tell it to resume scheduling, use:
kubectl uncordon <node-name>
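Applied to the node names from the question, the whole fix would look roughly like this:

```shell
# Re-enable scheduling on both cordoned workers
kubectl uncordon ubuntu1
kubectl uncordon ubuntu2

# Verify: both workers should now show STATUS Ready
# (without SchedulingDisabled)
kubectl get nodes
```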
More information about draining a node and about manual node administration can be found in the Kubernetes documentation.
I fixed it using:
kubectl uncordon <node-name>