I'm running a Kubernetes cluster on bare metal servers, and nodes are added to and removed from the cluster regularly. But when a node is removed, Kubernetes does not automatically remove it from the node list, and kubectl get nodes keeps showing NotReady nodes. Is there an automated way to achieve this? I want similar behavior for nodes as Kubernetes has for pods.
To remove a node, run the following commands on the master (control-plane) node:
# kubectl cordon <node-name>
# kubectl drain <node-name> --force --ignore-daemonsets --delete-emptydir-data
# kubectl delete node <node-name>
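On bare metal there is no cloud controller manager to garbage-collect machines that no longer exist, so one option is to script these same steps and run them periodically (for example from cron). Below is a minimal sketch, not a definitive solution: it assumes the host running it has a kubeconfig with permission to drain and delete nodes, and it deletes every node whose STATUS contains NotReady, so adjust the selection logic if some of your nodes are only temporarily unreachable.

#!/usr/bin/env bash
# Sketch: remove nodes stuck in NotReady (assumes cluster-admin kubeconfig).
set -euo pipefail

# Collect node names whose STATUS column contains "NotReady".
not_ready_nodes=$(kubectl get nodes --no-headers | awk '$2 ~ /NotReady/ {print $1}')

for node in $not_ready_nodes; do
  echo "Removing NotReady node: $node"
  kubectl cordon "$node"
  kubectl drain "$node" --force --ignore-daemonsets --delete-emptydir-data || true
  kubectl delete node "$node"
done

You could then schedule it, for example every 10 minutes, with a crontab entry such as:

*/10 * * * * /usr/local/bin/cleanup-notready-nodes.sh

(the script path and interval here are illustrative, not something Kubernetes provides).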