When I provision a Kubernetes cluster using kubeadm, my worker nodes come up with their role shown as <none>. This is a known bug in Kubernetes, and a PR to fix it is currently in progress.
However, I would like to know whether there is a way to add a role name to a node manually.
root@ip-172-31-14-133:~# kubectl get nodes
NAME               STATUS    ROLES     AGE       VERSION
ip-172-31-14-133   Ready     master    19m       v1.9.3
ip-172-31-6-147    Ready     <none>    16m       v1.9.3
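For context, kubectl builds the ROLES column from node labels whose key starts with node-role.kubernetes.io/; a node without any such label is shown as <none>. If I understand the mechanism correctly, you can confirm what is currently set by listing the labels on the affected node (node name taken from the output above):

kubectl get node ip-172-31-6-147 --show-labels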
Kubernetes runs your workload by placing containers into Pods to run on Nodes. A node may be a virtual or physical machine, depending on the cluster. Each node is managed by the control plane and contains the services necessary to run Pods.
This worked for me:
kubectl label node cb2.4xyz.couchbase.com node-role.kubernetes.io/worker=worker
NAME                     STATUS    ROLES           AGE       VERSION
cb2.4xyz.couchbase.com   Ready     custom,worker   35m       v1.11.1
cb3.5xyz.couchbase.com   Ready     worker          29m       v1.11.1
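As far as I can tell, what appears under ROLES is the part of the label key after the node-role.kubernetes.io/ prefix; the value to the right of the = is essentially ignored, so node-role.kubernetes.io/worker= (with an empty value) should display the same way.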
I could not delete/update the old label, but I can live with it.
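In case it helps, kubectl can normally remove a label by appending a hyphen to the key, and --overwrite lets you change an existing value. Assuming the unwanted role came from a label such as node-role.kubernetes.io/custom (the exact key may differ on your node), something along these lines should work:

kubectl label node cb2.4xyz.couchbase.com node-role.kubernetes.io/custom-
kubectl label node cb2.4xyz.couchbase.com node-role.kubernetes.io/worker=worker --overwrite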