I am learning Kubernetes and ran into a conceptual question: what is the benefit of the new taint model over a simple node selector?
The documentation describes a use case where a group of developers gets exclusive use of a set of nodes via a taint like dedicated=groupA:NoSchedule. But I thought we could do the same thing with a simple nodeSelector.
To be more specific, what is the role of the effect in this taint? Why not simply use a label, like the rest of Kubernetes?
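For concreteness, this is roughly what I mean by the nodeSelector alternative (the node name and label here are made up for illustration):

    # Label the node, then pin the pod to it with a nodeSelector:
    #   kubectl label nodes node-1 dedicated=groupA
    apiVersion: v1
    kind: Pod
    metadata:
      name: group-a-app
    spec:
      nodeSelector:
        dedicated: groupA
      containers:
      - name: app
        image: nginx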
Node affinity is a property of Pods that attracts them to a set of nodes (either as a preference or a hard requirement). Taints are the opposite -- they allow a node to repel a set of pods.
nodeSelector only selects nodes with all the specified labels. Affinity/anti-affinity gives you more control over the selection logic. You can indicate that a rule is soft or preferred, so that the scheduler still schedules the Pod even if it can't find a matching node.
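As a rough sketch of such a soft rule (the disktype label and the weight are only examples), a preferred node affinity term lets the Pod be scheduled even when no matching node is available, which a plain nodeSelector cannot express:

    apiVersion: v1
    kind: Pod
    metadata:
      name: affinity-example
    spec:
      affinity:
        nodeAffinity:
          # Soft preference: favor nodes labeled disktype=ssd,
          # but still schedule the Pod if none exist.
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: disktype
                operator: In
                values:
                - ssd
      containers:
      - name: app
        image: nginx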
You can use kubectl taint to remove a taint from a node; taints can be removed by key, by key and value, or by key and effect.
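For example (the node name and taint are illustrative), the trailing - removes the taint:

    kubectl taint nodes node-1 dedicated=groupA:NoSchedule    # add the taint
    kubectl taint nodes node-1 dedicated=groupA:NoSchedule-   # remove by key, value, and effect
    kubectl taint nodes node-1 dedicated:NoSchedule-          # remove by key and effect
    kubectl taint nodes node-1 dedicated-                     # remove every taint with this key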
A taint allows a node to refuse a pod to be scheduled unless that pod has a matching toleration. You apply taints to a node through the Node specification (NodeSpec) and tolerations to a pod through the Pod specification (PodSpec).
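A minimal sketch of where each piece lives (node and pod names are placeholders), with the taint under the node's spec.taints and the toleration under the pod's spec.tolerations:

    # Node side: the taint (usually added with kubectl taint, shown declaratively here)
    apiVersion: v1
    kind: Node
    metadata:
      name: node-1
    spec:
      taints:
      - key: dedicated
        value: groupA
        effect: NoSchedule
    ---
    # Pod side: the matching toleration
    apiVersion: v1
    kind: Pod
    metadata:
      name: tolerating-pod
    spec:
      tolerations:
      - key: dedicated
        operator: Equal
        value: groupA
        effect: NoSchedule
      containers:
      - name: app
        image: nginx

Note that the toleration only allows the pod onto the tainted node; it does not force the pod there.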
A node selector affects a single pod template, asking the scheduler to place it on a set of nodes. A NoSchedule taint affects all pods, asking the scheduler to keep every pod that doesn't tolerate the taint off that node.
A node selector is useful when the pod needs something from the node. For example, requesting a node that has a GPU. A node taint is useful when the node needs to be reserved for special workloads. For example, a node that should only be running pods that will use the GPU (so the GPU node isn't filled with pods that aren't using it).
Sometimes they are useful together, as in the GPU example above: you want the node to only run pods that use the GPU, and you want the pod that needs a GPU to be scheduled onto a GPU node. In that case you may want to taint the node with dedicated=gpu:NoSchedule and add both a toleration and a node selector to the pod template.
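A sketch of that combined setup (node name, label, and image are placeholders): the taint keeps non-GPU pods off the node, the label plus nodeSelector steers the GPU pod onto it, and the toleration lets it past the taint:

    # Reserve and label the node:
    #   kubectl taint nodes gpu-node-1 dedicated=gpu:NoSchedule
    #   kubectl label nodes gpu-node-1 dedicated=gpu
    apiVersion: v1
    kind: Pod
    metadata:
      name: gpu-pod
    spec:
      nodeSelector:
        dedicated: gpu            # attract: only consider nodes labeled dedicated=gpu
      tolerations:
      - key: dedicated            # tolerate: allowed past the NoSchedule taint
        operator: Equal
        value: gpu
        effect: NoSchedule
      containers:
      - name: gpu-app
        image: example.com/gpu-app:latest   # placeholder image

Without the toleration the pod would be repelled by the taint; without the node selector it could still land on a non-GPU node.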