I have one Kubernetes cluster with 4 nodes and one master. I am trying to run 5 nginx pods across all nodes. Currently the scheduler sometimes runs all the pods on one machine and sometimes on different machines.
What happens if a node goes down while all my pods are running on that same node? We need to avoid this.
How can I force the scheduler to place pods on the nodes in a round-robin fashion, so that if any node goes down, at least one node still has an NGINX pod running?
Is this possible or not? If possible, how can we achieve this scenario?
The major difference is that anti-affinity can restrict only one pod per node, whereas Pod Topology Spread Constraints can allow up to N pods per node. For more details see KEP-895 and the official blog post.
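For illustration, here is a minimal sketch of a Deployment using the stable topologySpreadConstraints API (the maxSkew value and the nginx labels are assumptions matching the question, not taken from the KEP):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      topologySpreadConstraints:
      - maxSkew: 1                           # pod counts on any two nodes may differ by at most 1
        topologyKey: kubernetes.io/hostname  # spread across individual nodes
        whenUnsatisfiable: DoNotSchedule     # hard requirement; use ScheduleAnyway for a soft one
        labelSelector:
          matchLabels:
            app: nginx
      containers:
      - name: nginx
        image: nginx:latest

With maxSkew: 1 and 5 replicas on 4 nodes, the scheduler can place at most two pods on any single node.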
- Stop kube-scheduler and kube-controller-manager by running sudo docker stop kube-scheduler kube-controller-manager.
- Stop kube-apiserver by running sudo docker stop kube-apiserver.
- Stop Docker by running sudo service docker stop or sudo systemctl stop docker.
You can add the nodeSelector field to your Pod specification and specify the node labels you want the target node to have. Kubernetes only schedules the Pod onto nodes that have each of the labels you specify. See Assign Pods to Nodes for more information.
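For example, a minimal sketch (the disktype=ssd label is purely illustrative and assumes you have first run kubectl label nodes <node-name> disktype=ssd):

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  nodeSelector:
    disktype: ssd        # schedule only onto nodes carrying this label
  containers:
  - name: nginx
    image: nginx:latest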
You can go the Minikube route, launch a full-blown single node of a standard Kubernetes installation, or make use of MicroK8s. Managed by Canonical, MicroK8s is a non-elastic, rails-based single-node Kubernetes tool focused primarily on offline development, prototyping, and testing.
Reference: Kubernetes in Action, Chapter 16: Advanced scheduling.
podAntiAffinity with requiredDuringSchedulingIgnoredDuringExecution can be used to prevent pods with the same label from being scheduled onto the same hostname. If you prefer a more relaxed constraint, use preferredDuringSchedulingIgnoredDuringExecution.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:  # hard requirement: do not schedule an "nginx" pod onto a node that already runs one
          - topologyKey: kubernetes.io/hostname            # anti-affinity scope is the host
            labelSelector:
              matchLabels:
                app: nginx
      containers:
      - name: nginx
        image: nginx:latest
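If the relaxed behaviour is preferred, the same term can be moved under preferredDuringSchedulingIgnoredDuringExecution, which additionally takes a weight. A sketch of just the affinity stanza (the weight value is an assumption):

      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:  # soft preference: co-location is allowed if unavoidable
          - weight: 100                                     # 1-100; higher weights are favoured by the scheduler
            podAffinityTerm:
              topologyKey: kubernetes.io/hostname
              labelSelector:
                matchLabels:
                  app: nginx

Note that with 5 replicas on 4 nodes, the hard requirement leaves the fifth pod Pending, which is why the soft variant is often the better fit here.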
You can also cap the maximum number of pods per node in the kubelet configuration, so that if a node goes down, Kubernetes cannot saturate the remaining nodes with the failed node's pods.
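As a sketch, the cap can be set via the maxPods field of a KubeletConfiguration file (or the kubelet's --max-pods flag); the value 10 here is just an example:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 10   # this kubelet will not run more than 10 pods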