
Redistribute pods after adding a node in Kubernetes

Tags:

kubernetes

What should I do with pods after adding a node to the Kubernetes cluster?

I mean, ideally I want some of them to be stopped and started on the newly added node. Do I have to manually pick some for stopping and hope that they'll be scheduled for restarting on the newly added node?

I don't care about affinity, just semi-even distribution.

Maybe there's a way to always have the number of pods be equal to the number of nodes?

For the sake of having an example:

I'm using juju to provision a small Kubernetes cluster on AWS: one master and two workers. This is just a playground.

My application is Apache serving PHP and static files. So I have a Deployment, a Service of type NodePort, and an Ingress using nginx-ingress-controller.

I've turned off one of the worker instances and my application pods were recreated on the one that remained working.

I then started the instance back up; the master picked it up and started the nginx ingress controller there. But when I deleted my application pods, they were recreated on the instance that had kept running, not on the one that was restarted.

Not sure if it's important, but I don't have any DNS setup; I just added the IP of one of the instances to /etc/hosts, with the host value taken from my ingress.

asked May 18 '17 by clorz

People also ask

Does Kubernetes distribute pods across nodes?

Kubernetes automatically spreads the Pods for workload resources (such as Deployment or StatefulSet) across different nodes in a cluster. This spreading helps reduce the impact of failures.

How do you move pods from one node to another in Kubernetes?

First of all, you cannot "move" a pod from one node to another. You can only delete it from one node and have it re-created on another. To delete, use the kubectl delete command. To ensure a pod lands on a specific node, use node affinity or taints and tolerations.
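For illustration, a minimal sketch of the affinity part: a pod-spec fragment that uses preferred node affinity to nudge a pod toward a particular node. The node name my-new-node is a placeholder for your own node's name; kubernetes.io/hostname is the well-known node label.

    # Fragment of a pod spec (e.g. inside a Deployment's pod template).
    # "preferred" affinity nudges the scheduler toward the node but does
    # not fail scheduling if that node is unavailable.
    affinity:
      nodeAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          preference:
            matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - my-new-node   # placeholder: name of the newly added node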

How do you distribute pods evenly?

In order to distribute pods evenly across all cluster worker nodes, we can use the well-known node label kubernetes.io/hostname as the topology domain, which ensures that each worker node is its own topology domain.
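As a sketch of that approach (topologySpreadConstraints require Kubernetes 1.19+; the app: my-app selector is a placeholder for your own pod labels):

    # Fragment of a pod spec: keep the per-node pod count within
    # maxSkew of every other node, treating each node (by its
    # hostname label) as its own topology domain.
    topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: kubernetes.io/hostname
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: my-app   # placeholder: must match the pods' labels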

How do I reschedule my running pod to another node?

Pods are only scheduled once in their lifetime. Once a Pod is scheduled (assigned) to a Node, the Pod runs on that Node until it stops or is terminated. So the answer to your question is “no”, as others had already mentioned: the pod will not be re-scheduled to any other node.
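In practice, the workaround after adding a node is to recreate the pods so the scheduler places them afresh: either delete individual pods (kubectl delete pod <name>) and let the Deployment's ReplicaSet recreate them, or, on kubectl 1.15+, restart the whole Deployment with kubectl rollout restart deployment/<name>.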


1 Answer

The descheduler, a Kubernetes incubator project, could be helpful. The following is from its introduction:

As Kubernetes clusters are very dynamic and their state changes over time, it may be desirable to move already-running pods to some other nodes for various reasons:

  • Some nodes are under- or over-utilized.
  • The original scheduling decision no longer holds true, e.g. taints or labels were added to or removed from nodes, so pod/node affinity requirements are no longer satisfied.
  • Some nodes failed and their pods moved to other nodes.
  • New nodes are added to clusters.
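As a concrete example, the descheduler is driven by a policy file; the sketch below assumes the original v1alpha1 policy format, and the threshold numbers are purely illustrative:

    apiVersion: "descheduler/v1alpha1"
    kind: "DeschedulerPolicy"
    strategies:
      RemoveDuplicates:       # evict duplicates of the same owner on one node
        enabled: true
      LowNodeUtilization:     # move pods from busy nodes toward idle ones
        enabled: true
        params:
          nodeResourceUtilizationThresholds:
            thresholds:       # a node below ALL of these is "underutilized"
              cpu: 20
              memory: 20
              pods: 20
            targetThresholds: # a node above ANY of these is "overutilized"
              cpu: 50
              memory: 50
              pods: 50

Run against a freshly grown cluster, the LowNodeUtilization strategy is what evicts pods from the old, busy workers so the scheduler can place them on the new node.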
answered Dec 24 '22 by chestack