I have a Kubernetes cluster with a few nodes set up. I want to make sure that pods are distributed efficiently across the nodes.
I'll explain:
Let's assume that I have two nodes:
Node 1 - 2 GB RAM
Node 2 - 2 GB RAM
And I have these pods:
Pod 1 - 1 GB RAM on Node 1
Pod 2 - 100 MB RAM on Node 1
Pod 3 - 1 GB RAM on Node 2
Pod 4 - 100 MB RAM on Node 2
Ok, now the problem: let's say I want to add a pod that needs 1 GB of RAM to the cluster. Currently there's no room on either node, so Kubernetes won't schedule it (unless I add another node). I wonder if there's a way for Kubernetes to see that it could move Pod 3 to Node 1 to make room for the new pod?
Help
The short answer: Kubernetes does not rebalance your pods automatically once they have been scheduled.
You can, however, influence placement at scheduling time. topologyKey is the key of a node label: if two Nodes are labelled with this key and have identical values for that label, the scheduler treats both Nodes as being in the same topology domain, and it tries to place a balanced number of Pods into each domain.
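A minimal sketch of what that looks like, using topologySpreadConstraints (these went GA later, in Kubernetes 1.19, so treat this as illustrative for newer clusters; the app: my-app label and the image are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: spread-example
  labels:
    app: my-app
spec:
  topologySpreadConstraints:
    - maxSkew: 1                            # allow at most 1 pod difference between domains
      topologyKey: kubernetes.io/hostname   # every node is its own topology domain
      whenUnsatisfiable: ScheduleAnyway     # prefer balance, but don't block scheduling
      labelSelector:
        matchLabels:
          app: my-app
  containers:
    - name: app
      image: nginx                          # placeholder image
      resources:
        requests:
          memory: "1Gi"

With topologyKey: kubernetes.io/hostname each node is its own domain, so the scheduler keeps the per-node counts of matching pods within maxSkew of each other. That only helps when pods are being scheduled, though; it does not move pods that are already running.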
For actual rebalancing, the Kubernetes descheduler incubator project will eventually be integrated into Kubernetes. Descheduling can be prompted by under/overutilization of node resources, as in your case, or for other reasons, such as changes in node taints or affinities.
For your case, you could run the descheduler with the LowNodeUtilization strategy and carefully configured thresholds to have some pods evicted and added back to the pod queue after the new 1 GB pod.
Another method is pod priority classes, which can cause a lower-priority pod to be evicted to make room for the new incoming 1 GB pod. Pod priorities are enabled by default starting in version 1.11. Priorities aren't intended as a rebalancing mechanism, but I mention them because they are a viable way to ensure a higher-priority incoming pod can be scheduled. Priorities deprecate the old rescheduler, which will be removed in 1.12.
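A rough sketch of that approach (the API group was scheduling.k8s.io/v1beta1 in the 1.11 era and is scheduling.k8s.io/v1 in current clusters; the class name and image are hypothetical):

apiVersion: scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
  name: high-priority            # hypothetical name
value: 1000000                   # higher value = higher priority
globalDefault: false
description: "For pods that may preempt lower-priority pods."
---
apiVersion: v1
kind: Pod
metadata:
  name: new-1gb-pod              # the incoming 1 GB pod
spec:
  priorityClassName: high-priority
  containers:
    - name: app
      image: nginx               # placeholder image
      resources:
        requests:
          memory: "1Gi"

If neither node has 1 GB free, the scheduler can preempt lower-priority pods on one node to make room for this one.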
Edit - include sample policy
The policy I used to test this is below:
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "LowNodeUtilization":
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        thresholds:
          "memory": 50
        targetThresholds:
          "memory": 51
          "pods": 0