I'm trying to understand how/why the scheduler behaves in certain circumstances. Can someone explain what the scheduler would do (and why) in these scenarios?
Assume I have a 10GB memory box.
I have a container with its memory request set to 1G. If I run 10 replicas of it, I expect to see all 10 on the same box (ignoring, for this case, any kube-system style pods).
Now assume I also add a memory limit of 2G. What happens? To me, this says to the scheduler "this pod is asking for 1G but can grow to 2G" -- would the scheduler still put all 10 on the same box, knowing that it might very well have to kick half of them off? Or will it allocate 2G, since that's the limit described?
Would I also be correct in assuming that if I don't declare a limit, the pod will grow until the node runs out of memory, at which point pods that have exceeded their requested resources get killed? Or would it assume some kind of default?
Requests are what must be available on the node exclusively for that pod in order for it to schedule. This is what gets subtracted from the node's available resource count. Limits are, well, limits: the pod's usage will be capped at that value.
So, if you have a 10G node and want to fit req: 1G, limit: 2G
pods on it, you will be able to fit 10 of them, and they will be able to burst up to 2G of memory usage if there is enough unused memory from the others (i.e. if each pod requests 1G but really uses 700M, that leaves roughly 3G of requested-but-unused memory on the node, which is available for pods bursting toward their 2G limit).
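As a sketch, a pod spec with these values might look like the following (the names and image are illustrative, not from the question):

```yaml
# Illustrative pod spec: the scheduler counts only `requests`
# (1Gi per replica, so 10 fit on a 10G node), while `limits`
# caps how far each container may burst (2Gi here).
apiVersion: v1
kind: Pod
metadata:
  name: burstable-demo    # hypothetical name
spec:
  containers:
  - name: app
    image: nginx          # placeholder image
    resources:
      requests:
        memory: "1Gi"
      limits:
        memory: "2Gi"
```

Because the request is lower than the limit, a pod like this falls into the Burstable QoS class, which also makes it a likelier eviction candidate under memory pressure than a pod whose request equals its limit.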
@Radek's explanation is of course correct. To answer your follow-up question: if you do declare requests but no limits, the documentation explains the available scenarios. A container is able to exceed its requested memory if the node has it available, but it is not allowed to use more than its limit. So here we have your use case --
If a Container allocates more memory than its limit, the Container becomes a candidate for termination. If the Container continues to consume memory beyond its limit, the Container is terminated. If a terminated Container can be restarted, the kubelet restarts it, as with any other type of runtime failure.
If there are no limits, the container has no upper bound on the memory it can use: it can consume whatever is free on the node, and under memory pressure the kubelet evicts pods, preferring those that are using more than they requested.
To fully grasp the topic, I think it is important to understand that limits exist to defend against bursts: when, for some limited time, your container is at its peak, the available resources are still preserved for the rest of your components, so no serious disaster is possible.
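On the "some kind of default" part of the question: there is no built-in default limit, but a cluster administrator can set namespace-wide defaults with a LimitRange object; absent one, an unspecified limit simply means unlimited. A sketch, with illustrative values:

```yaml
# Hypothetical LimitRange: containers created in this namespace
# without explicit values get these defaults applied on admission.
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-defaults      # illustrative name
spec:
  limits:
  - type: Container
    defaultRequest:
      memory: "512Mi"     # default request if none is specified
    default:
      memory: "1Gi"       # default limit if none is specified
```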
I strongly recommend trying some use cases from the official documentation (CPU, Memory), so you can test your own scenarios and understand this better. You can do it in no time using minikube, for example.