I'm using minikube, starting it with
minikube start --memory 8192
for 8 GB of RAM on the node. I'm allocating pods with the resource constraints
resources:
  limits:
    memory: 256Mi
  requests:
    memory: 256Mi
So 256 MiB of RAM for each pod, which, I would assume, should give me 32 pods before the 8 GB memory limit is reached. The problem is that once the 8th pod is deployed, the 9th never runs because it's constantly OOMKilled.
For context, each pod is a Java application in a frolvlad/alpine-oraclejdk8:slim Docker container run with -Xmx512m -Xms128m (even if the JVM were using the full 512 MiB instead of 256 MiB, I would still be far below the 16 pods it would take to hit the 8 GB cap).
What am I missing here? Why are pods being OOMKilled with apparently so much free allocatable memory left?
Thanks in advance
If an application has a memory leak or tries to use more memory than its configured limit, Kubernetes will terminate it with an "OOMKilled - Container limit reached" event and Exit Code 137. When you see a message like this, you have two choices: increase the limit for the pod or start debugging.
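A quick way to confirm that a container was OOM killed, rather than failing for some other reason, is to inspect the pod's last termination state with kubectl (the pod name below is a placeholder):

    # Look for "Reason: OOMKilled" and "Exit Code: 137" under "Last State"
    kubectl describe pod <pod-name>

    # Or query the termination reason directly
    kubectl get pod <pod-name> \
      -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'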
Exceed a Container's memory limit: if the Container continues to consume memory beyond its limit, the Container is terminated. If a terminated Container can be restarted, the kubelet restarts it, as with any other type of runtime failure.
OOM kill due to container limit reached: this is by far the simplest memory error you can have in a pod. You set a memory limit, one container tries to allocate more memory than is allowed, and it gets an error. This usually ends with the container dying, the pod becoming unhealthy, and Kubernetes restarting that pod.
You must understand the way requests and limits work.
Requests are the amount of allocatable resources that must be free on a node for a pod to be scheduled there. Requests will not cause OOM kills; they will only prevent a pod from being scheduled.
Limits, on the other hand, are a hard cap for a given pod. The pod is capped at this level, so even if the node has 16 GB of RAM free, a pod with a 256 MiB limit will experience an OOM kill as soon as it reaches that level.
If you want, you can define only requests. Then your pods will be able to grow to the full node capacity without being capped.
https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
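For illustration, here is a minimal sketch of a pod manifest showing both knobs; the pod name is a placeholder, and the 768Mi limit is just an example value chosen to leave headroom above the JVM's -Xmx512m (drop the limits block entirely if you only want a request):

    apiVersion: v1
    kind: Pod
    metadata:
      name: memory-demo                          # placeholder name
    spec:
      containers:
      - name: app
        image: frolvlad/alpine-oraclejdk8:slim   # image from the question
        resources:
          requests:
            memory: 256Mi    # reserved on the node for scheduling; never triggers an OOM kill
          limits:
            memory: 768Mi    # hard cap; the container is OOM killed the moment it exceeds this
                             # (example value only; size it above -Xmx plus JVM off-heap overhead)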