
Kubernetes (minikube) pod OOMKilled with apparently plenty of memory left in node

I'm using minikube, starting it with

minikube start --memory 8192 

to give the node 8 GB of RAM. I'm creating pods with these resource constraints:

    resources:
      limits:
        memory: 256Mi
      requests:
        memory: 256Mi

So that's 256 MiB of RAM for each pod, which I would assume gives me room for 32 pods before the 8 GB memory limit is reached. The problem is that once the 8th pod is deployed, the 9th never runs: it is constantly OOMKilled.

For context, each pod is a Java application running in a frolvlad/alpine-oraclejdk8:slim Docker container, started with -Xmx512m -Xms128m (even if the JVM were indeed using the full 512 MB instead of 256 MB, I would still be far from the 16-pod count needed to hit the 8 GB cap).

What am I missing here? Why are pods being OOMKilled with apparently so much free allocatable memory left?

Thanks in advance

Asked by DMB3 on Jul 23 '17 21:07

People also ask

How do you resolve memory issue in Kubernetes?

If an application has a memory leak or tries to use more memory than its configured limit, Kubernetes terminates it with an "OOMKilled — Container limit reached" event and Exit Code 137. When you see a message like this, you have two choices: increase the limit for the pod or start debugging.

What happens when pod exceeds memory limit?

If the Container continues to consume memory beyond its limit, the Container is terminated. If a terminated Container can be restarted, the kubelet restarts it, as with any other type of runtime failure.

Why is Kubernetes killing my pod?

OOM kill due to the container limit being reached is by far the simplest memory error you can have in a pod. You set a memory limit, one container tries to allocate more memory than allowed, and it gets an error. This usually ends with the container dying, the pod becoming unhealthy, and Kubernetes restarting that pod.


1 Answer

You must understand the way requests and limits work.

Requests specify the amount of allocatable resources that must be available on a node for a pod to be scheduled there. Requests never cause OOM kills; if they cannot be satisfied, the pod simply will not be scheduled.

Limits, on the other hand, are hard limits for a given pod. The pod is capped at this level. So even if the node has 16 GB of RAM free, a pod with a 256 MiB limit will experience an OOM kill as soon as its memory usage reaches that level.
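To illustrate, here is a sketch of a pod spec like the one in the question (the pod and container names are placeholders). The container is OOM killed the moment its memory usage crosses 256 MiB, regardless of how much memory the node has free:

    apiVersion: v1
    kind: Pod
    metadata:
      name: java-app          # hypothetical name
    spec:
      containers:
      - name: app             # hypothetical name
        image: frolvlad/alpine-oraclejdk8:slim
        resources:
          requests:
            memory: 256Mi     # used only for scheduling decisions
          limits:
            memory: 256Mi     # hard cap: usage above this triggers an OOM kill

Note that a JVM started with -Xmx512m can easily grow past 256 MiB (heap plus metaspace, thread stacks, and other native memory), which matches the symptom described in the question.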

If you want, you can define only requests. Then your pods can grow up to the full node capacity without being capped.

https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/

Answered by Radek 'Goblin' Pieczonka on Sep 18 '22 06:09