I have a Kubernetes cluster with 16 GB of RAM on each node
and a typical .NET Core Web API application.
I tried to configure limits like this:
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
  - default:
      memory: 512Mi
    defaultRequest:
      memory: 256Mi
    type: Container
But my app believes it can use 16 GB,
because cat /proc/meminfo | head -n 1
returns MemTotal: 16635172 kB
(or maybe it reads something from cgroups, I'm not sure).
So... maybe the limit does not work?
No! K8s successfully kills my pod when it reaches the memory limit.
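For reference, the enforced limit is also visible from inside a container through the cgroup filesystem (this assumes cgroups v1; on cgroups v2 the file is /sys/fs/cgroup/memory.max). A throwaway pod like the sketch below, with the busybox image and a 512Mi limit chosen only for illustration, prints the cgroup limit instead of the host's MemTotal:

apiVersion: v1
kind: Pod
metadata:
  name: cgroup-check              # hypothetical one-off pod, only for inspection
spec:
  restartPolicy: Never
  containers:
  - name: check
    image: busybox
    # Prints 536870912 (512Mi in bytes), even though /proc/meminfo
    # still reports the node's full 16 GB.
    command: ["cat", "/sys/fs/cgroup/memory/memory.limit_in_bytes"]
    resources:
      limits:
        memory: 512Mi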
.NET Core has an interesting GC mode, more details here. It is a good mode, but it doesn't look like a working solution for k8s, because the application gets wrong information about the available memory. Unlimited pods could grab all of the host's memory, but with limits they will simply be killed.
Now I see two questions:
How to limit memory size for a .NET Core application in a Kubernetes pod?
How to correctly set memory limits for pods in Kubernetes?
Both containers are defined with a request for 0.25 CPU and 64 MiB (2^26 bytes) of memory. Each container has a limit of 0.5 CPU and 128 MiB of memory. You can say the Pod has a request of 0.5 CPU and 128 MiB of memory, and a limit of 1 CPU and 256 MiB of memory.
If you think that your app requires at least 256MB of memory to operate, this is the request value. The application can use more than 256MB, but Kubernetes guarantees a minimum of 256MB to the container. On the other hand, limits define the max amount of resources that the container can consume.
Requests and limits are the mechanisms Kubernetes uses to control resources such as CPU and memory. Requests are what the container is guaranteed to get. If a container requests a resource, Kubernetes will only schedule it on a node that can give it that resource.
You should switch to Workstation GC if you want to optimize for lower memory usage. The readiness probe is not meant for checking memory.
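As for the GC switch: one way to do it (a sketch, not the only option) is an environment variable on the container; the .NET Core runtime reads COMPlus_gcServer (newer runtimes also accept a DOTNET_ prefix), and setting ServerGarbageCollection to false in the csproj has the same effect. The pod and image names below are just the same placeholders used in the example further down:

apiVersion: v1
kind: Pod
metadata:
  name: net-core-app
spec:
  containers:
  - name: net-core-app
    image: net-core-image
    env:
    # "0" selects Workstation GC, "1" selects Server GC
    - name: COMPlus_gcServer
      value: "0"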
In order to properly configure the resource limits you should test your application on a single pod under heavy load and monitor the usage (e.g. with Prometheus & Grafana). For more in-depth details see this blog post. If you haven't deployed a monitoring stack, you can at least use kubectl top pods.
Once you have found the breaking points of a single pod, you can add the limits to that specific pod like in the example below (see the Kubernetes documentation for more examples and details):
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: net-core-app
    image: net-core-image
    resources:
      requests:
        memory: 64Mi
        cpu: 250m
      limits:
        memory: 128Mi
        cpu: 500m
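In practice the application is usually deployed through a Deployment rather than a bare Pod; the same resources block then goes into the pod template. A sketch with the same placeholder names:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: net-core-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: net-core-app
  template:
    metadata:
      labels:
        app: net-core-app
    spec:
      containers:
      - name: net-core-app
        image: net-core-image
        resources:
          requests:
            memory: 64Mi
            cpu: 250m
          limits:
            memory: 128Mi
            cpu: 500m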
The readiness probe is actually meant to tell when a Pod is ready to receive traffic in the first place. I guess you were thinking of the liveness probe, but that wouldn't be adequate either, because Kubernetes already kills the Pod when it exceeds its resource limit and reschedules it.
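For completeness, a readiness probe (used only to signal that the app is ready to receive traffic, not to police memory) could be declared on the container like this; the /healthz path and port 80 are only assumptions about the app:

containers:
- name: net-core-app
  image: net-core-image
  readinessProbe:
    httpGet:
      path: /healthz              # hypothetical health endpoint exposed by the app
      port: 80
    initialDelaySeconds: 5
    periodSeconds: 10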