When creating a cluster on GKE, it's possible to use custom machine types. When adding 8 GB
of memory to an n1-standard-1,
Kubernetes only shows 6.37 GB of allocatable memory.
Why is this?
The requested memory already includes all the pods in the kube-system
namespace, so where is this extra memory going?
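To see where the numbers come from, you can compare each node's reported capacity with its allocatable resources. A minimal sketch using the official kubernetes Python client (assuming pip install kubernetes and a working kubeconfig):

```python
# Compare each node's raw capacity with what Kubernetes considers allocatable.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when run inside a pod
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    cap = node.status.capacity        # total machine resources
    alloc = node.status.allocatable   # what pods may actually request
    print(f"{node.metadata.name}: memory {cap['memory']} -> {alloc['memory']}, "
          f"cpu {cap['cpu']} -> {alloc['cpu']}")
```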
Memory allocation is the process of reserving a partial or complete portion of physical or virtual memory for the execution of programs and processes.
In short, Kubernetes does the orchestration; the rest are services that run on top of it. GKE brings you all these components out of the box, so you don't have to maintain them. They're set up for you and more tightly integrated with the Google Cloud console.
Each node in your cluster must have at least 300 MiB of memory.
If an application has a memory leak or tries to use more memory than its configured limit, Kubernetes terminates it with an "OOMKilled" event and Exit Code 137. When you see a message like this, you have two choices: increase the limit for the pod or start debugging.
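To confirm that a container was OOM-killed, you can inspect its last termination state. A short sketch with the same Python client (the pod and namespace names are placeholders):

```python
# Check a pod's containers for an OOMKilled termination (exit code 137).
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = v1.read_namespaced_pod(name="my-app", namespace="default")  # placeholder names
for cs in pod.status.container_statuses or []:
    terminated = cs.last_state.terminated
    if terminated and terminated.reason == "OOMKilled":
        print(f"{cs.name} was OOM-killed with exit code {terminated.exit_code}")
```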
Quoting from the documentation:
Node allocatable resources
Note that some of a node's resources are required to run the Kubernetes Engine and Kubernetes resources necessary to make that node function as part of your cluster. As such, you may notice a disparity between your node's total resources (as specified in the machine type documentation) and the node's allocatable resources in Kubernetes Engine.
Note: As larger machine types tend to run more containers (and by extension, Kubernetes pods), the amount of resources that Kubernetes Engine reserves for cluster processes scales upward for larger machines.
Caution: In Kubernetes Engine node versions prior to 1.7.6, reserved resources were not counted against a node's total allocatable resources. If your nodes have recently been upgraded to version 1.7.6, they might appear to have fewer resources available, as Kubernetes Engine now displays allocatable resources. This can potentially lead to your cluster's nodes appearing overcommitted, and you might want to resize your cluster as a result.
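In other words, the kubelet derives a node's allocatable resources by subtracting these reservations (plus an eviction threshold) from the machine's capacity. A simplified sketch of that relationship; the reservation numbers below are purely illustrative, not GKE's actual values:

```python
# Simplified view of how the kubelet computes allocatable resources:
#   allocatable = capacity - kube_reserved - system_reserved - eviction_threshold
# The reservation values below are illustrative placeholders only.
def allocatable_mib(capacity_mib, kube_reserved_mib, system_reserved_mib, eviction_mib):
    return capacity_mib - kube_reserved_mib - system_reserved_mib - eviction_mib

# e.g. an 8192 MiB node with made-up reservations:
print(allocatable_mib(8192, 1400, 100, 100))  # -> 6592
```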
For example, by performing some tests you can double-check:
Machine type             Memory (GB)  Allocatable (GB)  CPU (cores)  Allocatable (cores)
g1-small                 1.7          1.2               0.5          0.47
n1-standard-1 (default)  3.75         2.7               1            0.94
n1-standard-2            7.5          5.7               2            1.93
n1-standard-4            15           12                4            3.92
n1-standard-8            30           26.6              8            7.91
n1-standard-16           60           54.7              16           15.89
Note: The values listed for allocatable resources do not account for the resources used by kube-system pods, the amount of which varies with each Kubernetes release. These system pods generally occupy an additional 400m CPU and 400 MiB of memory on each node (values are approximate). It is recommended that you directly inspect your cluster if you require an exact accounting of usable resources on each node.
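If you want to estimate the reservation yourself, the GKE documentation (for node versions after the 1.7.6 change quoted above) publishes tiered memory reservation percentages. The sketch below assumes those published tiers; verify them against the docs for your cluster version:

```python
# Estimate GKE's memory reservation from the tiered percentages in its docs
# (25% of the first 4 GiB, 20% of the next 4, 10% of the next 8, 6% of the
# next 112, 2% of anything above 128 GiB). Verify against your GKE version.
def gke_memory_reserved_gib(total_gib):
    if total_gib < 1:
        return 0.255  # flat 255 MiB for machines under 1 GiB
    reserved = 0.25 * min(total_gib, 4)
    reserved += 0.20 * max(0.0, min(total_gib, 8) - 4)
    reserved += 0.10 * max(0.0, min(total_gib, 16) - 8)
    reserved += 0.06 * max(0.0, min(total_gib, 128) - 16)
    reserved += 0.02 * max(0.0, total_gib - 128)
    return reserved

total = 8       # the 8 GB custom machine from the question
eviction = 0.1  # GKE also reserves a 100 MiB hard eviction threshold
print(round(total - gke_memory_reserved_gib(total) - eviction, 2))  # -> 6.1
```

That lands in the same ballpark as the ~6.37 GB from the question; the exact figure also depends on GB-versus-GiB conversion and on memory the kernel itself consumes before Kubernetes measures node capacity.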
There is also an official explanation in the Kubernetes documentation regarding why these resources are reserved:
kube-reserved is meant to capture resource reservation for kubernetes system daemons like the kubelet, container runtime, node problem detector, etc. It is not meant to reserve resources for system daemons that are run as pods. kube-reserved is typically a function of pod density on the nodes. This performance dashboard exposes cpu and memory usage profiles of kubelet and docker engine at multiple levels of pod density. This blog post explains how the dashboard can be interpreted to come up with a suitable kube-reserved reservation.
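On clusters you manage yourself, these reservations are set through the kubelet's configuration (on GKE they are managed for you). A minimal sketch of the relevant KubeletConfiguration fields from the kubelet.config.k8s.io/v1beta1 API, built here as a Python dict; the values are placeholders, not recommendations:

```python
# Illustrative kubelet reservation settings (kubelet.config.k8s.io/v1beta1).
# Placeholder values only; GKE chooses and manages its own reservations.
import yaml  # pip install pyyaml

kubelet_config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    "kubeReserved": {"cpu": "100m", "memory": "1Gi"},      # kubelet, runtime, etc.
    "systemReserved": {"cpu": "100m", "memory": "200Mi"},  # OS-level daemons
    "evictionHard": {"memory.available": "100Mi"},         # hard eviction threshold
}

print(yaml.safe_dump(kubelet_config, sort_keys=False))
```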
I would suggest you go through this page if you are interested in learning more.