 

Kubernetes: understanding memory usage for "kubectl top node"

How do I interpret the memory usage returned by "kubectl top node"? E.g. if it returns:

    NAME                   CPU(cores)   CPU%      MEMORY(bytes)   MEMORY%
    ip-XXX.ec2.internal    222m         11%       3237Mi          41%
    ip-YYY.ec2.internal    91m          9%        2217Mi          60%

By comparison, if I look in the Kubernetes dashboard for the same node, I get: Memory Requests: 410M / 7.799 Gi


[screenshot: Kubernetes dashboard]


How do I reconcile the difference?

asked Jul 11 '17 by Kirill Kireyev




1 Answer

kubectl top node reflects the actual resource usage of the VMs (nodes), while the k8s dashboard shows the percentage of the requests/limits you configured.
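For reference, these are the two views being compared (the node name below is just the placeholder from the question):

    # Actual usage on the node, read from the metrics pipeline (heapster/metrics-server)
    kubectl top node ip-XXX.ec2.internal

    # Configured requests/limits per pod, plus the per-node totals the dashboard reports
    kubectl describe node ip-XXX.ec2.internal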

E.g. your EC2 instance has 8G of memory and you actually use 3237MB, so that's 41%. In k8s you only request 410MB (5.13%) and set a limit of 470MB of memory. This doesn't mean you only consume 5.13% of the memory; it's just the amount you configured.
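Such a request/limit is set per container in the pod spec. A minimal sketch, where the pod name and image are made up and the values simply mirror the numbers above:

    apiVersion: v1
    kind: Pod
    metadata:
      name: web               # hypothetical pod name
    spec:
      containers:
      - name: app
        image: nginx          # placeholder image
        resources:
          requests:
            memory: "410Mi"   # what the dashboard sums into "Memory Requests"
          limits:
            memory: "470Mi"   # upper bound enforced on the container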

    Namespace      Name                                CPU Requests  CPU Limits  Memory Requests  Memory Limits
    ---------      ----                                ------------  ----------  ---------------  -------------
    default        kube-lego                           20m (2%)      0 (0%)      0 (0%)           0 (0%)
    default        mongo-0                             100m (10%)    0 (0%)      0 (0%)           0 (0%)
    default        web                                 100m (10%)    0 (0%)      0 (0%)           0 (0%)
    kube-system    event-exporter-                     0 (0%)        0 (0%)      0 (0%)           0 (0%)
    kube-system    fluentd-gcp-v2.0-z6xh9              100m (10%)    0 (0%)      200Mi (11%)      300Mi (17%)
    kube-system    heapster-v1.4.0-3405140848-k6cm9    138m (13%)    138m (13%)  301456Ki (17%)   301456Ki (17%)
    kube-system    kube-dns-3809445927-hn5xk           260m (26%)    0 (0%)      110Mi (6%)       170Mi (9%)
    kube-system    kube-dns-autoscaler-38801           20m (2%)      0 (0%)      10Mi (0%)        0 (0%)
    kube-system    kube-proxy-gke-staging-default-     100m (10%)    0 (0%)      0 (0%)           0 (0%)
    kube-system    kubernetes-dashboard-1962351        100m (10%)    100m (10%)  100Mi (5%)       300Mi (17%)
    kube-system    l7-default-backend-295440977        10m (1%)      10m (1%)    20Mi (1%)        20Mi (1%)

Here you can see that many pods have a 0 request/limit, which means unlimited; they are not counted in the k8s dashboard but they definitely consume memory.
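If you want to spot those pods from the CLI, one possible way (assuming jq is installed; the node name is again just a placeholder) is:

    # List pods scheduled on the node whose containers set no memory request
    kubectl get pods --all-namespaces \
      --field-selector spec.nodeName=ip-XXX.ec2.internal -o json \
      | jq -r '.items[]
               | select(any(.spec.containers[]; .resources.requests.memory == null))
               | .metadata.namespace + "/" + .metadata.name'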

Sum up the memory requests/limits and you'll find that they match the k8s dashboard.
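One way to check the totals without adding them up by hand (node name again a placeholder) is to look at the "Allocated resources" summary that kubectl describe node prints below that pod table:

    # The Allocated resources section already sums the requests/limits per node
    kubectl describe node ip-XXX.ec2.internal | grep -A 8 "Allocated resources"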

answered Oct 06 '22 by Ken Chen