The Kubernetes top command (kubectl top) shows different memory usage than the Linux top command run inside the pod.
I've created a k8s deployment whose YAML contains these resource limits:
resources:
  limits:
    cpu: "1"
    memory: 2500Mi
  requests:
    cpu: 200m
    memory: 2Gi
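For completeness, the limits actually applied to the running pod can be checked with something like the following (PODNAME is the same placeholder used below):

# print the resources block of the first container in the pod spec
kubectl get pod PODNAME -o jsonpath='{.spec.containers[0].resources}'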
The following commands produce the output shown:
bash4.4$ kubectl top pod PODNAME
NAME                     CPU(cores)   MEMORY(bytes)
openam-d975d46ff-rnp6h   2m           1205Mi
Running the Linux top command:
kubectl exec -it PODNAME top
Mem: 12507456K used, 4377612K free, 157524K shrd,
187812K buff, 3487744K cached
Note that 'free -g' also shows 11 GB used.
The issue is that this contradicts kubectl top, which shows only 1205 MiB used.
The kubectl top command shows metrics for a given pod. That information is based on reports from cAdvisor, which collects the real resource usage of pods.
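kubectl top reads these figures from the Metrics API (served by metrics-server, which aggregates the cAdvisor data). Assuming metrics-server is installed, you can query that API directly for the pod in question, for example:

# raw Metrics API call; namespace and pod name are placeholders for your own values
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/openam-d975d46ff-rnp6h"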
If you run top inside the pod, it is as if you ran it on the host system, because the pod uses the kernel of the host system.
Unix top uses the proc virtual filesystem and reads the /proc/meminfo file to get information about the current memory status. Containers inside pods partially share /proc with the host system, including the paths that carry memory and CPU information.
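If you want a per-container figure from inside the pod, read the container's cgroup accounting files instead of /proc/meminfo. A minimal sketch, assuming a shell in the container (file paths differ between cgroup v1 and v2):

# cgroup v1: current usage and limit, in bytes
cat /sys/fs/cgroup/memory/memory.usage_in_bytes
cat /sys/fs/cgroup/memory/memory.limit_in_bytes

# cgroup v2: equivalent files
cat /sys/fs/cgroup/memory.current
cat /sys/fs/cgroup/memory.max

These values reflect the container's own memory accounting and should be much closer to what kubectl top reports than the host-wide numbers from top or free.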
You can find more information in these documents: the kubectl-top-pod man page and Memory inside Linux containers.