
Finding out disk space of Kubernetes node

This is a total noob Kubernetes question. I have searched for this, but can't seem to find the exact answer. But that may just come down to not having a total understanding of Kubernetes. I have some pods deployed across three nodes, and my questions are simple.

  1. How do I check the total disk space on a node?
  2. How do I see how much of that space each pod is taking up?
asked Jan 11 '19 by chuckw87

People also ask

How do I check disk space for a node in Kubernetes?

Click on Nodes and select the affected node; under the Monitoring tab, click on Data Bytes. This will load a metrics chart displaying the average disk space utilization on the node.

How much disk space does a Kubernetes pod have?

There is no fixed amount; it depends on the node's disk and on any requests and limits you set. In the example from the Kubernetes documentation, a Pod with two containers, each requesting 2GiB of local ephemeral storage with a 4GiB limit, has a Pod-level request of 4GiB and a limit of 8GiB of local ephemeral storage.

How do I check disk pressure Kubernetes?

To troubleshoot node disk pressure, you need to figure out which files are taking up the most space. Since Kubernetes nodes run Linux, this is easily done with the du command. You can either SSH into each node manually or run du from a DaemonSet.
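A minimal sketch of that du-based triage, demonstrated on a scratch directory so it runs anywhere; on a real node you would point du at the kubelet and container runtime directories (commonly /var/lib/kubelet and /var/lib/containerd or /var/lib/docker, depending on your setup):

```shell
# Create a scratch directory standing in for a node filesystem:
scratch=$(mktemp -d)
dd if=/dev/zero of="$scratch/big.img" bs=1M count=5 2>/dev/null
mkdir -p "$scratch/small"
echo "hello" > "$scratch/small/note.txt"

# Largest entries first, one directory level deep -- the same invocation
# works against /var/lib on a node:
du -d1 -h "$scratch" | sort -rh | head
```

Running the same du pipeline inside a privileged DaemonSet pod (with the host filesystem mounted) gives you the per-node view without SSH access.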


2 Answers

To see each node's total disk space you can run

 kubectl describe nodes

and grep for ephemeral-storage, which is the node's root-disk capacity as Kubernetes sees it. This partition is shared: Pods consume it via emptyDir volumes, container logs, image layers, and container writable layers.
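To show what that grep pulls out, here is a hypothetical sample of kubectl describe nodes output (the values are made up); on a live cluster you would pipe the real command instead:

```shell
# On a live cluster:  kubectl describe nodes | grep -i ephemeral-storage
# The capacity figures live under the "Capacity:" and "Allocatable:"
# sections of the output:
sample='Capacity:
  cpu:                4
  ephemeral-storage:  103079200Ki
  memory:             16374584Ki
Allocatable:
  ephemeral-storage:  95000000Ki'
printf '%s\n' "$sample" | grep -i 'ephemeral-storage'
```

Capacity is the raw size; Allocatable is what remains for Pods after system reservations.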

If you are using Prometheus, you can calculate it with this formula:

sum(node_filesystem_size_bytes)
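The query above sums filesystem size across every node and mount point the exporter sees. Assuming the standard node_exporter metric names, a per-node breakdown that skips pseudo-filesystems might look like:

```promql
sum by (instance) (
  node_filesystem_size_bytes{fstype!~"tmpfs|overlay"}
)
```

Swapping in node_filesystem_avail_bytes gives the free space per node instead of the total.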
answered Sep 21 '22 by UDIT JOSHI

I'm assuming you're using AKS as that's what the question is tagged with.

The worker nodes are just standard VMs with a load of scripts to bootstrap the Kubernetes cluster. Disk space is very important, as every image layer you download is cached on the server, and by default the hard drive on these servers can be very small (30GB IIRC) unless tweaked at creation. The partitioning scheme is also not particularly tuned for container delivery.

You can use OMS and the container monitoring solutions in Azure to get great insight into your cluster health: https://docs.microsoft.com/en-us/azure/azure-monitor/insights/containers. Alternatively, as mentioned above, you can use Prometheus/Grafana, or just SSH in and run df -h to see what's going on (although I wouldn't advocate SSH access to nodes).
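A quick sketch of the df route; the kubectl debug form is an alternative if you'd rather not open SSH to the nodes (the node name below is a placeholder):

```shell
# On the node itself (or over SSH), show filesystem usage in human units:
df -h /

# Without SSH, an ephemeral debug container can run the same command
# against the node's root filesystem (<node-name> is a placeholder):
#   kubectl debug node/<node-name> -it --image=busybox -- chroot /host df -h
```

The debug variant mounts the node's root filesystem at /host inside the throwaway pod, so chroot /host df -h reports the node's disks, not the container's.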

The disk space on the nodes is very different from the PVs mounted by the containers.

With regard to the maximum number of pods per node: I think the default is 30 unless you built the cluster specifically with a higher number.

answered Sep 22 '22 by Ben