This is a total noob Kubernetes question. I have searched for this, but can't seem to find the exact answer. But that may just come down to not having a total understanding of Kubernetes. I have some pods deployed across three nodes, and my questions are simple.
Click on Nodes and select the affected node, then under the Monitoring tab click on Data Bytes. This will load the metrics chart showing the average disk space utilization on the node.
Each container has a request of 2GiB and a limit of 4GiB of local ephemeral storage. Since the Pod runs two such containers, the Pod as a whole has a request of 4GiB of local ephemeral storage, and a limit of 8GiB of local ephemeral storage.
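As a rough sketch (the pod/container names and images here are placeholders, not anything from the original example), a spec with those requests and limits would look something like this:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: ephemeral-storage-demo      # hypothetical name
spec:
  containers:
  - name: app                       # first container: 2Gi requested, 4Gi limit
    image: busybox:1.36
    command: ["sleep", "3600"]
    resources:
      requests:
        ephemeral-storage: "2Gi"
      limits:
        ephemeral-storage: "4Gi"
  - name: sidecar                   # second container: 2Gi requested, 4Gi limit
    image: busybox:1.36
    command: ["sleep", "3600"]
    resources:
      requests:
        ephemeral-storage: "2Gi"
      limits:
        ephemeral-storage: "4Gi"
EOF

The scheduler adds the container values up, so this Pod only lands on a node with at least 4GiB of allocatable ephemeral storage.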
Troubleshooting Node Disk Pressure

To troubleshoot node disk pressure, you need to figure out which files are taking up the most space. Since the Kubernetes nodes run Linux, this is easily done by running the du command. You can either manually SSH into each Kubernetes node, or use a DaemonSet.
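A minimal DaemonSet sketch for that (name, image and mount path are illustrative): it mounts the node's root filesystem read-only and prints a per-directory usage summary to the container log.

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: disk-usage-check            # hypothetical name
spec:
  selector:
    matchLabels:
      app: disk-usage-check
  template:
    metadata:
      labels:
        app: disk-usage-check
    spec:
      containers:
      - name: du
        image: busybox:1.36
        # Summarise top-level directory sizes on the node's root filesystem
        # (sizes in 1K blocks, sorted ascending), then sleep so the Pod stays
        # Running long enough to read the logs.
        command: ["/bin/sh", "-c", "du -x -d1 /host 2>/dev/null | sort -n; sleep 86400"]
        volumeMounts:
        - name: host-root
          mountPath: /host
          readOnly: true
      volumes:
      - name: host-root
        hostPath:
          path: /
EOF

You can then read the per-node results with kubectl logs -l app=disk-usage-check, and delete the DaemonSet once you are done.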
For calculating total disk space you can use
kubectl describe nodes
from there you can grep for ephemeral-storage, which is the node's local (virtual) disk size. This partition is also shared and consumed by Pods via emptyDir volumes, container logs, image layers and container writable layers.
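For example, to pull out just the node names and their reported ephemeral-storage:

kubectl describe nodes | grep -E 'Name:|ephemeral-storage'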
If you are using Prometheus you can calculate it with this formula:
sum(node_filesystem_size_bytes)
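Keep in mind that a plain sum over node_filesystem_size_bytes counts every mounted filesystem (tmpfs, overlay, etc.) on every node. Assuming the standard node_exporter metric and label names, something like this gives a per-node total with those excluded:

sum by (instance) (node_filesystem_size_bytes{fstype!~"tmpfs|overlay"})

and this gives the used fraction of each node's root filesystem:

1 - node_filesystem_avail_bytes{mountpoint="/"} / node_filesystem_size_bytes{mountpoint="/"}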
I'm assuming you're using AKS as that's what the question is tagged with.
The worker nodes are just standard VMs with a whole load of scripts to bootstrap the Kubernetes cluster. Disk space is very important because every image layer you download is cached on the node, and by default the OS disk on these VMs can be quite small (30GB IIRC) unless you tweak it at creation time. The partitioning scheme is also not particularly tuned for container delivery.
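If you are building a new cluster, the OS disk size can be set up front. A rough Azure CLI sketch, with placeholder resource group and cluster names:

# Hypothetical names; --node-osdisk-size is the OS disk size in GB per node
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 3 \
  --node-osdisk-size 128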
You can use OMS and the container monitoring solutions in Azure to get great insight into your cluster health: https://docs.microsoft.com/en-us/azure/azure-monitor/insights/containers. Or, as mentioned above, you can use Prometheus/Grafana, or just SSH in and run df -h to see what's going on (although I wouldn't advocate SSH access to nodes).
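If you want to avoid SSH entirely and your cluster is recent enough to support node debugging with kubectl debug, something along these lines works; <node-name> is a placeholder, and the node's root filesystem shows up under /host inside the debug pod:

# Spin up a throwaway pod on the node and check the host root filesystem
kubectl debug node/<node-name> -it --image=busybox:1.36 -- df -h /host
# kubectl debug leaves a node-debugger-* pod behind; delete it when you're done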
The disk space on the nodes is entirely separate from any PVs mounted by the containers.
With regards to the max number of pods per node - I think the default is 30 unless you built the cluster specifically with a higher number.
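You can check what your own nodes actually report with:

kubectl get nodes -o custom-columns='NODE:.metadata.name,MAXPODS:.status.capacity.pods'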