After creating a simple hello-world Deployment, my pod status shows as Pending. When I run kubectl describe pod on the pod, I get the following:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 14s (x6 over 29s) default-scheduler 0/1 nodes are available: 1 NodeUnderDiskPressure.
If I check on my node health, I get:
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Fri, 27 Jul 2018 15:17:27 -0700 Fri, 27 Jul 2018 14:13:33 -0700 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Fri, 27 Jul 2018 15:17:27 -0700 Fri, 27 Jul 2018 14:13:33 -0700 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure True Fri, 27 Jul 2018 15:17:27 -0700 Fri, 27 Jul 2018 14:13:43 -0700 KubeletHasDiskPressure kubelet has disk pressure
Ready True Fri, 27 Jul 2018 15:17:27 -0700 Fri, 27 Jul 2018 14:13:43 -0700 KubeletReady kubelet is posting ready status. AppArmor enabled
So it seems the issue is that "kubelet has disk pressure", but I can't really figure out what that means. I can't SSH into minikube to check on its disk space because I'm using VMware Workstation with --vm-driver=none.
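One thing worth knowing here: with --vm-driver=none, minikube doesn't create a VM at all, so the "node" is the host itself and its disk is the host's disk. That means you can inspect it directly from the host shell. A quick sketch (the mount points are common defaults and the node name is a placeholder, not taken from your setup):

# With --vm-driver=none the node filesystem IS the host filesystem,
# so standard tools work; these mount points are typical defaults.
df -h /var/lib/docker /var/lib/kubelet
# Check the conditions the kubelet is reporting (node name varies):
kubectl describe node minikube | grep -i pressure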
In this case, the fix is as simple as deleting the unnecessary files from the node's disk. Depending on how your application is set up in terms of availability, you may be able to simply restart the pod, letting Kubernetes recreate the container and discard any files that lived in its writable layer.
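As a minimal sketch of both options (the pod name below is a placeholder):

# Pod name is a placeholder; its Deployment will recreate the pod
kubectl delete pod hello-world-5f4b8c7d9-abcde
# On the node: reclaim space from stopped containers and unused images
# (prune -a removes ALL unused images, so run it deliberately)
docker system prune -a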
Disk Pressure
Disk pressure is a condition indicating that a node is using too much disk space, or filling it too fast, according to the thresholds set in your Kubernetes configuration.
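Those thresholds are configurable on the kubelet. A sketch of what the disk-related hard eviction thresholds look like in a KubeletConfiguration file; the percentages shown are the upstream defaults, not tuned recommendations:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  nodefs.available: "10%"    # node filesystem, e.g. /var/lib/kubelet
  imagefs.available: "15%"   # image filesystem, e.g. /var/lib/docker

The same values can also be passed as the kubelet flag --eviction-hard=nodefs.available<10%,imagefs.available<15%.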
Preventing pod eviction
Always assign a priority class, since Kubernetes considers priority for both memory- and disk-pressure evictions. Avoid pods with the BestEffort QoS class. For pods with fixed memory usage, use the Guaranteed QoS class; I don't recommend it for every pod, as reserving requests equal to limits can lead to inefficient memory usage.
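To illustrate the QoS part: a pod is classed as Guaranteed when every container's requests equal its limits. A minimal sketch (the pod name, image, resource sizes, and the high-priority PriorityClass are all placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: hello-guaranteed            # placeholder name
spec:
  priorityClassName: high-priority  # assumes you created this PriorityClass
  containers:
  - name: app
    image: nginx
    resources:
      requests:                     # requests == limits on every container
        cpu: "250m"                 # => Guaranteed QoS class
        memory: "128Mi"
      limits:
        cpu: "250m"
        memory: "128Mi"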
Apart from preemption, Kubernetes also constantly checks node conditions such as disk pressure and out-of-memory (OOM). When consumption of a resource like disk or memory on a node crosses a configured threshold, the kubelet starts evicting Pods to free that resource up.
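You can watch this happening on your own node: while DiskPressure is True the scheduler stops placing new pods there (on recent versions via a node.kubernetes.io/disk-pressure taint). A quick check, with the node name again a placeholder:

kubectl get node minikube -o jsonpath='{.status.conditions[?(@.type=="DiskPressure")].status}'
kubectl describe node minikube | grep -i taints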
This is an old question, but I just saw it, and since it doesn't have an answer yet, I will write mine.
I was facing this problem and my pods were getting evicted many times because of disk pressure, and commands such as df or du were not helpful in finding the cause.
With the help of the answer I wrote at https://serverfault.com/a/994413/509898, I found out that the main problem was the pods' log files: because Kubernetes itself does not rotate container logs, they can grow to hundreds of gigabytes.
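To confirm that logs are the culprit on your own nodes, you can measure them directly. A sketch assuming the default Docker json-file log driver and its default paths:

# Per-container JSON log sizes, largest last
sudo du -h /var/lib/docker/containers/*/*-json.log | sort -h | tail
# Or the per-pod log directories the kubelet maintains
sudo du -sh /var/log/pods/*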
There are different log rotation methods available, but I am currently searching for the best practice for K8s, so I can't suggest a specific one yet.
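One common workaround (my own stopgap, not an official K8s best practice) is to cap log size at the Docker daemon level in /etc/docker/daemon.json and then restart Docker. Note this only affects containers created after the change, so existing pods need to be recreated:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}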
I hope this can be helpful.