I am trying to create a deployment on a K8s cluster with one master and two worker nodes. The cluster runs on 3 AWS EC2 instances, and I have been using this environment for quite some time to play with Kubernetes. Three days ago, all the pod statuses started changing from Running to ContainerCreating. Only the pods scheduled on the master show as Running; the pods on the worker nodes show as ContainerCreating. When I run kubectl describe pod <podname>, the events show the following:
Events:
  Type     Reason                  Age  From                      Message
  ----     ------                  ---  ----                      -------
  Normal   Scheduled               34s  default-scheduler         Successfully assigned nginx-8586cf59-5h2dp to ip-172-31-20-57
  Normal   SuccessfulMountVolume   34s  kubelet, ip-172-31-20-57  MountVolume.SetUp succeeded for volume "default-token-wz7rs"
  Warning  FailedCreatePodSandBox  4s   kubelet, ip-172-31-20-57  Failed create pod sandbox.
  Normal   SandboxChanged          3s   kubelet, ip-172-31-20-57  Pod sandbox changed, it will be killed and re-created.
This error has been bugging me for days now. I searched online for related errors but couldn't find anything specific. I did a kubeadm reset on the cluster, including the master and worker nodes, and brought the cluster up again. The nodes show as Ready, but I run into the same problem whenever I create a deployment, for example with:
kubectl run nginx --image=nginx --replicas=2
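
Before the fix: the event message is terse, so it helps to pull the full sandbox error from the failing node first. A minimal diagnostic sketch, assuming SSH access to the worker and Docker as the container runtime; the node name is taken from the events above and will differ on your cluster:

# On the failing worker (ip-172-31-20-57 in the events above):
journalctl -u kubelet -f    # full FailedCreatePodSandBox error text, not just the summary
docker ps -a | head         # was the sandbox (pause) container created and then killed?

# From the master: a broken CNI/network plugin after kubeadm reset is a
# common cause of sandbox failures, so check the kube-system pods as well
kubectl get pods -n kube-system -o wide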
This can occur if you specify a limit or request on memory with the wrong unit. In Kubernetes resource quantities, the suffix m means milli (thousandths), so memory: "256m" requests roughly a quarter of a byte, whereas 256Mi means 256 mebibytes. The following triggered the message for me:
resources:
  limits:
    cpu: "300m"
    memory: "256m"
  requests:
    cpu: "50m"
    memory: "64m"
The correct block would be:
resources:
  limits:
    cpu: "300m"
    memory: "256Mi"
  requests:
    cpu: "50m"
    memory: "64Mi"
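
For completeness, here is the corrected resources block in the context of a full Deployment. This is a hypothetical manifest: the name and replica count mirror the kubectl run command from the question, and apps/v1 assumes a reasonably recent cluster version:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        resources:
          limits:
            cpu: "300m"
            memory: "256Mi"   # Mi = mebibytes; a bare "m" would mean millibytes
          requests:
            cpu: "50m"
            memory: "64Mi"

After applying it with kubectl apply -f and describing the pod again, the sandbox events should stop and the pod should reach Running.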