Each t2.micro node should be able to run 4 pods, according to this article and the output of the command kubectl get nodes -o yaml | grep pods.
But I have two nodes and I can only launch 2 pods. The 3rd pod gets stuck with the following error message.
Could it be that the application is using too many resources and as a result no more pods can be launched? If that were the case, I would expect it to indicate Insufficient CPU or memory instead.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 33s (x2 over 33s) default-scheduler 0/2 nodes are available: 2 Too many pods.
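(For reference, here is one way to check each node's pod capacity and what is already scheduled on it; <pending-pod-name> and <node-name> are placeholders:)

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity.pods}{"\n"}{end}'   # per-node pod capacity
kubectl describe pod <pending-pod-name>                                                                     # shows the FailedScheduling event above
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=<node-name>                        # pods already counted against that node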
On Amazon Elastic Kubernetes Service (EKS), the maximum number of pods per node depends on the node type and ranges from 4 to 737. On Google Kubernetes Engine (GKE), the limit is 100 pods per node, regardless of the type of node.
For small instances it is 11 pods per instance. That is, you can have a maximum of 22 pods in your cluster; 6 of these are system pods, so at most 16 workload pods remain.
More specifically, Kubernetes is designed to accommodate configurations that meet all of the following criteria: No more than 110 pods per node. No more than 5000 nodes. No more than 150000 total pods.
A Node can have multiple pods, and the Kubernetes control plane automatically handles scheduling the pods across the Nodes in the cluster.
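As a quick illustration (test-nginx is just a placeholder name), scaling a throwaway Deployment and looking at the NODE column shows the scheduler spreading replicas across the nodes:

kubectl create deployment test-nginx --image=nginx --replicas=3
kubectl get pods -o wide        # NODE column shows where each replica landed
kubectl delete deployment test-nginx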
According to the AWS documentation on IP addresses per network interface per instance type, the t2.micro only has 2 network interfaces and 2 IPv4 addresses per interface. So you are right, only 4 IP addresses.
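For what it's worth, the usual EKS/VPC CNI pod-limit formula, ENIs x (IPv4 addresses per ENI - 1) + 2, gives the same number for a t2.micro:

# max pods = ENIs * (IPv4 addresses per ENI - 1) + 2
echo $(( 2 * (2 - 1) + 2 ))    # t2.micro: prints 4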
But EKS deploys system pods such as CoreDNS and the kube-proxy DaemonSet onto every node, so some of those IP addresses (pod slots) on each node are already allocated.
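To see which system pods are already occupying those slots on each node (on EKS this typically includes the aws-node CNI DaemonSet, kube-proxy and CoreDNS):

kubectl get pods -n kube-system -o wide
kubectl get daemonsets -n kube-system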