We are trying to deploy a .NET Core API service to Amazon EKS using ECR. The deployment was successful, but the pods are stuck in Pending status. Below are the detailed steps we followed.
Steps followed: 1. Created a Docker image. 2. Pushed the image to ECR; the image is now visible in the AWS console as well. // The image looks good; I was able to run it locally using Docker.
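// For reference, the build and push were roughly along these lines (the account ID and the net-core-api repository name are placeholders):
aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin <account-id>.dkr.ecr.us-west-2.amazonaws.com
docker build -t net-core-api .
docker tag net-core-api:latest <account-id>.dkr.ecr.us-west-2.amazonaws.com/net-core-api:latest
docker push <account-id>.dkr.ecr.us-west-2.amazonaws.com/net-core-api:latest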
Created a t2.micro cluster as below: eksctl create cluster --name net-core-prod --version 1.14 --region us-west-2 --nodegroup-name standard-workers --node-type t2.micro --nodes 1 --nodes-min 1 --nodes-max 1 --managed // Cluster and node groups were created successfully. // IAM roles also got created.
Deployed a replication controller using the attached JSON/YAML. // net-app.json
The kubectl get all command returned this: // get_all.png. The pod always remains in Pending status.
Pod describe gave the result below. // describe_pod.png
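// For completeness, the outputs above were captured with commands along these lines (the pod name is a placeholder):
kubectl get all
kubectl describe pod <pod-name>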
Key points:
1. We are using a t2.micro instance cluster since it's an AWS free-tier account.
2. We created a Linux cluster and tried to deploy the .NET Core app. // This worked fine on our local machine.
3. The cluster had only 1 node. // --nodes 1 --nodes-min 1 --nodes-max 1
Can somebody please guide us on how to set this up correctly?
Most managed Kubernetes services impose hard limits on the number of pods per node. On Amazon Elastic Kubernetes Service (EKS), the maximum number of pods per node depends on the node type and ranges from 4 to 737. The t2.micro instance type you are using for your worker node can host at most 4 pods, and since you have only one worker node, that budget is largely consumed by system pods (aws-node, kube-proxy, CoreDNS), leaving little or no room for your application pod.
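To see what is already occupying the node, something like this should work (take the node name from kubectl get nodes):
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=<node-name>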
To resolve it, double-check the pod specification and ensure that the repository and image are specified correctly. If that still doesn't work, there may be a network issue preventing access to the container registry. Look in the describe-pod output (describe_pod.png) for the scheduler's message and the hostname of the Kubernetes node.
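If you want to confirm the image reference against what is actually in ECR, something like this can help (the repository name and region are assumptions based on the question):
aws ecr describe-repositories --region us-west-2
aws ecr describe-images --repository-name net-core-api --region us-west-2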
The issue is that you are using t2.micro. At minimum, t2.small is required. The scheduler is not able to schedule the pod on the node because not enough capacity is available on the t2.micro instance; most of its capacity is already taken by system resources. Use t2.small at a minimum.
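One possible way to do that with eksctl, reusing the cluster name from the question (the new node group name is arbitrary), is to add a t2.small node group and then remove the t2.micro one:
eksctl create nodegroup --cluster net-core-prod --region us-west-2 --name small-workers --node-type t2.small --nodes 1 --nodes-min 1 --nodes-max 1 --managed
eksctl delete nodegroup --cluster net-core-prod --region us-west-2 --name standard-workers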
On Amazon Elastic Kubernetes Service (EKS), the maximum number of pods per node depends on the node type and ranges from 4 to 737.
If you reach the max limit, you will see something like:
❯ kubectl get node -o yaml | grep pods
pods: "17" => this is allocatable pods that can be allocated in node
pods: "17" => this is how many running pods you have created
If you get only one number, it should be allocatable. Another way to count all running pods is to run the following command:
kubectl get pods --all-namespaces | grep Running | wc -l
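Another option is to print just the allocatable pod count for each node (a generic kubectl jsonpath query, not specific to EKS):
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.allocatable.pods}{"\n"}{end}'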
Here's the list of max pods per node type: https://github.com/awslabs/amazon-eks-ami/blob/master/files/eni-max-pods.txt
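The numbers in that file follow the ENI formula used by the default AWS VPC CNI: max pods = ENIs × (IPv4 addresses per ENI − 1) + 2. For example, a t2.micro has 2 ENIs with 2 addresses each, so 2 × (2 − 1) + 2 = 4 pods, while a t2.small has 3 ENIs with 4 addresses each, so 3 × (4 − 1) + 2 = 11 pods.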
On Google Kubernetes Engine (GKE), the limit is 110 pods per node. Check the following URL:
https://github.com/kubernetes/community/blob/master/sig-scalability/configs-and-limits/thresholds.md
On Azure Kubernetes Service (AKS), the default limit is 30 pods per node, but it can be increased up to 250. The default maximum number of pods per node varies between kubenet and Azure CNI networking and the method of cluster deployment. Check the following URL for more information:
https://learn.microsoft.com/en-us/azure/aks/configure-azure-cni#maximum-pods-per-node
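On AKS the limit is set per node pool at creation time; a sketch, with placeholder resource group, cluster, and pool names:
az aks nodepool add --resource-group my-rg --cluster-name my-aks --name bigpods --max-pods 250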