We have an EKS cluster running version 1.21. We want to give admin access to the worker nodes, so we modified the aws-auth ConfigMap and added "system:masters" to the EKS worker node role. Below is the snippet of the modified ConfigMap.
data:
  mapRoles: |
    - groups:
        - system:nodes
        - system:bootstrappers
        - system:masters
      rolearn: arn:aws:iam::686143527223:role/terraform-eks-worker-node-role
      username: system:node:{{EC2PrivateDNSName}}
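The change can be confirmed against the live cluster; a minimal check, assuming kubectl is already configured for this cluster:

# Confirm the mapRoles entry above is present in the live ConfigMap
kubectl get configmap aws-auth -n kube-system -o yaml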
After adding this section, the EKS worker nodes successfully got admin access to the cluster. However, in the EKS dashboard the node groups are in a degraded state, showing the error below in the Health issues section, and we are not able to update the cluster because of it. Please help.
Your worker nodes do not have access to the cluster. Verify if the node instance role is present and correctly configured in the aws-auth ConfigMap.
During an issue such as this one, a quick way to get more details is to look at the "Health issues" section on the EKS service page. As can be seen in the attached screenshot, which shows the same error in its description, there is an access-permissions issue with the specific role eks-quickstart-test-ManagedNodeInstance.
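The same health information can also be pulled from the AWS CLI; a minimal sketch, where the cluster and node group names are placeholders for your own:

# List the node group's health issues (cluster/node group names are placeholders)
aws eks describe-nodegroup \
  --cluster-name my-cluster \
  --nodegroup-name my-nodegroup \
  --query "nodegroup.health.issues"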
The aforementioned role lacks permissions on the cluster, and this can be fixed by updating the aws-auth ConfigMap as described below.

First, export the current ConfigMap:

kubectl get cm aws-auth -n kube-system -o yaml > aws-auth.yaml

Then add the role to the mapRoles: section with system:masters in its groups, as shown below:

mapRoles: |
  - rolearn: arn:aws:iam::<AWS-AccountNumber>:role/eks-quickstart-test-ManagedNodeInstance
    username: system:node:{{EC2PrivateDNSName}}
    groups:
      - system:bootstrappers
      - system:nodes
      - system:masters
Finally, apply the updated ConfigMap back to the cluster:

kubectl apply -f aws-auth.yaml
This should resolve the permission issue, and your cluster nodes should show up as healthy and ready for pods to be scheduled.
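As a final check, once the updated ConfigMap has propagated you can verify the nodes from kubectl; a minimal sketch, assuming kubectl is pointed at this cluster:

# Nodes should report a Ready status once they can authenticate with the API server again
kubectl get nodes

The describe-nodegroup command shown earlier should then return an empty health issues list as well.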