 

Give cluster admin access to EKS worker nodes

We have an EKS cluster running version 1.21. We want to give the worker nodes admin access to the cluster, so we modified the aws-auth ConfigMap and added "system:masters" to the groups of the EKS worker node role. Below is the snippet from the modified ConfigMap.

data:
  mapRoles: |
    - groups:
      - system:nodes
      - system:bootstrappers
      - system:masters
      rolearn: arn:aws:iam::686143527223:role/terraform-eks-worker-node-role
      username: system:node:{{EC2PrivateDNSName}}

After adding this section, the EKS worker nodes successfully got admin access to the cluster, but in the EKS dashboard the node groups are in a degraded state. The Health issues section shows the error below, and we are not able to update the cluster because of it. Please help.

Your worker nodes do not have access to the cluster. Verify if the node instance role is present and correctly configured in the aws-auth ConfigMap.

asked Sep 04 '25 by abhinav tyagi
1 Answer

When an issue like this comes up, a quick way to get more details is the "Health issues" section on the EKS console page for the cluster. In the case shown here, which reports the same error, the health issue points to an access permissions problem with the specific node instance role eks-quickstart-test-ManagedNodeInstance.

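If the console is not handy, the same health details can be pulled with the AWS CLI. A minimal sketch, assuming a cluster named my-cluster and a managed node group named my-nodegroup (both placeholder names):

# Show only the health issues reported for the managed node group
aws eks describe-nodegroup \
  --cluster-name my-cluster \
  --nodegroup-name my-nodegroup \
  --query 'nodegroup.health.issues'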

The role in question lacks permissions on the cluster; this can be fixed by updating the aws-auth ConfigMap as described below:

  1. Run the following command as the IAM role/user that created the EKS cluster:

kubectl get cm aws-auth -n kube-system -o yaml > aws-auth.yaml

  2. Add the role, along with the required groups such as system:masters, to the mapRoles: section as shown below (a complete example of the edited ConfigMap follows this list):

mapRoles: |
  - rolearn: arn:aws:iam::<AWS-AccountNumber>:role/eks-quickstart-test-ManagedNodeInstance
    username: system:node:{{EC2PrivateDNSName}}
    groups:
      - system:bootstrappers
      - system:nodes
      - system:masters
  3. Apply the updated ConfigMap to the cluster with the command:

kubectl apply -f aws-auth.yaml
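For reference, this is a sketch of what the complete edited ConfigMap could look like; the account number and role name are placeholders and must match your node group's instance role:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  # The ARN below is a placeholder - use the instance role attached to your worker nodes
  mapRoles: |
    - rolearn: arn:aws:iam::<AWS-AccountNumber>:role/eks-quickstart-test-ManagedNodeInstance
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
        - system:masters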

This should resolve the permission issue, and the cluster nodes should show as healthy and ready for pods to be scheduled.
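To confirm the change took effect, a couple of quick checks with standard kubectl commands:

# Verify the role mapping is now present in the aws-auth ConfigMap
kubectl describe configmap aws-auth -n kube-system

# Worker nodes should register and report Ready once the mapping is correct
kubectl get nodes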

answered Sep 07 '25 by Vishwas M.R