By default, any Kubernetes pod on AWS EKS can assume the IAM role of the underlying node. That means all containers immediately get access to policies such as AmazonEKSWorkerNodePolicy and AmazonEC2ContainerRegistryReadOnly, which I want to avoid.
I don't want to block the AWS API entirely from all containers using iptables, because, given the proper credentials, a container should still be able to make calls to it.
With IAM roles for service accounts (IRSA), it's possible to associate a specific IAM role with a pod's service account. But does that prevent the pod from also assuming the IAM role of the underlying node?
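For example, from inside any pod I can fetch the node's temporary credentials directly from the instance metadata service (a sketch, assuming IMDSv1 is still allowed; the role name is whatever the first call returns):

```bash
# List the IAM role attached to the node...
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/
# ...then fetch its temporary credentials:
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/<node-role-name>
```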
The two main things that could prevent it (if used together) are described in the AWS documentation:

- IAM roles for service accounts (IRSA), so each pod gets only the role it needs, and
- blocking pod access to the instance metadata service, so pods cannot pick up the node's credentials.
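For the IRSA part, a minimal sketch with eksctl (cluster, namespace, service account name, and policy ARN are placeholders):

```bash
# Creates an IAM role with a trust policy for the cluster's OIDC provider and
# binds it to a service account via the eks.amazonaws.com/role-arn annotation;
# pods using that service account assume only this role.
eksctl create iamserviceaccount \
  --cluster my-cluster \
  --namespace default \
  --name my-app \
  --attach-policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess \
  --approve
```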
On top of that, as pointed out in the documentation, this depends on the CNI; if you use Calico, there is a nice write-up on the problem and its mitigation with Calico network policies.
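As a sketch of that Calico mitigation (assuming Calico is installed as the policy engine and calicoctl is available; the policy name and order are arbitrary):

```bash
calicoctl apply -f - <<'EOF'
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: deny-imds
spec:
  # all() selects every workload endpoint (pod); narrow the selector to
  # exempt pods that legitimately need the metadata service.
  selector: all()
  order: 10
  egress:
    # Deny traffic to the instance metadata service...
    - action: Deny
      protocol: TCP
      destination:
        nets:
          - 169.254.169.254/32
    # ...and allow everything else.
    - action: Allow
EOF
```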
Another option is to use kube2iam.
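kube2iam runs a DaemonSet on every node that intercepts metadata requests (adding an iptables redirect when started with --iptables=true) and hands each pod credentials only for the role named in its annotation. A sketch with hypothetical pod and role names:

```bash
# Without this annotation the pod gets no node-role credentials, only
# whatever default role kube2iam is configured with (if any).
kubectl annotate pod my-app iam.amazonaws.com/role=my-pod-role
```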
I think this is best explained in the official EKS Best Practices Guides > Security > Identity and Access Management (IAM) > Restrict access to the instance profile assigned to the worker node. Quoting from it:
"the pod can still inherit the rights of the instance profile assigned to the worker node"

"it is strongly recommended that you block access to instance metadata to minimize the blast radius of a breach."
Whether you are using IRSA (IAM Roles for Service Accounts) or not, it is good to block access to the instance metadata from pods. If a pod actually needs IAM credentials, you should use IRSA (or another means of getting IAM credentials); that way you stay in line with the least-privilege principle.
To block pods from getting IAM credentials from the EKS node's EC2 instance profile (the IAM role of the node), there are three alternatives mentioned in Restrict access to the instance profile (each is sketched after the list below):
1. iptables rules in the node that drop pod traffic to the metadata endpoint.
2. A NetworkPolicy that applies to all pods and blocks access to 169.254.169.254/32 (the metadata discovery address), plus a NetworkPolicy that allows access for the specific pods that do need it. This still violates the least-privilege principle, because those pods will get all the permissions of the node IAM role, which are already broad.
3. The best option (IMHO) is to require IMDSv2 and a hop count/hop limit of 1 in the launch template (HttpEndpoint=enabled,HttpTokens=required,HttpPutResponseHopLimit=1). The pods will still be able to make requests to the metadata discovery endpoint, but they will never get the response, because the response packets will be dropped at the first (virtual) router between the node and the pod.
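Hedged sketches of the three alternatives (the instance ID, namespace, and policy name below are placeholders):

```bash
# 1) iptables in the node: drop forwarded pod traffic to the metadata endpoint.
#    eni+ matches the pod-facing interfaces created by the AWS VPC CNI;
#    run this on the node itself, e.g. via user data.
iptables --insert FORWARD 1 --in-interface eni+ \
  --destination 169.254.169.254/32 --jump DROP

# 2) NetworkPolicy for all pods in a namespace: allow all egress except the
#    metadata address (requires a CNI that enforces NetworkPolicy).
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: block-imds
  namespace: default
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 169.254.169.254/32
EOF

# 3) Require IMDSv2 with a hop limit of 1, here on a running instance; the same
#    values go into the MetadataOptions of a launch template.
aws ec2 modify-instance-metadata-options \
  --instance-id i-0123456789abcdef0 \
  --http-endpoint enabled \
  --http-tokens required \
  --http-put-response-hop-limit 1
```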