I have set up an EKS cluster with "private access" enabled and launched an instance in the same VPC to communicate with EKS. The issue is that if I enable "public access", I can reach the API endpoint, but if I disable public access and enable only private access, I can't reach the API endpoint.
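For reference, this is roughly how the endpoint access is toggled; a minimal sketch with the AWS CLI, assuming a cluster named my-cluster (the name and region are placeholders for my setup):

# Disable public access and enable private-only access on the cluster endpoint
aws eks update-cluster-config \
  --region us-east-1 \
  --name my-cluster \
  --resources-vpc-config endpointPublicAccess=false,endpointPrivateAccess=true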
When private access is enabled:
kubectl get svc
Unable to connect to the server: dial tcp: lookup randomstring.region.eks.amazonaws.com on 127.0.0.53:53: no such host
When public access is enabled:
kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   172.20.0.1   <none>        443/TCP   57m
From inside a pod, the Kubernetes API server is reachable directly at "https://kubernetes.default". By default, a pod uses the "default service account" to access the API server, so we also need to pass the CA cert and the default service account token to authenticate with the API server.
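A minimal sketch of what that looks like from inside a pod, using the standard paths where Kubernetes mounts the service account credentials:

# Service account token and CA cert mounted into every pod by default
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

# Call the API server, verifying its cert with the CA and authenticating with the token
# (the default service account may still need RBAC permission for this particular request)
curl --cacert $CACERT \
  -H "Authorization: Bearer $TOKEN" \
  https://kubernetes.default/api/v1/namespaces/default/services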
The API server is what you use to manage your EKS cluster (for example, with kubectl). By default, this API server endpoint is public to the internet. Access to the API server is secured with AWS IAM and native Kubernetes RBAC. See EKS Cluster Public Endpoint Access.
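You can check the current endpoint access settings for a cluster like this (cluster name is a placeholder):

# Shows endpointPublicAccess, endpointPrivateAccess, subnets, and security groups
aws eks describe-cluster --name my-cluster --query "cluster.resourcesVpcConfig"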
I had to enable enableDnsHostnames and enableDnsSupport for my VPC.
When you enable private access on a cluster, EKS creates a private hosted zone and associates it with the same VPC. It is managed by AWS itself, and you can't view it in your AWS account. For this private hosted zone to work properly, your VPC must have enableDnsHostnames and enableDnsSupport set to true.
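As a sketch, these attributes can be checked and enabled with the AWS CLI (the VPC ID below is a placeholder; note that only one attribute can be passed per call):

# Check the current value of each attribute
aws ec2 describe-vpc-attribute --vpc-id vpc-0123456789abcdef0 --attribute enableDnsSupport
aws ec2 describe-vpc-attribute --vpc-id vpc-0123456789abcdef0 --attribute enableDnsHostnames

# Enable both attributes
aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789abcdef0 --enable-dns-support "{\"Value\":true}"
aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789abcdef0 --enable-dns-hostnames "{\"Value\":true}"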
Note: Wait a while for the changes to be reflected (about 5 minutes).