Can't access EKS api server endpoint within VPC when private access is enabled

I have set up an EKS cluster with "private access" enabled, and set up one instance in the same VPC to communicate with EKS. The issue is: if I enable "public access", I can access the API endpoint. But if I disable public access and enable private access, I can't access the API endpoint.

When private access is enabled:

kubectl get svc
Unable to connect to the server: dial tcp: lookup randomstring.region.eks.amazonaws.com on 127.0.0.53:53: no such host

When public access is enabled:

kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   172.20.0.1   <none>        443/TCP   57m
asked Apr 04 '19 by Nitesh

People also ask

How do I get the URL of the Kubernetes api server?

From inside a pod, the Kubernetes API server is reachable directly at "https://kubernetes.default". By default, the pod uses the "default service account" to access the API server, so you also need to pass the CA cert and the default service account token to authenticate with it.
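A minimal sketch of that, assuming the standard paths where Kubernetes mounts the default service account credentials into every pod (this only works when run from inside a pod in the cluster):

```shell
# Standard mount paths for the default service account inside a pod.
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

# Call the API server via the in-cluster DNS name, authenticating
# with the service account token and verifying the server's CA cert.
curl --cacert "$CACERT" \
     --header "Authorization: Bearer $TOKEN" \
     https://kubernetes.default/api/v1
```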

What is EKS api server?

The API server is what you talk to when managing your EKS cluster (e.g. via kubectl). By default, this API server endpoint is public to the internet. Access to the API server is secured with AWS IAM and native Kubernetes RBAC.
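For reference, the public/private endpoint access described in the question can be toggled with the AWS CLI; a sketch, where the cluster name and region are placeholders:

```shell
# Disable the public endpoint and enable the private one for an EKS
# cluster. "my-cluster" and "us-east-1" are hypothetical values.
aws eks update-cluster-config \
    --region us-east-1 \
    --name my-cluster \
    --resources-vpc-config endpointPublicAccess=false,endpointPrivateAccess=true
```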


1 Answer

I had to enable enableDnsHostnames and enableDnsSupport for my VPC.

When you enable private access for a cluster, EKS creates a private hosted zone and associates it with the same VPC. It is managed by AWS itself and you can't view it in your AWS account. For this private hosted zone to resolve properly, your VPC must have enableDnsHostnames and enableDnsSupport set to true.

Note: Wait for a while for changes to be reflected(about 5 minutes).
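A sketch of enabling both VPC attributes with the AWS CLI; the VPC ID is a placeholder for the VPC your cluster runs in:

```shell
# vpc-12345678 is a placeholder; substitute your cluster's VPC ID.
# Both attributes must be true for the EKS private hosted zone to resolve.
aws ec2 modify-vpc-attribute --vpc-id vpc-12345678 \
    --enable-dns-support '{"Value": true}'
aws ec2 modify-vpc-attribute --vpc-id vpc-12345678 \
    --enable-dns-hostnames '{"Value": true}'

# Verify the current values (one attribute per call):
aws ec2 describe-vpc-attribute --vpc-id vpc-12345678 --attribute enableDnsSupport
aws ec2 describe-vpc-attribute --vpc-id vpc-12345678 --attribute enableDnsHostnames
```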

answered Oct 04 '22 by Nitesh