
DNS with GKE Internal Load Balancers

I have two kubernetes clusters on GKE: one public that handles interaction with the outside world and one private for internal use only.

The public cluster needs to access some services on the private cluster and I have exposed these to the pods of the public cluster through internal load balancers. Currently I'm specifying the internal IP addresses for the load balancers to use and passing these IPs to the public pods, but I would prefer if the load balancers could choose any available internal IP addresses and I could pass their DNS names to the public pods.

Internal load balancer DNS is available for regular internal load balancers that serve VMs and the DNS will be of the form [SERVICE_LABEL].[FORWARDING_RULE_NAME].il4.[REGION].lb.[PROJECT_ID].internal, but is there something available for internal load balancers on GKE? Or is there a workaround that would enable me to accomplish something similar?

asked Apr 25 '19 by tarikki

2 Answers

I've never heard of built-in DNS for load balancers in GKE, but we actually do this quite simply. We run the External DNS Kubernetes add-on, which manages DNS records for things like load balancers and ingresses. What you can do:

  1. Create a Cloud DNS private zone. Make sure it is attached to your VPC(s).
  2. Make sure your Kubernetes nodes' service account has the DNS Administrator role (or the much broader Editor role).
  3. Install External DNS.
  4. Annotate your internal load balancer Service with external-dns.alpha.kubernetes.io/hostname=your.hostname.here
  5. Verify that the DNS record was created and can be resolved from within your VPC.
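Putting steps 4 and 5 together, the annotated Service might look like the sketch below. The service name, selector, ports, and the hostname my-service.private.example.internal are all placeholders; the cloud.google.com/load-balancer-type annotation is what tells GKE to provision an internal rather than external load balancer:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-internal-service
  annotations:
    # Ask GKE for an internal (VPC-only) load balancer
    cloud.google.com/load-balancer-type: "Internal"
    # Hostname that External DNS should create in the Cloud DNS private zone
    external-dns.alpha.kubernetes.io/hostname: my-service.private.example.internal
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```

Once External DNS has synced, resolving the hostname (e.g. with nslookup) from any VM or pod in the VPC should return the load balancer's internal IP, regardless of which address GKE picked.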
answered Oct 16 '22 by Vasili Angapov


I doubt the "Internal load balancer DNS" route works, but here are some workarounds that come to mind:

1) Ingress: In your public cluster, map all private service names to an ingress controller in your private cluster. The ingress can then route requests to the correct service by host name.
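A minimal sketch of such host-based routing in the private cluster (hostnames, service names, and ports are placeholders; add one rule per private service):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: private-services
spec:
  rules:
    # Requests arriving with Host: service-a.private go to service-a
    - host: service-a.private
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-a
                port:
                  number: 80
```

The public pods then only need to know the single ingress address plus the host names, not the individual service IPs.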

2) Stub domains: Use a common suffix for your private services (for example *.private) and add a stub domain to the public cluster's kube-dns so those names are resolved by the private cluster's kube-dns (see https://kubernetes.io/blog/2017/04/configuring-private-dns-zones-upstream-nameservers-kubernetes/).

Example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  # Names ending in .private are resolved by the nameserver at 10.2.3.4
  # (here, the private cluster's kube-dns service IP)
  stubDomains: |
    {"private": ["10.2.3.4"]}
  # All other external names fall through to these upstream resolvers
  upstreamNameservers: |
    ["8.8.8.8", "8.8.4.4"]

3) Haven't tried it, but kEdge seems to be another solution for secure cross-cluster communication: https://improbable.io/blog/introducing-kedge-a-fresh-approach-to-cross-cluster-communication

answered Oct 16 '22 by Markus Dresch