
Is there a way to add arbitrary records to kube-dns?

I will explain my problem in a very specific way, but I think it is better to be specific than to explain it abstractly...

Say there is a MongoDB replica set outside of a Kubernetes cluster but within the same network. The IP addresses of all members of the replica set are resolved via /etc/hosts on the app servers and DB servers.
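For illustration, the /etc/hosts entries on those servers might look like this (the hostnames and IPs here are hypothetical, borrowed from the kube2sky example further down):

# /etc/hosts on each app/DB server (illustrative entries only)
192.168.10.100  repl1.mongo.local
192.168.10.101  repl2.mongo.local
192.168.10.102  repl3.mongo.local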

In an experiment/transition phase, I need to access those MongoDB servers from Kubernetes pods. However, Kubernetes doesn't seem to allow adding custom entries to /etc/hosts in pods/containers.

The MongoDB replica set is already working with a large data set, so creating a new replica set in the cluster is not an option.

Because I use GKE, I suppose changing any of the kube-dns resources should be avoided. Configuring or replacing kube-dns to suit my needs is the last thing I want to try.

Is there a way to resolve the IP addresses of custom hostnames in a Kubernetes cluster?

It is just an idea, but it would be great if kube2sky could read some entries from a ConfigMap and use them as DNS records, e.g. repl1.mongo.local: 192.168.10.100.

EDIT: I referenced this question from https://github.com/kubernetes/kubernetes/issues/12337

asked May 11 '16 by hiroshi
1 Answer

There are two possible solutions for this problem now:

  1. Pod-wise (adding the changes to every pod that needs to resolve these domains)
  2. Cluster-wise (adding the changes to a central place to which all pods have access, which in our case is the DNS)

Let's begin with the pod-wise solution:

As of Kubernetes 1.7, it's now possible to add entries to a Pod's /etc/hosts directly using .spec.hostAliases.

For example: to resolve foo.local and bar.local to 127.0.0.1, and foo.remote and bar.remote to 10.1.2.3, you can configure HostAliases for a Pod under .spec.hostAliases:

apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  - ip: "10.1.2.3"
    hostnames:
    - "foo.remote"
    - "bar.remote"
  containers:
  - name: cat-hosts
    image: busybox
    command:
    - cat
    args:
    - "/etc/hosts"

The Cluster-wise solution:

As of Kubernetes v1.12, CoreDNS is the recommended DNS server, replacing kube-dns. If your cluster originally used kube-dns, you may still have kube-dns deployed rather than CoreDNS. I'm going to assume that you're using CoreDNS as your K8s DNS; a quick way to check is shown below.
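One way to check which implementation your cluster is running (for historical reasons, the CoreDNS deployment usually still carries the k8s-app=kube-dns label, so the deployment name, coredns vs. kube-dns, is what tells you which one you have):

$ kubectl get deployment -n kube-system -l k8s-app=kube-dns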

In CoreDNS it's possible to add arbitrary entries inside the cluster domain, and that way all pods will resolve these entries directly from the DNS, without the need to change each and every /etc/hosts file in every pod.

First:

Let's edit the coredns ConfigMap and add the required changes:

kubectl edit cm coredns -n kube-system 

apiVersion: v1
kind: ConfigMap
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        hosts /etc/coredns/customdomains.db example.org {
          fallthrough
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
  customdomains.db: |
    10.10.1.1 mongo-en-1.example.org
    10.10.1.2 mongo-en-2.example.org
    10.10.1.3 mongo-en-3.example.org
    10.10.1.4 mongo-en-4.example.org

Basically we added two things:

  1. The hosts plugin before the kubernetes plugin, using the fallthrough option of the hosts plugin to satisfy our case.

    To shed some more light on the fallthrough option: any given backend is usually the final word for its zone; it either returns a result or it returns NXDOMAIN for the query. However, occasionally this is not the desired behavior, so some of the plugins support a fallthrough option. When fallthrough is enabled, instead of returning NXDOMAIN when a record is not found, the plugin passes the request down the chain. A backend further down the chain then has the opportunity to handle the request, and that backend in our case is kubernetes.

  2. A new file in the ConfigMap (customdomains.db) containing our custom domains (mongo-en-*.example.org).

The last thing is to remember to add the customdomains.db file to the config-volume in the CoreDNS pod template:

kubectl edit -n kube-system deployment coredns
volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
            - key: customdomains.db
              path: customdomains.db
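
This works because the default CoreDNS deployment already mounts config-volume at /etc/coredns, which is why the hosts plugin above reads /etc/coredns/customdomains.db. The pre-existing mount in the pod template looks roughly like this (shown for reference; you shouldn't need to change it):

volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true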

and finally, to make Kubernetes reload CoreDNS (restarting each running pod):

$ kubectl rollout restart -n kube-system deployment/coredns
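
To verify, you can resolve one of the custom names from a throwaway pod. busybox is just one convenient image for this, and the DNS service IP in the output will differ per cluster:

$ kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup mongo-en-1.example.org
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      mongo-en-1.example.org
Address 1: 10.10.1.1 mongo-en-1.example.org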
answered Nov 02 '22 by 0xMH