
How to change the cluster.local default domain on kubernetes 1.9 deployed with kubeadm?

I would like to resolve the kube-dns names from outside of the Kubernetes cluster by adding a stub zone to my DNS servers. This requires changing the cluster.local domain to something that fits into my DNS namespace.
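For reference, the stub zone on the external DNS servers could look something like this (a sketch for BIND, where a forward zone is the usual way to implement this; the cluster DNS service IP 10.96.0.10 is a placeholder assumption):

```
# named.conf fragment: forward queries for the cluster domain
# to the in-cluster DNS service (placeholder IP)
zone "cluster.mydomain.local" {
    type forward;
    forward only;
    forwarders { 10.96.0.10; };
};
```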

The cluster DNS is working fine with cluster.local. To change the domain I modified the line with KUBELET_DNS_ARGS in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf to read:

Environment="KUBELET_DNS_ARGS=--cluster-dns=x.y.z --cluster-domain=cluster.mydomain.local --resolv-conf=/etc/resolv.conf.kubernetes"

After restarting kubelet, external names are resolvable but Kubernetes name resolution fails.
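For completeness, on systemd hosts an edited drop-in only takes effect after a daemon reload; a typical sequence is:

```shell
# pick up the changed drop-in, then restart the kubelet
sudo systemctl daemon-reload
sudo systemctl restart kubelet
```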

I can see that kube-dns is still running with:

/kube-dns --domain=cluster.local. --dns-port=10053 --config-dir=/kube-dns-config --v=2

The only place I was able to find cluster.local was in the pod's YAML configuration, which reads:

  containers:
  - args:
    - --domain=cluster.local.
    - --dns-port=10053
    - --config-dir=/kube-dns-config
    - --v=2

After modifying the YAML and recreating the pod using

kubectl replace --force -f kube-dns.yaml

I still see kube-dns getting started with --domain=cluster.local.

What am I missing?

asked Jan 18 '18 by Marcus


People also ask

What is cluster DNS in Kubernetes?

Kubernetes DNS schedules a DNS Pod and Service on the cluster, and configures the kubelets to tell individual containers to use the DNS Service's IP to resolve DNS names. Every Service defined in the cluster (including the DNS server itself) is assigned a DNS name.
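The naming scheme is mechanical: a Service gets `<service>.<namespace>.svc.<cluster-domain>`. A quick sketch of how the FQDN is assembled (the names here are just examples):

```shell
# Assemble the DNS name a Service receives under the cluster domain.
service=grafana-service   # example Service name
namespace=default         # its namespace
domain=cluster.local      # the cluster domain

fqdn="${service}.${namespace}.svc.${domain}"
echo "$fqdn"   # grafana-service.default.svc.cluster.local
```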

How does CoreDNS work in Kubernetes?

Like Kubernetes, the CoreDNS project is hosted by the CNCF. You can use CoreDNS instead of kube-dns in your cluster by replacing kube-dns in an existing deployment, or by using tools like kubeadm that will deploy and upgrade the cluster for you.
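To check which DNS implementation a cluster is actually running, you can inspect the kube-system Deployments (a sketch; both kube-dns and CoreDNS conventionally carry the same k8s-app label, but selectors can vary by version):

```shell
# CoreDNS deployed by kubeadm keeps the k8s-app=kube-dns label
kubectl -n kube-system get deployments -l k8s-app=kube-dns -o wide
```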



3 Answers

I had a similar problem while porting a microservices-based application to Kubernetes. Changing the internal DNS zone to cluster.local would have been a fairly complex task that we didn't really want to deal with.

In our case, we switched from kube-dns to CoreDNS and simply enabled the CoreDNS rewrite plugin to translate our.internal.domain to ourNamespace.svc.cluster.local.

After doing this, the Corefile part of our CoreDNS ConfigMap looks something like this:

data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          upstream
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        rewrite name substring our.internal.domain ourNamespace.svc.cluster.local
        proxy . /etc/resolv.conf
        cache 30
    }

This enables our Kubernetes services to respond on both the default DNS zone and our own zone.
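A quick way to verify both zones from inside the cluster (a sketch; 10.96.0.10 is a placeholder for the cluster DNS Service IP, and myservice is an illustrative name):

```shell
# both names should return the same ClusterIP, since the rewrite
# maps the custom zone onto the cluster.local name
dig +short @10.96.0.10 myservice.ourNamespace.svc.cluster.local
dig +short @10.96.0.10 myservice.our.internal.domain
```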

answered Oct 16 '22 by simon


I assume you are using CoreDNS.

You can change the cluster base DNS by editing the kubelet config file on ALL nodes, located at /var/lib/kubelet/config.yaml, or by setting clusterDomain during kubeadm init.

Change

clusterDomain: cluster.local

to:

clusterDomain: my.new.domain
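If you are instead setting this at cluster creation time, kubeadm accepts the domain in its ClusterConfiguration (a sketch; the file name is illustrative, and the apiVersion depends on your kubeadm release):

```yaml
# kubeadm-config.yaml, passed as: kubeadm init --config kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  dnsDomain: my.new.domain
```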

Now you also need to change the CoreDNS configuration. CoreDNS uses a ConfigMap for this. You can get your current CoreDNS ConfigMap by running

kubectl get -n kube-system cm/coredns -o yaml

Then change

kubernetes cluster.local in-addr.arpa ip6.arpa {
    ...
}

to match your new domain like this:

kubernetes my.new.domain in-addr.arpa ip6.arpa {
    ...
}

Now apply the changes to the CoreDNS ConfigMap. After you restart kubelet and the CoreDNS pods, your cluster should use the new domain.
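The restart itself can be done like this (a sketch; assumes the CoreDNS Deployment is named coredns, as kubeadm creates it, and that your kubectl supports rollout restart):

```shell
# restart kubelet on each node, then bounce the CoreDNS pods
sudo systemctl restart kubelet
kubectl -n kube-system rollout restart deployment coredns
```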

If you have, for example, a service called grafana-service, it can now be reached at grafana-service.default.svc.my.new.domain:

# kubectl get service
NAME              TYPE         CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
grafana-service   ClusterIP    <Internal-IP>   <none>        3000/TCP   100m

# nslookup grafana-service.default.svc.my.new.domain
Server:    <Internal-IP>
Address 1: <Internal-IP> kube-dns.kube-system.svc.my.new.domain

Name:      grafana-service.default.svc.my.new.domain
Address 1: <Internal-IP> grafana-service.default.svc.my.new.domain
answered Oct 16 '22 by Per Sunde


I deployed an internal instance of the ingress controller and added a CNAME template to the CoreDNS config. To deploy the internal nginx-ingress:

helm install int -f ./values.yaml stable/nginx-ingress --namespace ingress-nginx

values.yaml:

controller:
  ingressClass: 'nginx-internal'
  reportNodeInternalIp: true
  service:
    enabled: true
    type: ClusterIP

To edit the CoreDNS config: KUBE_EDITOR=nano kubectl edit configmap coredns -n kube-system

My coredns ConfigMap:

apiVersion: v1
data:
  Corefile: |
    .:53 {
        reload 5s
        log
        errors
        health {
          lameduck 5s
        }
        ready
        template ANY A int {
          match "^([^.]+)\.([^.]+)\.int\.$"
          answer "{{ .Name }} 60 IN CNAME int-nginx-ingress-controller.ingress-nginx.svc.cluster.local"
          upstream 127.0.0.1:53
        }
        template ANY CNAME int {
          match "^([^.]+)\.([^.]+)\.int\.$"
          answer "{{ .Name }} 60 IN CNAME int-nginx-ingress-controller.ingress-nginx.svc.cluster.local"
          upstream 127.0.0.1:53
        }
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          upstream
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . "/etc/resolv.conf"
        cache 30
        loop
        reload
        loadbalance
    }

kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"Corefile":".:53 {\n    errors\n    health {\n      lameduck 5s\n    }\n    ready\n    kubernetes >
  creationTimestamp: "2020-02-27T16:02:20Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "16293672"
  selfLink: /api/v1/namespaces/kube-system/configmaps/coredns
  uid: 8f0ebf84-6451-4f9b-a6e1-c386d44f2d43
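The template stanzas key off that `^([^.]+)\.([^.]+)\.int\.$` regex, so only two-label names under .int are rewritten while everything else falls through to the kubernetes plugin. You can sanity-check the pattern outside the cluster (just an illustration):

```shell
pattern='^([^.]+)\.([^.]+)\.int\.$'

# a two-label name under .int matches the template ...
echo "jenkins.devtools.int." | grep -E "$pattern"     # jenkins.devtools.int.

# ... while ordinary cluster names do not
echo "svc.cluster.local." | grep -qE "$pattern" || echo "no match"   # no match
```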

If you now add a host under the .int domain to an ingress resource, and add the proper annotation to use the nginx-internal ingress class, you can have a shorter domain. For example, you can configure it like this in the Jenkins helm chart:

master:
  ingress:
    annotations:
      kubernetes.io/ingress.class: nginx-internal

    enabled: true
    hostName: jenkins.devtools.int
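Applied the same way as the ingress controller above, e.g. (a sketch; the release name and namespace are illustrative):

```shell
helm install jenkins -f ./values.yaml stable/jenkins --namespace devtools
```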
answered Oct 16 '22 by Adrian Yutrowski