I'm running Kubernetes 1.11 and am trying to configure the cluster to check a local name server first. I read the instructions on the Kubernetes site for customizing CoreDNS, and used the Dashboard to edit the CoreDNS ConfigMap in the kube-system namespace. The resulting Corefile value is:
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        upstream 192.168.1.3 209.18.47.61
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    proxy . /etc/resolv.conf
    cache 30
    reload
}
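(An aside: from the CoreDNS docs, my understanding is that the upstream option inside the kubernetes block only applies to external names encountered in CNAME targets and ExternalName services. If that's right, sending all non-cluster queries to the local server first would presumably also mean changing the proxy line to something like this:

proxy . 192.168.1.3 209.18.47.61

Either way, my change doesn't appear to be picked up at all.)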
The local address (192.168.1.3) is listed first in the upstream directive, but this doesn't seem to have had any effect. I have a container running with ping and nslookup, and neither will resolve names from the local name server.
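For example, here is how I test resolution from a throwaway pod (myhost.internal is a placeholder for a name only the local server can resolve; busybox:1.28 because nslookup is broken in later busybox images):

kubectl run -it --rm dnstest --restart=Never --image=busybox:1.28 -- nslookup myhost.internal

This fails, as described above.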
I've worked around the problem for the moment by specifying the name server configuration in a few pod specifications that need it, but I don't like the workaround.
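For reference, the per-pod workaround looks roughly like this (dnsPolicy None with an explicit dnsConfig; the pod itself is just an example, and 192.168.1.3 / 209.18.47.61 are my name servers):

apiVersion: v1
kind: Pod
metadata:
  name: dns-example
spec:
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
      - 192.168.1.3
      - 209.18.47.61
  containers:
    - name: app
      image: busybox:1.28
      command: ["sleep", "3600"]

The obvious downside is that these pods bypass cluster DNS entirely, so service names stop resolving in them.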
How do I force CoreDNS to update based on the changed ConfigMap? I can see that it runs as a Deployment in the kube-system namespace, but I haven't found any docs on how to get it to reload or otherwise respond to a changed configuration.
One way to apply ConfigMap changes is to redeploy the CoreDNS pods:
kubectl rollout restart -n kube-system deployment/coredns
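Note that kubectl rollout restart only exists in kubectl 1.15 and later. On older clients, the same rolling restart can be triggered by patching a throwaway annotation into the pod template (the annotation name here is arbitrary):

kubectl patch deployment coredns -n kube-system --patch '{"spec":{"template":{"metadata":{"annotations":{"force-reload":"'$(date +%s)'"}}}}}'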
Alternatively, edit the ConfigMap directly from the command line:

kubectl edit cm coredns -n kube-system

Save and exit. Because the Corefile includes the reload plugin, CoreDNS polls its configuration for changes (every 30 seconds by default) and reloads gracefully when it detects one. Keep in mind that the kubelet can take another minute or so to sync the updated ConfigMap into the pod's mounted volume, so give it a couple of minutes before concluding the change wasn't applied.
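You can confirm a reload happened by checking the CoreDNS logs (the k8s-app=kube-dns label assumes a kubeadm-style install):

kubectl logs -n kube-system -l k8s-app=kube-dns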
If it still will not reload, delete the CoreDNS pods, as Emruz Hossain advised, and let the Deployment recreate them:

kubectl get pods -n kube-system -o name | grep coredns | xargs kubectl delete -n kube-system
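Then verify the replacement pods are running before re-testing resolution (again assuming the kubeadm default label):

kubectl get pods -n kube-system -l k8s-app=kube-dns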