I am attempting to migrate my on-premises cluster to GKE. In order to facilitate this transition, I need to be able to resolve the names of legacy services.
Assume that the networking/VPN is a solved problem.
Is there a way to do this with GKE currently?
Effectively, I am attempting to add a nameserver entry to every pod's /etc/resolv.conf.
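Concretely, the goal would be a per-pod resolv.conf along these lines (the addresses below are placeholders for illustration, not real values from my setup):

```
nameserver 10.0.0.10     # cluster DNS
nameserver 10.240.0.53   # legacy on-prem DNS, reachable over the VPN
search default.svc.cluster.local svc.cluster.local cluster.local
```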
I want to add to what Eric said, and mutate it a bit.
One of the realizations we had during the kubernetes 1.1 "settling period" is that there are not really specs for things like resolv.conf and resolver behavior. Different resolver libraries do different things, and this was causing pain for our users.
Specifically, some common resolvers assume that all nameservers are fungible and would break if you had nameservers that handled different parts of the DNS namespace. We made a decision that for kube 1.2 we will NOT pass multiple nameserver lines into containers. Instead, we pass only the kube-dns server, which handles cluster.local queries and forwards any other queries to an "upstream" nameserver.
How do we know what "upstream" is? We use the nameservers of the node. There is a per-pod dnsPolicy field that governs this choice. The net result is that containers see a single nameserver in resolv.conf, which we own, and that nameserver handles the whole DNS namespace.
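As a sketch, the per-pod choice looks like this (the pod name and image are placeholders; ClusterFirst is the default policy, and Default instead inherits the node's resolver config):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-example        # hypothetical pod name
spec:
  dnsPolicy: ClusterFirst  # resolve via kube-dns; "Default" inherits the node's resolv.conf
  containers:
  - name: app
    image: busybox         # placeholder image
    command: ["sleep", "3600"]
```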
What this practically means is that there's not a great hook for you to interject your own nameserver. You could change the --cluster-dns flag on the kubelets to point to your own DNS server, which would then forward to kube-dns, which would then forward to "upstream". The problem is that GKE doesn't really support changing flags that way: if/when the node is updated, the flag will disappear in favor of the default.
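For reference, that flag lives on the kubelet command line, something like the following (the address is illustrative, and on GKE this setting is managed for you):

```shell
# Illustrative only: point the kubelet's cluster DNS at your own server.
# On GKE this flag is managed by the platform and is reset on node upgrade.
kubelet --cluster-dns=10.0.0.99 --cluster-domain=cluster.local
```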
Possible solutions:
Have kubelets read their flags from an in-cluster config. This is already the plan of record, but it is not in v1.2.
Have kube-dns take a flag indicating what "upstream" is. Kube-dns is a "cluster addon" and as such isn't really mutable by end users (we will update it with your cluster and lose your changes).
Have kube-dns read its flags from an in-cluster config, and take a flag indicating what "upstream" is. This is a doable idea, but probably not for v1.2 (too late). It might be possible to patch this into a v1.2.x but it's not really a bugfix, it's a feature.
Get your own DNS server into the resolv.conf on each node so that kube-dns would use you as upstream. I don't think GKE has a way to configure this that won't also get lost on node upgrades. You could write a controller that periodically SSH'ed to VMs and wrote that out, and subsequently checked your kube-dns container for correctness. Blech.
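For what it's worth, that last workaround might look roughly like this (purely a sketch: the legacy DNS address is made up, and as noted above, node upgrades will clobber it):

```shell
# Sketch of the "blech" option: periodically append your nameserver
# to each node's resolv.conf. LEGACY_DNS is hypothetical.
LEGACY_DNS=10.240.0.53
for node in $(kubectl get nodes -o name | cut -d/ -f2); do
  gcloud compute ssh "$node" --command \
    "grep -q $LEGACY_DNS /etc/resolv.conf || echo 'nameserver $LEGACY_DNS' | sudo tee -a /etc/resolv.conf"
done
```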
I think the right answer is to use in-cluster configmaps to inform either kubelets or DNS (or both). If you think these might be workable answers (despite the timeframe issues), it would be great if you opened a GitHub issue to discuss. It will get more visibility there.
Effectively, no.
If you modify the node's resolv.conf, the pods will inherit the changes.
However, glibc prohibits using more than 3 nameservers or more than 6 search records.
GCE VMs use 2 nameservers and 3 searches for accessing node metadata and project networking. And GKE uses 1 nameserver and 3 searches. That leaves you 0 nameservers and 0 searches.
See this issue: https://github.com/kubernetes/kubernetes/issues/9079 and this issue: https://github.com/kubernetes/kubernetes/issues/9132
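The arithmetic, spelled out (MAXNS and MAXDNSRCH are glibc's compile-time resolver limits from resolv.h):

```shell
# glibc compile-time resolver limits (resolv.h)
MAXNS=3        # max "nameserver" lines honored
MAXDNSRCH=6    # max "search" domains honored

GCE_NS=2;  GCE_SEARCH=3   # consumed by the GCE VM (metadata, project networking)
GKE_NS=1;  GKE_SEARCH=3   # consumed by GKE (kube-dns plus cluster search paths)

echo "nameserver slots left: $((MAXNS - GCE_NS - GKE_NS))"
echo "search slots left: $((MAXDNSRCH - GCE_SEARCH - GKE_SEARCH))"
```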