 

How to set up DNS and ingress controllers for a public-facing web app?

I'm trying to understand the concepts of Ingress and ingress controllers in Kubernetes, but I'm not sure what the end product should look like. Here is what I don't fully understand:

Say I have a running Kubernetes cluster somewhere with a master node which runs the control plane and the etcd database. Besides that, I have three worker nodes - each worker node has a public IPv4 address with a corresponding DNS A record (worker{1,2,3}.domain.tld), and I have full control over my DNS server. I want my users to access my web application via www.domain.tld, so I point the www CNAME to one of the worker nodes (e.g. I saw that my ingress controller got scheduled to worker1, so I point it to worker1.domain.tld).

Now I schedule a workload consisting of 2 frontend pods and 1 database pod, with 1 service for the frontend and 1 service for the database. From what I understand right now, I need an ingress controller pointing to the frontend service to achieve some kind of load balancing. Two questions here:

  1. Isn't running the ingress controller on only one worker node pointless for internally load balancing across the two frontend pods via their service? Is it best practice to run an ingress controller on every worker node in the cluster?

  2. Say the worker which runs the ingress controller dies for whatever reason and it gets rescheduled to another worker. The ingress entry point will then be at another IPv4 address, right? From the perspective of a user trying to access the frontend via www.domain.tld, this DNS entry has to be updated, right? How? Do I need to run a specific Kubernetes-aware DNS server somewhere? I don't understand the connection between the DNS server and the Kubernetes cluster.

Bonus question: If I run more ingress controller replicas (spread across multiple workers), should I use a DNS round-robin approach with multiple IPv4 addresses bound to one DNS entry? Or what's the best solution to achieve HA? I'd rather not use load-balancing IP addresses where the workers share the same IP address.
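
To make this concrete, here is a rough sketch of the kind of manifests I have in mind for the frontend part (the names, ports and image details are just placeholders):

    # Frontend Service: load-balances internally across the two frontend pods
    apiVersion: v1
    kind: Service
    metadata:
      name: frontend
    spec:
      selector:
        app: frontend
      ports:
      - port: 80
        targetPort: 8080
    ---
    # Ingress: tells whichever ingress controller is running to route
    # www.domain.tld to the frontend Service
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: frontend
    spec:
      rules:
      - host: www.domain.tld
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80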

asked Feb 04 '23 by Jens Kohl

2 Answers

Say I have a running Kubernetes cluster somewhere with a master node which runs the control plane and the etcd database. Besides that, I have three worker nodes - each worker node has a public IPv4 address with a corresponding DNS A record (worker{1,2,3}.domain.tld), and I have full control over my DNS server. I want my users to access my web application via www.domain.tld, so I point the www CNAME to one of the worker nodes (e.g. I saw that my ingress controller got scheduled to worker1, so I point it to worker1.domain.tld).

Now I schedule a workload consisting of 2 frontend pods and 1 database pod, with 1 service for the frontend and 1 service for the database. From what I understand right now, I need an ingress controller pointing to the frontend service to achieve some kind of load balancing. Two questions here:

  1. Isn't running the ingress controller on only one worker node pointless for internally load balancing across the two frontend pods via their service? Is it best practice to run an ingress controller on every worker node in the cluster?

Yes, it's a good practice. Having multiple pods for the load balancer is important to ensure high availability. For example, if you run the ingress-nginx controller, you should probably deploy it to multiple nodes.
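
For illustration, here is a rough sketch of how such a Deployment could look (the namespace, labels and image version are assumptions and depend on how you installed ingress-nginx; only the relevant fields are shown):

    # Fragment of an ingress-nginx controller Deployment: three replicas, spread
    # over different workers via pod anti-affinity so a single node failure does
    # not take down every controller at once
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: ingress-nginx-controller
      namespace: ingress-nginx
    spec:
      replicas: 3
      selector:
        matchLabels:
          app.kubernetes.io/name: ingress-nginx
      template:
        metadata:
          labels:
            app.kubernetes.io/name: ingress-nginx
        spec:
          affinity:
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchLabels:
                    app.kubernetes.io/name: ingress-nginx
                topologyKey: kubernetes.io/hostname
          containers:
          - name: controller
            image: registry.k8s.io/ingress-nginx/controller:v1.5.1  # version is a placeholder
            ports:
            - containerPort: 80
              hostPort: 80      # so the controller is reachable on each worker's public IP
            - containerPort: 443
              hostPort: 443

A DaemonSet with hostNetwork is another common way to get exactly one controller on every worker.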

  2. Say the worker which runs the ingress controller dies for whatever reason and it gets rescheduled to another worker. The ingress entry point will then be at another IPv4 address, right? From the perspective of a user trying to access the frontend via www.domain.tld, this DNS entry has to be updated, right? How? Do I need to run a specific Kubernetes-aware DNS server somewhere? I don't understand the connection between the DNS server and the Kubernetes cluster.

Yes, the IP will change. And yes, this needs to be updated in your DNS server.

There are a few ways to handle this:

  1. Assume clients will deal with outages: you can list all load-balancer nodes in a round-robin DNS record and assume clients will fall back. This works with some protocols, but it mostly implies timeouts and problems and should generally not be used, especially since you still need to update the records by hand whenever Kubernetes creates or removes load-balancer entries.

  2. Configure an external DNS server automatically. This can be done with the external-dns project, which can sync against most of the popular DNS servers, including standard RFC 2136 dynamic updates but also cloud providers like Amazon, Google, Azure, etc.; a rough sketch follows below.
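
Here is a rough sketch of what such an external-dns Deployment could look like for the RFC 2136 case (the DNS server name, zone, TSIG key and image version are made-up placeholders; check the external-dns documentation for the exact flags of your version):

    # external-dns watches Ingress/Service objects and pushes the matching records
    # to an external DNS server via RFC 2136 dynamic updates
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: external-dns
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: external-dns
      template:
        metadata:
          labels:
            app: external-dns
        spec:
          serviceAccountName: external-dns   # needs RBAC to list Ingresses/Services
          containers:
          - name: external-dns
            image: registry.k8s.io/external-dns/external-dns:v0.13.2   # version is a placeholder
            args:
            - --source=ingress                 # create records for Ingress hosts (www.domain.tld)
            - --provider=rfc2136
            - --rfc2136-host=ns1.domain.tld    # your authoritative DNS server (assumption)
            - --rfc2136-port=53
            - --rfc2136-zone=domain.tld
            - --rfc2136-tsig-keyname=externaldns-key
            - --rfc2136-tsig-secret-alg=hmac-sha256
            - --rfc2136-tsig-secret=...        # TSIG secret, omitted here
            - --domain-filter=domain.tld
            - --registry=txt
            - --txt-owner-id=my-cluster

The idea is that external-dns keeps the record(s) for www.domain.tld in sync with wherever the ingress controller is currently reachable, so nothing has to be updated by hand.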

Bonus question: If I run more ingress controller replicas (spread across multiple workers), should I use a DNS round-robin approach with multiple IPv4 addresses bound to one DNS entry? Or what's the best solution to achieve HA? I'd rather not use load-balancing IP addresses where the workers share the same IP address.

Yes, you should basically do DNS round-robin. I would assume external-dns would do the right thing here as well.

Another alternative is to do some sort of ECMP. This can be accomplished by having both load balancers "announce" the same IP space. That is an advanced configuration, however, which may not be necessary. There are interesting tradeoffs between BGP/ECMP and DNS updates; see this Dropbox engineering post for a deeper discussion.
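
One tool in this space (not mentioned above, so take this purely as an illustration) is MetalLB in BGP mode, where every node announces the same service IP and your router spreads the traffic across them via ECMP. A rough sketch of its legacy ConfigMap-style configuration, using documentation-only addresses and private ASNs (newer MetalLB releases use CRDs instead):

    # MetalLB (BGP mode): all speakers announce the service IP, and the upstream
    # router ECMPs traffic across the nodes. Addresses/ASNs below are placeholders.
    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: metallb-system
      name: config
    data:
      config: |
        peers:
        - peer-address: 198.51.100.1   # your upstream router (assumption)
          peer-asn: 64512
          my-asn: 64513
        address-pools:
        - name: public
          protocol: bgp
          addresses:
          - 203.0.113.10/32            # the single "service IP" clients connect to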

Finally, note that CoreDNS is looking at implementing public DNS records which could resolve this natively in Kubernetes, without external resources.

answered Feb 06 '23 by anarcat


Isn't running the ingress controller on only one worker node pointless for internally load balancing across the two frontend pods via their service? Is it best practice to run an ingress controller on every worker node in the cluster?

The number of ingress replicas will not affect the quality of load balancing. But for HA, you can run more than one replica of the controller.

Say the worker which runs the ingress controller dies for whatever reason and it gets rescheduled to another worker. The ingress entry point will then be at another IPv4 address, right? From the perspective of a user trying to access the frontend via www.domain.tld, this DNS entry has to be updated, right? How? Do I need to run a specific Kubernetes-aware DNS server somewhere? I don't understand the connection between the DNS server and the Kubernetes cluster.

Right, it will be at another IPv4 address, and yes, that DNS entry has to be updated. There are no standard tools for that included in Kubernetes: you need to run an external DNS server and manage its records yourself (manually, or with some tools or scripts).

The DNS server inside a Kubernetes cluster and your external DNS server are totally different things. The DNS server inside the cluster provides name resolution only within the cluster, for service discovery. Kubernetes does not know anything about access from external networks to the cluster, at least on bare metal. In a cloud, it can manage some things like load balancers to automate external access management.
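
For illustration, a small sketch of what that cluster-internal DNS gives you (the Service name matches the question's setup; the namespace and port are assumptions):

    # The frontend pods reach the database purely via the internal DNS name of this
    # Service; that name is only resolvable inside the cluster, not from the internet.
    apiVersion: v1
    kind: Service
    metadata:
      name: database
      namespace: default
    spec:
      selector:
        app: database
      ports:
      - port: 5432        # assuming e.g. PostgreSQL; adjust to your database
    # From a frontend pod, the database is then reachable as:
    #   database.default.svc.cluster.local:5432   (or simply "database" in the same namespace)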

If I run more ingress controller replicas (spread across multiple workers), should I use a DNS round-robin approach with multiple IPv4 addresses bound to one DNS entry? Or what's the best solution to achieve HA?

DNS round-robin works in that case, but if one of the nodes goes down, your clients will have problems connecting to it, so you need some way to move or remove that node's IP from the DNS entry.

The HA solution suggested by @jjo is not the worst way to achieve what you want, if you can prepare an environment for it. If not, you should choose something else; the best practice is to use a load balancer provided by the infrastructure. Whether it is based on several dedicated servers, load-balancing IPs, or something else does not matter.

answered Feb 06 '23 by Anton Kostenko