
Kubernetes, GCE, Load balancing, SSL

To preface this: I'm working on GCE with Kubernetes. My goal is simply to expose all the microservices on my cluster over SSL. Ideally it would work the same as when you expose a deployment via type='LoadBalancer' and get a single external IP, but SSL termination is not available with those basic load balancers.

From my research, the best current solution seems to be to set up an nginx ingress controller and use ingress resources and services to expose my microservices. Below is a diagram I drew up of my understanding of this process.

(diagram: nginx ingress controller routing traffic to microservices)

I've got this all working successfully over HTTP. I deployed the default nginx controller from https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx, along with the default backend and its service. The ingress for my own microservice has rules set for my domain name and path: /.
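Roughly, the ingress resource I mean looks like this (the host, service name, and port here are placeholders, not my real values):

```yaml
# Hypothetical ingress for a single microservice behind the nginx controller.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-microservice-ingress
spec:
  rules:
  - host: example.mydomain.com     # placeholder domain
    http:
      paths:
      - path: /
        backend:
          serviceName: my-microservice   # placeholder service name
          servicePort: 80
```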

This was successful but there were two things that were confusing me a bit.

  1. When exposing the service resource for my backend (microservice), one guide I followed used type='NodePort' and the other just specified a port to reach the service. Both set the target port to the backend app's port. I tried it both ways and both seemed to work. Guide one is from the link above; guide two is http://blog.kubernetes.io/2016/03/Kubernetes-1.2-and-simplifying-advanced-networking-with-Ingress.html. What is the difference here?

  2. Another point of confusion is that my ingress always gets two IPs. My initial thought was that there should be only one external IP, which would hit my ingress, which nginx then routes. Or does the IP point directly at nginx? In any case, the first IP address created gives me the expected results, whereas visiting the second IP fails.

Despite my confusion, things seemed to work fine over HTTP. Over HTTPS, not so much. At first, web requests over HTTPS would just hang. Opening 443 in my firewall rules seemed to fix that, but now I hit my default backend rather than my microservice.

Reading led me to this in the Kubernetes docs: "Currently the Ingress resource only supports http rules." This may explain why I am hitting the default backend, since my rules are only for HTTP. But if so, how am I supposed to use this approach for SSL?

Another thing I noticed: if I write an ingress resource with no rules and give it my desired backend, I still get directed to the original default backend. This is even more odd because kubectl describe ing shows my desired backend as the default backend...

Any help or guidance would be much appreciated. Thanks!

Steve asked Jan 02 '17




2 Answers

To respond directly to your questions, since that's the whole point... Disclaimer: I'm a n00b, so take this all with a grain of salt.

With respect to #2, the blog post I link to below suggests the following architecture:

  • Create a deployment that deploys the nginx controller pods
  • Create a service with a type LoadBalancer and a static IP that routes traffic to the controller pods
  • Create an ingress resource that gets used by the nginx controller pods
  • Create a secret that gets used by the nginx controller pods to terminate SSL
  • And other stuff too
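The second bullet is the piece that gives you the single external IP. A rough sketch of that service, assuming a static IP has already been reserved in GCE (all names and the IP are placeholders):

```yaml
# Fronts the nginx controller pods with one static external IP.
# loadBalancerIP must be a regional static IP reserved in GCE beforehand.
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-lb
spec:
  type: LoadBalancer
  loadBalancerIP: 1.2.3.4          # placeholder; your reserved static IP
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  selector:
    app: nginx-ingress-controller  # must match the controller deployment's pod labels
```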

From what I understand, the HTTP vs. HTTPS handling happens in the nginx controller pods. All of my ingress rules are plain HTTP, but the nginx ingress controller forces SSL and terminates it at the controller, so everything below it, all the ingress stuff, can stay HTTP. Even with all-HTTP rules, all of my traffic through the LoadBalancer service is forced to use SSL.
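For what it's worth, the contrib controller's docs of that era describe this redirect as on by default whenever an ingress has a tls section, and toggleable per ingress with an annotation (the annotation name is from those docs and may differ in newer versions):

```yaml
metadata:
  annotations:
    # "false" would disable the HTTP-to-HTTPS redirect for this ingress;
    # the default is "true" when a tls section is present.
    ingress.kubernetes.io/ssl-redirect: "true"
```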

Again, I'm a n00b. Take this all with a grain of salt. I'm speaking in layman's terms because I'm a layman trying to figure this all out.

I came across your question while looking for some answers to my own questions. I ran into a lot of the same issues that you ran into (I'm assuming past tense given the amount of time that has passed). I wanted to point you (and/or others with similar issues) to a blog post that I found helpful when learning about the nginx controller. So far (I'm still at an early stage and in the middle of using the post), everything in the post has worked.

You're probably already past this stuff now being that it's been a few months. But maybe this will help someone else even if it doesn't help you:

https://daemonza.github.io/2017/02/13/kubernetes-nginx-ingress-controller/

It helped me understand what resources needed to be created, how to deploy the controller pods, how to expose them (create a LoadBalancer service for the controller pods with a static IP), and also how to force SSL. It helped me jump over several hurdles and get past the "how do all the moving parts fit together" stage.

The Kubernetes technical documentation is helpful for how to use each piece, but doesn't necessarily lay it all out and slap pieces together like this blog post does. Disclaimer: the model in the blog post might not be the best way to do it though (I don't have enough experience to make that call), but it did help me at least get a working example of an nginx ingress controller that forced SSL.

Hope this helps someone eventually.

Andrew

Andrew answered Nov 15 '22


So, for #2, you've likely ended up provisioning a Google HTTP(S) LoadBalancer as well, because you're missing the kubernetes.io/ingress.class: "nginx" annotation, as described here: https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx#running-multiple-ingress-controllers.
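Per that linked doc, the annotation goes on the ingress resource itself, so that only the nginx controller (and not GKE's built-in one) picks it up. A minimal sketch, with a placeholder ingress name:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-microservice-ingress   # placeholder
  annotations:
    # Tells GKE's built-in controller to ignore this ingress,
    # leaving it to the nginx controller.
    kubernetes.io/ingress.class: "nginx"
```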

GKE has it's own ingress controller that you need to override by sticking that annotation on your nginx deployment. This article has a good explanation about that stuff.

The Kubernetes docs have a pretty good description of what NodePort means: the service is allocated a port from a high range on every Node in your cluster, and the Nodes forward traffic from that port to your service. It's one way of setting up load balancers in different environments, but it's not necessary for your approach. You can just omit the type field of your microservice's Service and it will get the default type, ClusterIP.
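A minimal sketch of such a Service, with placeholder names and ports:

```yaml
# No "type" field: defaults to ClusterIP, reachable only inside the
# cluster, which is all the nginx ingress controller needs.
apiVersion: v1
kind: Service
metadata:
  name: my-microservice       # placeholder
spec:
  ports:
  - port: 80
    targetPort: 8080          # placeholder: the container port of your app
  selector:
    app: my-microservice      # must match your deployment's pod labels
```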

As for SSL, it could be a few different things. I would make sure you've got the Secret set up just as described in the nginx controller docs, i.e. with tls.crt and tls.key fields.
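Concretely, a secret of type kubernetes.io/tls carries exactly those two keys, and the ingress references it by name under a separate tls section, alongside (not inside) the http rules. Names and hosts below are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-tls-secret           # placeholder
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded cert>   # base64 of the PEM certificate
  tls.key: <base64-encoded key>    # base64 of the PEM private key
---
# The ingress then points at the secret by name:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-microservice-ingress    # placeholder
spec:
  tls:
  - secretName: my-tls-secret
  rules:
  - host: example.mydomain.com     # placeholder
    http:
      paths:
      - path: /
        backend:
          serviceName: my-microservice
          servicePort: 80
```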

I'd also check the logs of the nginx controller: find which pod it's running as with kubectl get pods, then tail its logs with kubectl logs -f nginx-pod-<some-random-hash>. This helps uncover misconfiguration, for example a service that has no endpoints. Most of the times I've messed up the ingress stuff, it's been due to some pretty basic misconfiguration of Services or Deployments.

You'll also need a DNS record for your hostname pointing at the LoadBalancer's static IP, or else hit your service with cURL's -H flag as they do in the docs; otherwise you'll end up routed to the default backend's 404.

IanI answered Nov 15 '22