How to get HTTPS on AKS without ingress

My problem is simple. I have an AKS deployment with a LoadBalancer service that needs to use HTTPS with a certificate.

How do I do this?

Everything I'm seeing online involves Ingress and nginx-ingress in particular.

But my deployment is not a website; it's a Dropwizard service with a REST API on one port and an admin service on another. I don't want to map those ports to paths on port 80; I want to keep the ports as they are. Why is HTTPS tied to ingress?

I just want HTTPS with a certificate and nothing more changed, is there a simple solution to this?

Novaterata asked Oct 26 '18


2 Answers

A sidecar container running nginx with the correct certificates (possibly loaded from a Secret or a ConfigMap) will do the job without an Ingress. This seems to be a good example, using the nginx-ssl-proxy container.
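
Roughly, a minimal sketch of that sidecar pattern might look like the following; the image names, the tls-secret Secret, and the nginx-tls-conf ConfigMap are placeholders for illustration, not something from the linked example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: your-app
  template:
    metadata:
      labels:
        app: your-app
    spec:
      containers:
      - name: dropwizard
        image: your-registry/your-app:latest   # your Dropwizard service, plain HTTP on 8080
        ports:
        - containerPort: 8080
      - name: tls-proxy
        image: nginx:stable                    # sidecar that terminates TLS
        ports:
        - containerPort: 8443
        volumeMounts:
        - name: tls-certs
          mountPath: /etc/nginx/certs
          readOnly: true
        - name: nginx-conf
          mountPath: /etc/nginx/conf.d
      volumes:
      - name: tls-certs
        secret:
          secretName: tls-secret               # a kubernetes.io/tls Secret with tls.crt and tls.key
      - name: nginx-conf
        configMap:
          name: nginx-tls-conf                 # a server block proxying 8443 -> 127.0.0.1:8080

Your LoadBalancer Service then targets port 8443 on the sidecar instead of the Dropwizard port, so TLS is terminated inside the pod and nothing else about the deployment has to change.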

Alessandro Vozza answered Oct 08 '22


Yes, that's right: as of this writing an Ingress will only work on port 80 or port 443. It could potentially be extended to use any port, since nginx, Traefik, haproxy, etc. can all listen on different ports.

So you are down to either a LoadBalancer or a NodePort type of service. Type LoadBalancer will not terminate TLS directly, since the Azure load balancers are layer 4, so you will have to put an Application Gateway in front of it, and for security reasons it's preferred to use an internal load balancer.

Since you are using Azure you can run something like this (assuming that your K8s cluster is configured the right way to use the Azure cloud provider, either the --cloud-provider option or the cloud-controller-manager):

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: your-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: <your-port>
  selector:
    app: your-app
EOF

and that will create an Azure load balancer on the port you like for your service. Behind the scenes, the load balancer will point to a port on the nodes, and within the nodes there will be firewall rules that route to your container. Then you can configure the Application Gateway. Here's a good article describing it, but it uses port 80; you will have to change it to use port 443 and configure the TLS certificates. The Application Gateway also supports end-to-end TLS in case you want to terminate TLS directly on your app too.
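
As a rough sketch (not a complete walkthrough), the Application Gateway piece could be created with the Azure CLI along these lines; the resource names, the PFX file, and the 10.240.0.100 backend address (the internal load balancer IP created by the Service above) are placeholders you would replace with your own:

$ az network application-gateway create \
    --name your-app-gateway \
    --resource-group your-resource-group \
    --sku Standard_v2 \
    --capacity 2 \
    --vnet-name your-vnet \
    --subnet appgw-subnet \
    --public-ip-address your-appgw-pip \
    --frontend-port 443 \
    --cert-file your-cert.pfx \
    --cert-password "<pfx-password>" \
    --http-settings-port <your-port> \
    --http-settings-protocol Http \
    --servers 10.240.0.100

With --frontend-port 443 and a certificate, the gateway terminates TLS and forwards plain HTTP to the internal load balancer; switching --http-settings-protocol to Https is the end-to-end TLS case where the app terminates TLS itself.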

The other option is NodePort, and you can run something like this:

$ kubectl expose deployment <deployment-name> --type=NodePort

Then Kubernetes will pick a random port on all your nodes through which you can send traffic to your service listening on <your-port>. So, in this case, you will have to manually create a load balancer that terminates TLS and forwards the traffic to that NodePort on all your nodes; this load balancer can be anything like haproxy, nginx, Traefik, or something else that supports TLS termination. You can also use the Application Gateway to forward directly to your node ports; in other words, define a listener that listens on the NodePort of your cluster.
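
For example, assuming the exposed Service kept the deployment's name, you could look up the assigned NodePort and the node addresses to point that external load balancer at like this:

$ kubectl get svc <deployment-name> -o jsonpath='{.spec.ports[0].nodePort}'
$ kubectl get nodes -o wide    # the INTERNAL-IP column lists the node addresses to use as backends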

Rico answered Oct 08 '22