 

Is there a way to not use GKE's standard load balancer?

I'm trying to use Kubernetes to make configurations and deployments explicitly defined, and I also like Kubernetes' pod scheduling mechanisms. There are (for now) just 2 apps running as 2 replicas on 3 nodes. But Google Kubernetes Engine's load balancer is extremely expensive for a small app like ours (at least for the moment), and at the same time I'm not willing to move to a single-instance hosting solution on a container, or to deploy the app on Docker Swarm, etc.

Using a node's IP seemed like a hack, and I thought it might expose some security issues inside the cluster. So I configured a Træfik ingress and an ingress controller to get around Google's expensive flat rate for load balancing, but it turns out an outward-facing ingress spins up a standard load balancer, or I'm missing something.

I hope I'm missing something, because at these rates ($16 a month) I can't justify using Kubernetes for this app as a startup.

Is there a way to use GKE without using Google's load balancer?

asked Apr 08 '18 by interlude



1 Answer

An Ingress is just a set of rules that tell the cluster how to route to your services, and a Service is another set of rules to reach and load-balance across a set of pods, based on a selector. A Service can use three different routing types:

  • ClusterIP - this gives the service an IP that's only available inside the cluster which routes to the pods.
  • NodePort - this creates a ClusterIP, and then creates an externally reachable port on every single node in the cluster. Traffic to those ports routes to the internal service IP and then to the pods.
  • LoadBalancer - this creates a ClusterIP, then a NodePort, and then provisions a load balancer from the cloud provider (where available, as on GKE). Traffic hits the load balancer, then a port on one of the nodes, then the internal IP, and finally a pod.

These service types are not mutually exclusive but build on each other, which explains why anything public must be using a NodePort. Think about it: how else would traffic reach your cluster? A cloud load balancer just directs requests to your nodes, pointing at one of the NodePort ports. If you don't want a GKE load balancer, you can skip it and access those ports directly.
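
For example, here is a minimal sketch of a NodePort Service (the names yourapp and yourapp-svc and the specific port numbers are placeholders, not from the original question):

apiVersion: v1
kind: Service
metadata:
  name: yourapp-svc
spec:
  type: NodePort
  selector:
    app: yourapp        # must match the pod labels in your Deployment
  ports:
    - port: 80          # the Service's internal ClusterIP port
      targetPort: 80    # the containerPort on the pods
      nodePort: 30080   # reachable on this port on every node

With this in place the app is reachable at http://<node-external-ip>:30080 on any node, with no load balancer involved (on GKE you may also need a firewall rule that allows traffic to that port on the nodes).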

The downside is that NodePort ports are limited to the range 30000-32767. If you need a standard HTTP port like 80 or 443, you can't accomplish this with a Service; instead, specify the port directly in your Deployment. Use the hostPort setting to bind the container directly to port 80 on the node:

# excerpt from a Deployment's pod template spec
containers:
  - name: yourapp
    image: yourimage
    ports:
      - name: http
        containerPort: 80
        hostPort: 80  # binds directly to port 80 on the actual node

This might work for you, and it routes traffic directly to the container without any load balancing, but if a node has problems, or the app stops running on a node, it becomes unavailable.

If you still want load balancing, you can run a DaemonSet (so that it's available on every node) with Nginx (or any other proxy) exposed via hostPort, and have it route to your internal services. An easy way to run this is with the standard nginx-ingress package, but skip creating the LoadBalancer service for it and use the hostPort setting instead. The Helm chart can be configured for this:

https://github.com/helm/charts/tree/master/stable/nginx-ingress
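
As a rough sketch, the values might look something like this. The option names below are assumptions based on the chart's conventions and can differ between chart versions, so check the chart's README before using them:

controller:
  kind: DaemonSet        # run one controller pod per node
  daemonset:
    useHostPort: true    # bind ports 80/443 directly on each node
  service:
    type: ClusterIP      # avoid provisioning a cloud LoadBalancer

The trade-off is the same as above: the proxy is only reachable through healthy nodes, so you still need something (round-robin DNS across node IPs, for example) to spread traffic between them.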

answered Sep 19 '22 by Mani Gandham