I see that Kubernetes Services can be of type ClusterIP, NodePort, or LoadBalancer. The LoadBalancer type requires a cloud provider. If I do not have a cloud provider, how can I load balance traffic between nodes?
I know that HAProxy can load balance, but I think the cloud load balancer is different from a plain HAProxy, and I want to know the difference between HAProxy and an Ingress controller such as HAProxy or Nginx.
I want a load balancer that balances traffic between my worker nodes. A Service load balances traffic between Pods, and I think an Ingress controller is a layer 7 load balancer. What I want is load balancing between my nodes.
The easiest way, as you probably know, would be to set the Service to type NodePort; this signals kube-proxy to listen on a random port in the default range of 30000-32767 on every node. Under the hood this random port will be mapped (port-forwarded) to the Service port.
You can now send traffic, let's say to the random port 30001, to any of the nodes and you'll be load balanced internally between the Pods. If you now spin up e.g. a VM in the same network as the nodes, or in a network that can reach the nodes, and set up load balancing across node-{a,b,c}:30001, you effectively have your own external load balancer.
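A minimal sketch of such a NodePort Service could look like the following (the name, labels and ports are placeholders, not anything from your cluster):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app              # hypothetical Service name
spec:
  type: NodePort
  selector:
    app: my-app             # must match the labels on your Pods
  ports:
    - port: 80              # Service port inside the cluster
      targetPort: 8080      # container port your Pods listen on
      nodePort: 30001       # optional; omit it and Kubernetes picks a port from 30000-32767
```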
You could, although not recommended for many good reasons, basically just send traffic to one of the nodes (node-a:30001) in a multi-node cluster and the traffic would still be load balanced internally. This is possible because every instance of kube-proxy knows where all the Pods (or Endpoints in the context of a Service) are located at any given time.
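If you go the external VM route described above, a rough haproxy.cfg sketch for that VM could look like this (the node IPs and ports are made up for illustration; adjust them to your environment):

```
defaults
    mode tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend k8s_nodeport
    bind *:80                             # traffic entering the VM on port 80
    default_backend k8s_nodes

backend k8s_nodes
    balance roundrobin
    server node-a 10.0.0.11:30001 check   # placeholder node IPs
    server node-b 10.0.0.12:30001 check
    server node-c 10.0.0.13:30001 check
```

kube-proxy on whichever node receives the connection then forwards it to a healthy Pod, which may well live on a different node.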
Note that kube-proxy and iptables (this may vary!) are the components that implement the Service object in all cases except when the type is LoadBalancer. LoadBalancer requests will be dispatched to either the built-in or an external cloud controller manager.
The Ingress object exists to add L7 logic in front of one or more Services, but as you've seen the Ingress is worthless if there's no Ingress controller implementing it. The HAProxy and Nginx Ingress controllers would more or less do the same thing for you, but they do not solve your problem short-term. Yes, you'll have load balancing, but not in the way you might think.
If you do not have any form of (private/public) cloud with Kubernetes integration backing your cluster, the Nginx and HAProxy Ingress controllers would only be another Service running in your cluster. You would of course be able to do smart things like proxying, URL routing, hostname matching and so on.
One of the questions to answer if you're in a non-cloud environment (e.g. bare metal only) is: how do I get an IP address into the EXTERNAL-IP field of a Service of type LoadBalancer? (I'm assuming the output of the kubectl get service command.) One good answer, as already stated in the comments here, is MetalLB.
MetalLB will give you automation with regards to configuring the external IP(s) of your Service of type LoadBalancer. But you could also configure the externalIPs field of the Service object manually and set it to an IP address that makes sense in your environment. Thanks @danielrubambura for pointing this out!
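As a sketch of the manual approach, assuming 10.0.0.50 is an address that is routed to your nodes (the name and ports are again placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
  externalIPs:
    - 10.0.0.50       # placeholder; an IP you route to the cluster nodes yourself
```

With externalIPs set, kube-proxy accepts traffic for that address on the Service port, and kubectl get service shows it in the EXTERNAL-IP column instead of &lt;pending&gt;.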
Also see this page over at the official Nginx controller documentation that could shed some light on how and why to use MetalLB in some circumstances.
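For reference, recent MetalLB releases in layer 2 mode are configured through CRDs, roughly like the sketch below (older releases used a ConfigMap instead, and the address range here is only an example from a private network; check the MetalLB docs for your version):

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250   # example range; must be free addresses in your network
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
```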
I'm leaving out the comparison between the Nginx and HAProxy controllers since I don't think it is important in this case. In the end they'll give you Nginx or HAProxy Pods configured as you want through the Ingress object, with e.g. routing to different Services based on the Host header of the incoming requests.
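For completeness, a host-routing Ingress sketch (the hostnames, Service names and the ingressClassName are hypothetical and depend on which controller you actually install):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: host-routing
spec:
  ingressClassName: nginx          # or the class of your HAProxy controller
  rules:
    - host: app1.example.com       # requests with this Host header...
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1         # ...go to this (hypothetical) Service
                port:
                  number: 80
    - host: app2.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app2
                port:
                  number: 80
```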
Hopefully this clears things up a bit!