 

gRPC Load Balancing

I have read the Load Balancing page at https://github.com/grpc/grpc/blob/master/doc/load-balancing.md to start off, but I am still confused about the right approach to load balancing between backend gRPC instances. We are deploying multiple gRPC 'microservice' instances and want our other gRPC clients to be able to be routed between them. We are deploying these as pods in Kubernetes (actually Google Container Engine).

Can anyone explain the 'recommended' approach to load balancing gRPC client requests between the gRPC servers? It seems that clients need to be aware of the endpoints - is it not possible to take advantage of the built-in LoadBalancer in Container Engine to help?

asked May 03 '17 by user3707

2 Answers

I can't talk about Kubernetes, but regarding gRPC load balancing, there are basically two approaches:

  1. For simple use cases, you can enable round robin over the list of addresses returned for a given name (i.e., the list of IPs returned for service.foo.com). The way to do this is language-dependent. For C++, you'd use grpc::ChannelArguments::SetLoadBalancingPolicyName with "round_robin" as the argument (in the future it'd also be possible to select it via the "service configuration", but the design for how to encode that config in DNS records hasn't been finalized yet). See the sketch after this list.
  2. Use the grpclb protocol. This is suitable for more complex deployments. This feature requires the c-ares DNS resolver, which #11237 introduces (this PR is very close to being merged). This is the piece that's missing for making grpclb work in open source. In particular:
    • Have a look at this document. It goes over the DNS configuration changes needed to control which addresses are marked as balancers. It's currently a "proposal", to be promoted to a doc shortly. It can be taken quite authoritatively; it's what #11237 is implementing for balancer discovery.
    • Write a regular gRPC server (in any language) implementing the load balancer protocol. This is the server to be marked in your DNS records as a balancer (as described in the aforementioned document), which the client's grpclb will talk to in order to obtain the list of backend addresses (what's called the server_list). It's up to you to make the logic inside this balancer as simple or as complex as you want; a rough sketch also follows this list.
    • The client would use the DNS name of the balancer when creating a channel. Note also that your balancer DNS name may point to several addresses. If one or more of them are marked as balancers, grpclb will be used. Which balancer will be picked if there's more than one? The first one the client connects to.
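To make this concrete, here is a minimal C++ client sketch; the hostnames and port are hypothetical, and insecure credentials are used only to keep the example short:

```cpp
#include <memory>

#include <grpcpp/grpcpp.h>

int main() {
  // Approach 1: client-side round robin over whatever addresses DNS
  // returns for the service name (the hostname here is made up).
  grpc::ChannelArguments rr_args;
  rr_args.SetLoadBalancingPolicyName("round_robin");
  std::shared_ptr<grpc::Channel> rr_channel = grpc::CreateCustomChannel(
      "dns:///service.foo.com:50051", grpc::InsecureChannelCredentials(),
      rr_args);

  // Approach 2: point the channel at the balancer's DNS name instead.
  // Once the resolver reports balancer addresses, the grpclb policy asks
  // that balancer for the list of backend addresses (the server_list).
  grpc::ChannelArguments lb_args;
  lb_args.SetLoadBalancingPolicyName("grpclb");
  std::shared_ptr<grpc::Channel> lb_channel = grpc::CreateCustomChannel(
      "dns:///balancer.foo.com:50051", grpc::InsecureChannelCredentials(),
      lb_args);

  // Stubs created on either channel pick a backend per call according to
  // the selected policy.
  return 0;
}
```

And a rough sketch of the balancer side, assuming C++ stubs generated from grpc/lb/v1/load_balancer.proto; the backend IPs and port are made up, and a real balancer would keep the stream open and push updated server_lists as backends come and go:

```cpp
#include <arpa/inet.h>

#include <grpcpp/grpcpp.h>

#include "load_balancer.grpc.pb.h"  // generated from grpc/lb/v1/load_balancer.proto

class SimpleBalancer final : public grpc::lb::v1::LoadBalancer::Service {
 public:
  grpc::Status BalanceLoad(
      grpc::ServerContext* /*ctx*/,
      grpc::ServerReaderWriter<grpc::lb::v1::LoadBalanceResponse,
                               grpc::lb::v1::LoadBalanceRequest>* stream) override {
    // The first request carries the name of the service being balanced.
    grpc::lb::v1::LoadBalanceRequest request;
    stream->Read(&request);

    // Acknowledge with an (empty) initial response, as the protocol expects.
    grpc::lb::v1::LoadBalanceResponse initial;
    initial.mutable_initial_response();
    stream->Write(initial);

    // Send a server_list with hard-coded, hypothetical backend addresses.
    grpc::lb::v1::LoadBalanceResponse response;
    auto* server_list = response.mutable_server_list();
    const char* backend_ips[] = {"10.0.0.1", "10.0.0.2"};
    for (const char* ip : backend_ips) {
      grpc::lb::v1::Server* server = server_list->add_servers();
      in_addr addr;
      inet_pton(AF_INET, ip, &addr);
      server->set_ip_address(&addr, sizeof(addr));  // raw bytes, network byte order
      server->set_port(50051);
    }
    stream->Write(response);
    return grpc::Status::OK;
  }
};
```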

Let me know if you have any questions.

answered Oct 13 '22 by David García Quintas


For load balancing between gRPC servers, the Kubernetes default load balancer won't help, as it is an L4 load balancer. You would need an L7 load balancer.

Why L7?

gRPC uses HTTP/2, where connections are persistent and requests are sent over the same connection. An L4 load balancer balances across TCP connections, but you need balancing at the request level, so an L7 load balancer is required, especially when the communication is between gRPC servers.

There are a couple of options: you could use Linkerd or Envoy for this; they work well with Kubernetes and also provide a good service mesh.

To expose your services to the outside world, you can use nghttpx and the nghttpx Ingress controller.

You can also use client-side load balancing, but I don't see much merit in that.

answered Oct 13 '22 by Samarendra