 

Use same GCP Load Balancer to route between GCS bucket and GKE

I’ve been trying to figure out whether it’s possible to host a static React app in a Google Cloud Storage bucket and use Cloud CDN with a single Google Cloud Load Balancer to route cache misses to the bucket, manage TLS certificates, and route API requests from the React app to a backend hosted in GKE.

Would it be possible to achieve this architecture, or would there be another recommended approach?

asked Feb 28 '20 by moku



2 Answers

You can have a single load balancer with two (or more) host rules: one for api.example.com with a backend service pointing to GKE, and another for static.example.com with a backend bucket.

This backend bucket would have CDN enabled. You can point multiple routes to the same backend if needed.
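As a sketch, the URL map for such a setup might look like the following (hostnames and backend names are placeholder assumptions), in the declarative format accepted by `gcloud compute url-maps import`:

```yaml
# Hypothetical URL map: host-based routing to a GKE backend service
# and a CDN-enabled backend bucket. All names/hosts are examples.
name: web-map
defaultService: global/backendBuckets/static-site-bucket
hostRules:
- hosts:
  - api.example.com
  pathMatcher: api-paths
- hosts:
  - static.example.com
  pathMatcher: static-paths
pathMatchers:
- name: api-paths
  defaultService: global/backendServices/gke-api-backend
- name: static-paths
  defaultService: global/backendBuckets/static-site-bucket
```

Any number of host rules can point at the same backend, so additional domains can reuse the bucket or the GKE backend service without extra load balancers.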

Specifically:

  1. Create a Kubernetes Service that is exposed through a standalone Network Endpoint Group (NEG). This allows you to manage the load balancer outside of GKE. Docs: https://cloud.google.com/kubernetes-engine/docs/how-to/standalone-neg

  2. Create an HTTP(S) Load Balancer with the route(s) you want to match your API endpoint. Create a BackendService during the load balancer creation flow, and point it at the zonal Network Endpoint Group you created in step #1. Docs: https://cloud.google.com/load-balancing/docs/https/https-load-balancer-example

  3. Create a BackendBucket in the same flow, pointing it at the bucket you want to use for storing your static React assets. Make sure to tick the “Enable Cloud CDN” box & create a route that sends traffic to that bucket. Docs: https://cloud.google.com/cdn/docs/using-cdn#enable_existing

  4. Finish creating the LB, which will assign IP addresses, and update DNS for your domain names to point at those IPs.
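The steps above can be sketched roughly as follows. All resource names, the NEG zone, ports, and domains are placeholder assumptions, and exact flags may vary by `gcloud` version; treat this as an outline under those assumptions, not a definitive recipe:

```shell
# 1. Service exposed through a standalone NEG; the annotation tells GKE
#    to create a NEG named "api-neg" (example name) for port 80.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: api-service
  annotations:
    cloud.google.com/neg: '{"exposed_ports": {"80": {"name": "api-neg"}}}'
spec:
  selector:
    app: api
  ports:
  - port: 80
    targetPort: 8080
EOF

# 2. Backend service pointing at the standalone zonal NEG.
gcloud compute health-checks create http api-health-check --port=8080
gcloud compute backend-services create gke-api-backend \
  --global --protocol=HTTP --health-checks=api-health-check
gcloud compute backend-services add-backend gke-api-backend \
  --global --network-endpoint-group=api-neg \
  --network-endpoint-group-zone=us-central1-a \
  --balancing-mode=RATE --max-rate-per-endpoint=100

# 3. CDN-enabled backend bucket for the static React assets.
gcloud compute backend-buckets create static-site-bucket \
  --gcs-bucket-name=my-react-app-bucket --enable-cdn

# URL map: api.example.com goes to GKE, everything else to the bucket.
gcloud compute url-maps create web-map \
  --default-backend-bucket=static-site-bucket
gcloud compute url-maps add-path-matcher web-map \
  --path-matcher-name=api-paths --default-service=gke-api-backend \
  --new-hosts=api.example.com

# 4. Managed certificate, HTTPS proxy, and the forwarding rule that
#    assigns the load balancer's public IP.
gcloud compute ssl-certificates create web-cert \
  --domains=api.example.com,static.example.com
gcloud compute target-https-proxies create web-proxy \
  --url-map=web-map --ssl-certificates=web-cert
gcloud compute forwarding-rules create web-rule \
  --global --target-https-proxy=web-proxy --ports=443
```

After the forwarding rule is created, point your domains' DNS A records at the assigned IP.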

answered Oct 24 '22 by elithrar

The first thing to take into consideration with this approach is that the CDN sits in front of the load balancer, not the other way around. This means no routing happens at the CDN itself; routing is done only after the CDN cache requests the content from its origin.

Apart from that, the CDN caches content only after the first cache miss: it fetches a resource from the origin the first time a client requests it.

If the resource is not already cached in the CDN, the request is routed to the backend (through the load balancer) to retrieve it and make a "local copy". Of course, this requires that the resource also exist in the backend for the CDN to cache it.

Your approach seems to assume that the CDN will act as a different kind of persistence layer, so I believe it is still possible, but using a Cloud Storage bucket rather than Cloud CDN.

Since buckets have multi-regional classes, you might be able to achieve something really similar to what you're trying with the CDN.

Update:

Considering the new premise: Using the same load balancer to route requests between the static site hosted in a GCS bucket and the API deployed in GKE, with the CDN in front of it and with support for certificates.

Although the HTTP(S) Load Balancer can manage certificates, is compatible with Cloud CDN, can have buckets or GCE instances as backends, and is the default Ingress option in GKE (so it's compatible with GKE as well), this approach doesn't seem feasible.

When you expose an application on GKE using the default ingress class (GCE) that deploys this kind of load balancer, the GKE cloud controller manager is in charge of that resource and relies on the definitions deployed to GKE.

If you try to manually manage the load balancer to add a new backend, in your case, the bucket containing your static application, the changes might be reversed if a new version of the Ingress is deployed to the cluster.

In the opposite case, where you manually create the load balancer and configure its backend to serve your bucket's content: there's no supported way to attach that load balancer to the GKE cluster afterwards; it has to be created from within Kubernetes.

So, in a nutshell: Either you use the load balancer with the bucket or with the GKE cluster, not both due to the aforementioned design.

This of course is completely possible if you deploy 2 different load balancers (Ingresses, in GKE terms) and put your CDN in front of the one serving the bucket. I mention this to contrast it with the information above.
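For that two-load-balancer alternative, the GKE side might look like the sketch below (the names, host, and port are illustrative assumptions); the bucket's load balancer, with its backend bucket and Cloud CDN enabled, would then be created separately outside the cluster:

```yaml
# Hypothetical GKE Ingress for the API; the GKE ingress controller
# provisions and owns this external HTTP(S) load balancer.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"   # default external HTTP(S) LB
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
```

Because the ingress controller owns this load balancer, any manual edits to it (such as attaching a backend bucket) risk being reverted when the Ingress is reconciled, which is why the bucket gets its own load balancer in this variant.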

Let me know if this helps :)

answered Oct 24 '22 by yyyyahir