
Understanding --master-ipv4-cidr when provisioning private GKE clusters

I am trying to further understand what exactly is happening when I provision a private cluster in Google's Kubernetes Engine.

Google's documentation provides this example of provisioning a private cluster where the control-plane services (e.g., the Kubernetes API server) live on the 172.16.0.16/28 subnet:

https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters

gcloud beta container clusters create pr-clust-1 \
 --private-cluster \
 --master-ipv4-cidr 172.16.0.16/28 \
 --enable-ip-alias \
 --create-subnetwork ""

When I run this command, I see that:

  • I now have a few GKE subnets in my VPC that belong to the cluster: subnets for nodes and services. These are in the 10.0.0.0/8 range.
  • I don't have any subnets in the 172.16.0.0/16 address space.
  • I do have some new peering rules and routes that seem to be related. For example, there is a new route, peering-route-a08d11779e9a3276, with a destination address range of 172.16.0.16/28 and next hop gke-62d565a060f347e0fba7-3094-3230-peer. That peering in turn points to gke-62d565a060f347e0fba7-3094-bb01-net. (I also try listing the peering routes directly, after the output below.)

gcloud compute networks subnets list | grep us-west1

#=>

default                     us-west1                 default  10.138.0.0/20
gke-insti3-subnet-62d565a0  us-west1                 default  10.2.56.0/22

gcloud compute networks peerings list

#=>

NAME                                     NETWORK  PEER_PROJECT              PEER_NETWORK                                        AUTO_CREATE_ROUTES  STATE   STATE_DETAILS
gke-62d565a060f347e0fba7-3094-3230-peer  default  gke-prod-us-west1-a-4180  gke-62d565a060f347e0fba7-3094-bb01-net              True                ACTIVE  [2018-08-23T16:42:31.351-07:00]: Connected.
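
If I understand correctly, the routes exchanged over this peering can also be listed directly. This seems to need a newer gcloud release than the one shown above, but something like the following should work (peering name, network, and region taken from the output above):

gcloud compute networks peerings list-routes gke-62d565a060f347e0fba7-3094-3230-peer \
 --network default \
 --region us-west1 \
 --direction INCOMING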

Is gke-62d565a060f347e0fba7-3094-bb01-net a peered VPC that Google manages for the GKE service, in which the Kubernetes management endpoints (the control-plane pieces in the 172.16.0.16/28 range) live?

Further, how are my requests making it to the Kubernetes API server?

asked Aug 24 '18 by Chris Mutzel





1 Answer

The Private Cluster feature of GKE depends on the Alias IP Ranges feature of VPC networking, so there are multiple things happening when you create a private cluster:

  • The --enable-ip-alias flag tells GKE to use a subnetwork that has two secondary IP ranges: one for Pods and one for Services. This lets the VPC network understand all the IP addresses in your cluster and route traffic appropriately. (See the first sketch after this list for how to inspect these ranges.)

  • The --create-subnetwork flag tells GKE to create a new subnetwork (gke-insti3-subnet-62d565a0 in your case) and choose its primary and secondary ranges automatically. Note that you could instead choose the secondary ranges yourself with --cluster-ipv4-cidr and --services-ipv4-cidr. Or you could even create the subnetwork yourself and tell GKE to use it with the --subnetwork, --cluster-secondary-range-name, and --services-secondary-range-name flags (a rough sketch of this variant follows the list).

  • The --private-cluster flag tells GKE to create a new VPC network (gke-62d565a060f347e0fba7-3094-bb01-net in your case) in a Google-owned project and connect it to your VPC network using VPC Network Peering. The Kubernetes management endpoints live in the range you specify with --master-ipv4-cidr (172.16.0.16/28 in your case). An Internal Load Balancer is also created in the Google-owned project, and this is what your worker nodes communicate with; in the case of a Regional Cluster, it load-balances traffic across multiple master VMs. You can find this internal IP address as the privateEndpoint field in the output of gcloud beta container clusters describe (see the endpoint sketch after this list). The important thing to understand is that all communication between master VMs and worker node VMs happens over internal IP addresses, thanks to the VPC peering between the two networks.

  • Your private cluster also has an external IP address, which you can find as the endpoint field in the output of gcloud beta container clusters describe. This is not used by the worker nodes, but is typically used by customers to manage their cluster remotely, e.g., using kubectl.

  • You can use the Master Authorized Networks feature to restrict which IP ranges (both internal and external) have access to the management endpoints (sketched last below). This feature is strongly recommended for private clusters, and it is enabled by default when you create the cluster with the gcloud CLI.
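
As a quick check of the first point above, you can inspect the primary and secondary ranges GKE picked by describing the generated subnetwork. A minimal sketch, using the subnet name and region from the question (the format string is just one way to trim the output):

gcloud compute networks subnets describe gke-insti3-subnet-62d565a0 \
 --region us-west1 \
 --format "yaml(ipCidrRange,secondaryIpRanges)"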
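
Here is a rough sketch of the bring-your-own-subnetwork variant from the second point. Every name and CIDR below is made up for illustration; pick ranges that don't overlap anything else in your VPC:

gcloud compute networks subnets create my-gke-subnet \
 --network default \
 --region us-west1 \
 --range 10.10.0.0/20 \
 --secondary-range my-pods=10.20.0.0/14,my-services=10.30.0.0/20

gcloud beta container clusters create pr-clust-2 \
 --private-cluster \
 --master-ipv4-cidr 172.16.0.32/28 \
 --enable-ip-alias \
 --subnetwork my-gke-subnet \
 --cluster-secondary-range-name my-pods \
 --services-secondary-range-name my-services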
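
Both endpoints can be read out of the describe output. A sketch, assuming the cluster from the question lives in zone us-west1-a; note that in newer API versions the private endpoint is nested under privateClusterConfig rather than sitting at the top level:

gcloud beta container clusters describe pr-clust-1 \
 --zone us-west1-a \
 --format "yaml(endpoint,privateClusterConfig.privateEndpoint)"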
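
Finally, a sketch of locking the endpoints down with Master Authorized Networks. The CIDR is a placeholder for whatever ranges should be allowed to reach the master, and the zone is again an assumption:

gcloud container clusters update pr-clust-1 \
 --zone us-west1-a \
 --enable-master-authorized-networks \
 --master-authorized-networks 203.0.113.0/29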

Hope this helps!

answered Oct 12 '22 by Alan Grosskurth