I am trying to further understand what exactly is happening when I provision a private cluster in Google's Kubernetes Engine.
Google provides this example here of provisioning a private cluster where the control plane services (e.g. the Kubernetes API) live on the 172.16.0.16/28 subnet:
https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters
gcloud beta container clusters create pr-clust-1 \
--private-cluster \
--master-ipv4-cidr 172.16.0.16/28 \
--enable-ip-alias \
--create-subnetwork ""
When I run this command, I see that my VPC's subnets are in the 10.x.x.x/8 range, while the control plane lives in the 172.16/16 address space. A peering route peering-route-a08d11779e9a3276 is created with a destination address range of 172.16.0.16/28 and next hop gke-62d565a060f347e0fba7-3094-3230-peer. This peering then points to gke-62d565a060f347e0fba7-3094-bb01-net.
gcloud compute networks subnets list | grep us-west1
#=>
default us-west1 default 10.138.0.0/20
gke-insti3-subnet-62d565a0 us-west1 default 10.2.56.0/22
gcloud compute networks peerings list
#=>
NAME NETWORK PEER_PROJECT PEER_NETWORK AUTO_CREATE_ROUTES STATE STATE_DETAILS
gke-62d565a060f347e0fba7-3094-3230-peer default gke-prod-us-west1-a-4180 gke-62d565a060f347e0fba7-3094-bb01-net True ACTIVE [2018-08-23T16:42:31.351-07:00]: Connected.
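For reference, the routes exchanged over that peering can also be inspected directly. This is a sketch using gcloud compute networks peerings list-routes, which may require a newer gcloud release than the one used above; the peering, network, and region names are the ones from this project:
gcloud compute networks peerings list-routes gke-62d565a060f347e0fba7-3094-3230-peer \
    --network default \
    --region us-west1 \
    --direction INCOMING
#=> the 172.16.0.16/28 route imported from the peer network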
Is gke-62d565a060f347e0fba7-3094-bb01-net a peered VPC in which the Kubernetes management endpoints live (the control plane stuff in the 172.16/16 range) that Google is managing for the GKE service?
Further, how are my requests making it to the Kubernetes API server?
The Private Cluster feature of GKE depends on the Alias IP Ranges feature of VPC networking, so there are multiple things happening when you create a private cluster:
The --enable-ip-alias flag tells GKE to use a subnetwork that has two secondary IP ranges: one for pods and one for services. This allows the VPC network to understand all the IP addresses in your cluster and route traffic appropriately.
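One quick way to see those ranges is to describe the auto-created subnetwork (a sketch using the subnet and region from the question; secondaryIpRanges is the field the Compute API uses for those ranges):
gcloud compute networks subnets describe gke-insti3-subnet-62d565a0 \
    --region us-west1 \
    --format "yaml(ipCidrRange,secondaryIpRanges)"
#=> the primary node range plus one secondary range for pods and one for services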
The --create-subnetwork flag tells GKE to create a new subnetwork (gke-insti3-subnet-62d565a0 in your case) and choose its primary and secondary ranges automatically. Note that you could instead choose the secondary ranges yourself with --cluster-ipv4-cidr and --services-ipv4-cidr. Or you could even create the subnetwork yourself and tell GKE to use it with the flags --subnetwork, --cluster-secondary-range-name, and --services-secondary-range-name, as sketched below.
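For example, a minimal sketch of that last variant, assuming an existing subnet named my-subnet with secondary ranges named pods and services (the private-cluster flags from the question's command would be added on top of this):
# my-subnet, pods, and services are placeholder names for pre-created resources
gcloud container clusters create my-cluster \
    --zone us-west1-a \
    --enable-ip-alias \
    --subnetwork my-subnet \
    --cluster-secondary-range-name pods \
    --services-secondary-range-name services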
The --private-cluster flag tells GKE to create a new VPC network (gke-62d565a060f347e0fba7-3094-bb01-net in your case) in a Google-owned project and connect it to your VPC network using VPC Network Peering. The Kubernetes management endpoints live in the range you specify with --master-ipv4-cidr (172.16.0.16/28 in your case). An Internal Load Balancer is also created in the Google-owned project, and this is what your worker nodes communicate with. This ILB allows traffic to be load-balanced across multiple VMs in the case of a Regional Cluster. You can find this internal IP address as the privateEndpoint field in the output of gcloud beta container clusters describe. The important thing to understand is that all communication between master VMs and worker node VMs happens over internal IP addresses, thanks to the VPC peering between the two networks.
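A sketch of pulling those fields out of describe (the zone is an assumption here; in newer GKE releases the private and public endpoints are nested under privateClusterConfig rather than exposed as a flat privateEndpoint field):
gcloud beta container clusters describe pr-clust-1 \
    --zone us-west1-a \
    --format "yaml(endpoint,privateClusterConfig)"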
Your private cluster also has an external IP address, which you can find as the endpoint field in the output of gcloud beta container clusters describe. This is not used by the worker nodes, but is typically used by customers to manage their cluster remotely, e.g., using kubectl.
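Remote management typically looks like this (a sketch; the zone is an assumption, and get-credentials writes a kubeconfig entry that points kubectl at that endpoint):
gcloud container clusters get-credentials pr-clust-1 --zone us-west1-a
kubectl get nodes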
You can use the Master Authorized Networks feature to restrict which IP ranges (both internal and external) have access to the management endpoints. This feature is strongly recommended for private clusters, and is enabled by default when you create the cluster using the gcloud CLI.
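A sketch of enabling it on an existing cluster; 203.0.113.0/24 is just a placeholder for whatever range should be allowed to reach the master:
gcloud container clusters update pr-clust-1 \
    --zone us-west1-a \
    --enable-master-authorized-networks \
    --master-authorized-networks 203.0.113.0/24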
Hope this helps!