I'm trying to launch a GKE cluster with a "custom" type network rather than an "auto" type network.
I use the following command to launch my cluster:
$ gcloud container clusters create --cluster-ipv4-cidr=10.0.0.0/14 --network=ttest --subnetwork=ttest --num-nodes=1 jt
I get the following error:
Creating cluster jt...done.
ERROR: (gcloud.container.clusters.create) Operation [<Operation
name: u'operation-1467037655793-e319dc5e'
operationType: OperationTypeValueValuesEnum(CREATE_CLUSTER, 1)
selfLink: u'https://container.googleapis.com/v1/projects/TRUNCATED/zones/us-east1-b/operations/operation-1467037655793-e319dc5e'
status: StatusValueValuesEnum(DONE, 3)
statusMessage: u'Requested CIDR 10.0.0.0/14 is not available in network "ttest".'
targetLink: u'https://container.googleapis.com/v1/projects/TRUNCATED/zones/us-east1-b/clusters/jt'
zone: u'us-east1-b'>] finished with error: Requested CIDR 10.0.0.0/14 is not available in network "ttest".
The error suggests it wants a network or subnetwork with a /14 address range, which my subnetwork already has, so the command should work, but it doesn't.
That is very odd, because here is what my networks look like:
The ttest network:
$ gcloud compute networks describe ttest
autoCreateSubnetworks: false
creationTimestamp: '2016-06-27T07:25:03.691-07:00'
id: '5404409453117999568'
kind: compute#network
name: ttest
selfLink: https://www.googleapis.com/compute/v1/projects/myproject/global/networks/ttest
subnetworks:
- https://www.googleapis.com/compute/v1/projects/myproject/regions/us-east1/subnetworks/ttest
x_gcloud_mode: custom
The ttest subnetwork:
$ gcloud compute networks subnets describe ttest
creationTimestamp: '2016-06-27T07:25:21.649-07:00'
gatewayAddress: 10.0.0.1
id: '6237639993374575038'
ipCidrRange: 10.0.0.0/14
kind: compute#subnetwork
name: ttest
network: https://www.googleapis.com/compute/v1/projects/myproject/global/networks/ttest
region: https://www.googleapis.com/compute/v1/projects/myproject/regions/us-east1
selfLink: https://www.googleapis.com/compute/v1/projects/myproject/regions/us-east1/subnetworks/ttest
I've also tried the same thing with a manually created legacy network using --range=10.0.0.0/8 and then creating a cluster in that network (roughly as sketched below), but that doesn't work either.
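For reference, the legacy attempt looked roughly like this; the network and cluster names are placeholders, and the exact flag for creating a legacy-mode network may differ depending on your gcloud version:
$ gcloud compute networks create ttest-legacy --mode=legacy --range=10.0.0.0/8
$ gcloud container clusters create --cluster-ipv4-cidr=10.0.0.0/14 --network=ttest-legacy --num-nodes=1 jt-legacy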
It would seem the /14 rule is hardcoded into the GKE configuration somewhere, but I don't really know what GKE wants from a custom network in order to launch the containers.
The GKE container launch command works with any network where the mode/type is "auto".
I pored over whatever documentation seemed relevant to me, but without much luck. The only thing that sticks out is the following snippet from this page:
The following restrictions exist when using subnetworks with other products:
- Google Managed VMs: Supported only on auto subnetwork networks. Cannot be deployed in custom subnet networks.
Does GKE use Managed VMs under the hood? Is that what's causing the problem?
GKE does support custom subnet networks. The problem you're having is that GKE requires the cluster-ipv4-cidr range to be disjoint from every subnetwork that VMs may have their IPs allocated from, because an overlap would make it ambiguous where packets should be routed on the internal network. cluster-ipv4-cidr determines which CIDR range is used for the containers in the cluster, while the subnetwork determines which IP addresses are used for all VMs created in that network.
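For example, since the ttest subnetwork already occupies 10.0.0.0/14 (10.0.0.0 through 10.3.255.255), a container range outside it should be accepted; the 10.4.0.0/14 below is only an illustration and assumes that range is otherwise unused in your project:
$ gcloud container clusters create --cluster-ipv4-cidr=10.4.0.0/14 --network=ttest --subnetwork=ttest --num-nodes=1 jt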
To fix the problem, simply stop specifying the --cluster-ipv4-cidr flag in your gcloud command. GKE will then pick a safe cluster-ipv4-cidr range for you.
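Concretely, that means running your original command with the flag dropped:
$ gcloud container clusters create --network=ttest --subnetwork=ttest --num-nodes=1 jt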