I'm trying to create a GKE node pool with Terraform:
resource "google_container_node_pool" "node_pool" {
  provider   = google-beta
  name       = var.node_pool_name
  location   = var.region
  cluster    = var.cluster_name
  node_count = var.k8s_workers_count

  node_config {
    machine_type    = var.k8s_workers_shape
    image_type      = "COS"
    service_account = google_service_account.sa.email

    labels = {
      name = var.node_pool_name
    }

    metadata = {
      disable-legacy-endpoints = "true"
    }

    workload_metadata_config {
      node_metadata = "GKE_METADATA_SERVER"
    }

    oauth_scopes = [
      "https://www.googleapis.com/auth/cloud-platform",
      "https://www.googleapis.com/auth/userinfo.email",
    ]
  }

  management {
    auto_repair  = true
    auto_upgrade = true
  }
}
After about 3 minutes, the Terraform console returns this error:
Error: Error reading NodePool "pool1" from cluster "cluster-1": Nodepool "pool1" has status "PROVISIONING" with message ""
The gcloud CLI confirms that the status is indeed PROVISIONING:
config:
  diskSizeGb: 100
  diskType: pd-standard
  imageType: COS
  labels:
    name: pool1
  machineType: n1-standard-4
  metadata:
    disable-legacy-endpoints: 'true'
  oauthScopes:
  - https://www.googleapis.com/auth/cloud-platform
  - https://www.googleapis.com/auth/userinfo.email
  serviceAccount:
  shieldedInstanceConfig:
    enableIntegrityMonitoring: true
initialNodeCount: 2
instanceGroupUrls:
- https://www.googleapis.com/compute/v1/projects/
management:
  autoRepair: true
  autoUpgrade: true
maxPodsConstraint:
  maxPodsPerNode: '110'
name: pool1
podIpv4CidrSize: 24
selfLink: XXX
status: PROVISIONING
version: 1.13.11-gke.14
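Rather than re-running describe by hand, the status check can be scripted. A sketch that polls until the pool leaves PROVISIONING (the pool and cluster names come from the question; the zone is a placeholder, so substitute your own):

```shell
# Poll the node pool status every 15s until it is no longer PROVISIONING.
# Cluster/pool names and zone below are placeholders; adjust as needed.
while :; do
  status=$(gcloud container node-pools describe pool1 \
    --cluster cluster-1 \
    --zone us-central1-b \
    --format='value(status)')
  echo "node pool status: ${status}"
  [ "${status}" != "PROVISIONING" ] && break
  sleep 15
done
```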
However, console.cloud.google.com shows a green status, and the cluster lets me create deployments and so on. Any thoughts, Cloud Heroes?
UPDATE: 1:48 PM 12/7/2019 - I was able to run the TF script. Not sure if Google fixed it or I just got lucky.
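For anyone else hitting this: if Terraform errors out but the node pool was in fact created, the resource may be missing from state. One possible recovery is importing the existing pool (a sketch; the project, location, cluster, and pool values are placeholders):

```shell
# Bring an already-created node pool under Terraform management.
# Import ID format for google_container_node_pool:
#   {project}/{location}/{cluster}/{node_pool}
# All four values below are placeholders for your own.
terraform import google_container_node_pool.node_pool \
  my-project/us-central1/cluster-1/pool1
```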
If you disable then re-enable auto-provisioning on your cluster, existing node pools will not have auto-provisioning enabled. To re-enable auto-provisioning for these node pools, you need to mark each node pool as auto-provisioned individually. It will work automatically for new node pools.
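Marking an existing node pool as auto-provisioned can be done from the CLI, roughly like this (the names and zone are placeholders):

```shell
# Re-mark an existing node pool as auto-provisioned after toggling
# cluster-level auto-provisioning. Names and zone are placeholders.
gcloud container node-pools update pool1 \
  --cluster cluster-1 \
  --zone us-central1-b \
  --enable-autoprovisioning
```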
A node pool is a group of nodes within a cluster that all have the same configuration. Node pools use a NodeConfig specification. Each node in the pool has a Kubernetes node label, cloud.google.com/gke-nodepool, which has the node pool's name as its value.
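That label makes it easy to list only the nodes belonging to a given pool, e.g. (the pool name is a placeholder):

```shell
# List only the nodes that belong to node pool "pool1".
kubectl get nodes -l cloud.google.com/gke-nodepool=pool1
```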
There is some information about this issue on https://status.cloud.google.com/:
Newly created GKE node pools in asia-east1-a, asia-east2-c, asia-northeast1-a, asia-northeast2-c, asia-south1-a, asia-southeast1-a, australia-southeast1-a, europe-north1-c, europe-west1-c, europe-west2-a, europe-west3-a, europe-west4-a, europe-west6-c, northamerica-northeast1-c, southamerica-east1-a, us-central1-b, us-east1-a, us-east1-d, us-east2-a, us-east4-b, us-west1-a and us-west2-c are created successfully but incorrectly shown as PROVISIONING. A rollback underway will resolve this for new node pools.