I've created a GKE cluster with Terraform, and I want to manage Kubernetes with Terraform as well. However, I don't know how to pass the GKE cluster's credentials to the kubernetes provider.
I followed the example in the google_client_config data source documentation, but I got:
data.google_container_cluster.cluster.endpoint is null
Here is my failed attempt: https://github.com/varshard/gke-cluster-terraform/tree/title-terraform
cluster.tf is responsible for creating the GKE cluster, which works fine.
kubernetes.tf is responsible for managing Kubernetes, and this is the part that fails to get the GKE credentials.
You don't need the google_container_cluster data source here at all, because the relevant information is also available from the google_container_cluster resource that you are creating in the same context.
Data sources are for accessing data about a resource that is created either entirely outside of Terraform or in a different Terraform context (e.g. a different state file in a different directory that is terraform apply'd).
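For contrast, a data source lookup would only make sense if the cluster had been created somewhere else entirely. A minimal sketch of that case (the name and location values here are placeholders, not from your repo):

# Only appropriate when the cluster is created outside this configuration
data "google_container_cluster" "existing" {
  name     = "some-existing-cluster"
  location = "us-central1"
}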
I'm not sure how you ended up in your current state, where the data source selects an existing container cluster and you then define a resource that creates that same cluster using the outputs of the data source, but this is way overcomplicated and slightly broken - if you destroyed everything and reapplied, it wouldn't work as is.
Instead, you should remove the google_container_cluster data source and amend your google_container_cluster resource to be:
resource "google_container_cluster" "cluster" {
name = "${var.project}-cluster"
location = var.region
# ...
}
And then refer to this resource in your kubernetes provider:
provider "kubernetes" {
load_config_file = false
host = "https://${google_container_cluster.cluster.endpoint}"
cluster_ca_certificate = base64decode(google_container_cluster.cluster.master_auth.0.cluster_ca_certificate)
token = data.google_client_config.current.access_token
}
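Note that the token still comes from the google_client_config data source - that part of the documentation example is fine, because it reads the access token of the identity running Terraform rather than anything from the cluster. Just make sure the data source is actually declared somewhere in your configuration:

# Reads the credentials of whoever is running Terraform
data "google_client_config" "current" {}

One caveat: load_config_file was removed in version 2.0 of the kubernetes provider, so if you're on a 2.x provider, omit that line.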