I've been trying to deploy a self-managed node EKS cluster for a while now, with no success. The error I'm currently stuck on concerns the EKS add-ons:
Error: error creating EKS Add-On (DevOpsLabs2b-dev-test--eks:kube-proxy): InvalidParameterException: Addon version specified is not supported, AddonName: "kube-proxy", ClusterName: "DevOpsLabs2b-dev-test--eks", Message_: "Addon version specified is not supported" }

  with module.eks-ssp-kubernetes-addons.module.aws_kube_proxy[0].aws_eks_addon.kube_proxy
  on .terraform/modules/eks-ssp-kubernetes-addons/modules/kubernetes-addons/aws-kube-proxy/main.tf line 19, in resource "aws_eks_addon" "kube_proxy":
This error repeats for coredns as well, but ebs_csi_driver throws:
Error: unexpected EKS Add-On (DevOpsLabs2b-dev-test--eks:aws-ebs-csi-driver) state returned during creation: timeout while waiting for state to become 'ACTIVE' (last state: 'DEGRADED', timeout: 20m0s)

[WARNING] Running terraform apply again will remove the kubernetes add-on and attempt to create it again effectively purging previous add-on configuration
My main.tf looks like this:
terraform {
  backend "remote" {}

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.66.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.7.1"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.4.1"
    }
  }
}

data "aws_eks_cluster" "cluster" {
  name = module.eks-ssp.eks_cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks-ssp.eks_cluster_id
}

provider "aws" {
  access_key = "xxx"
  secret_key = "xxx"
  region     = "xxx"

  assume_role {
    role_arn = "xxx"
  }
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    token                  = data.aws_eks_cluster_auth.cluster.token
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  }
}
My eks.tf looks like this:
module "eks-ssp" {
source = "github.com/aws-samples/aws-eks-accelerator-for-terraform"
# EKS CLUSTER
tenant = "DevOpsLabs2b"
environment = "dev-test"
zone = ""
terraform_version = "Terraform v1.1.4"
# EKS Cluster VPC and Subnet mandatory config
vpc_id = "xxx"
private_subnet_ids = ["xxx","xxx", "xxx", "xxx"]
# EKS CONTROL PLANE VARIABLES
create_eks = true
kubernetes_version = "1.19"
# EKS SELF MANAGED NODE GROUPS
self_managed_node_groups = {
self_mg = {
node_group_name = "DevOpsLabs2b"
subnet_ids = ["xxx","xxx", "xxx", "xxx"]
create_launch_template = true
launch_template_os = "bottlerocket" # amazonlinux2eks or bottlerocket or windows
custom_ami_id = "xxx"
public_ip = true # Enable only for public subnets
pre_userdata = <<-EOT
yum install -y amazon-ssm-agent \
systemctl enable amazon-ssm-agent && systemctl start amazon-ssm-agent \
EOT
disk_size = 10
instance_type = "t2.small"
desired_size = 2
max_size = 10
min_size = 0
capacity_type = "" # Optional Use this only for SPOT capacity as capacity_type = "spot"
k8s_labels = {
Environment = "dev-test"
Zone = ""
WorkerType = "SELF_MANAGED_ON_DEMAND"
}
additional_tags = {
ExtraTag = "t2x-on-demand"
Name = "t2x-on-demand"
subnet_type = "public"
}
create_worker_security_group = false # Creates a dedicated sec group for this Node Group
},
}
}
module "eks-ssp-kubernetes-addons" {
source = "github.com/aws-samples/aws-eks-accelerator-for-terraform//modules/kubernetes-addons"
eks_cluster_id = module.eks-ssp.eks_cluster_id
# EKS Addons
enable_amazon_eks_vpc_cni = true
enable_amazon_eks_coredns = true
enable_amazon_eks_kube_proxy = true
enable_amazon_eks_aws_ebs_csi_driver = true
#K8s Add-ons
enable_aws_load_balancer_controller = true
enable_metrics_server = true
enable_cluster_autoscaler = true
enable_aws_for_fluentbit = true
enable_argocd = true
enable_ingress_nginx = true
depends_on = [module.eks-ssp.self_managed_node_groups]
}
What exactly am I missing?
K8s is hard to get right sometimes. The examples on GitHub are written for Kubernetes version 1.21 [1]. Because of that, if you leave only this:
enable_amazon_eks_vpc_cni            = true
enable_amazon_eks_coredns            = true
enable_amazon_eks_kube_proxy         = true
enable_amazon_eks_aws_ebs_csi_driver = true

# K8s Add-ons
enable_aws_load_balancer_controller = true
enable_metrics_server               = true
enable_cluster_autoscaler           = true
enable_aws_for_fluentbit            = true
enable_argocd                       = true
enable_ingress_nginx                = true
the images pulled by default will be the ones for K8s version 1.21, as shown in [2]. If you really need to stay on K8s version 1.19, you will have to pin the add-on versions that correspond to that release. Here's an example of how you can configure the versions you need [3]:
amazon_eks_coredns_config = {
  addon_name               = "coredns"
  addon_version            = "v1.8.4-eksbuild.1"
  service_account          = "coredns"
  resolve_conflicts        = "OVERWRITE"
  namespace                = "kube-system"
  service_account_role_arn = ""
  additional_iam_policies  = []
  tags                     = {}
}
However, the CoreDNS version shown here (addon_version = "v1.8.4-eksbuild.1") is meant for K8s 1.21. To check the version you need for 1.19, see [4]. TL;DR: the CoreDNS version to specify is 1.8.0. So, to make the add-on work on 1.19, the block for CoreDNS (and, analogously, for the other add-ons whose image version depends on the cluster version) would look like this:
enable_amazon_eks_coredns = true

# followed by
amazon_eks_coredns_config = {
  addon_name               = "coredns"
  addon_version            = "v1.8.0-eksbuild.1"
  service_account          = "coredns"
  resolve_conflicts        = "OVERWRITE"
  namespace                = "kube-system"
  service_account_role_arn = ""
  additional_iam_policies  = []
  tags                     = {}
}
For the other EKS add-ons, you can find more information here [5]. The links in the Name column lead straight to the AWS EKS documentation, which lists the add-on image versions supported for each EKS version currently supported by AWS (1.17 - 1.21).
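For example, a similar override for kube-proxy on a 1.19 cluster would look roughly like the sketch below, assuming the module exposes an amazon_eks_kube_proxy_config map analogous to the CoreDNS one; the addon_version string is an assumption on my part, so confirm the exact value against the kube-proxy table in [5] before applying:

enable_amazon_eks_kube_proxy = true

# followed by
amazon_eks_kube_proxy_config = {
  addon_name               = "kube-proxy"
  addon_version            = "v1.19.6-eksbuild.2" # assumed value for K8s 1.19 -- verify against the AWS docs
  service_account          = "kube-proxy"
  resolve_conflicts        = "OVERWRITE"
  namespace                = "kube-system"
  service_account_role_arn = ""
  additional_iam_policies  = []
  tags                     = {}
}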
Last but not least, a piece of friendly advice: never configure the AWS provider by hard-coding the access key and secret access key in the provider block. Use named profiles [6] or just use the default one. Instead of the block you have currently:
provider "aws" {
access_key = "xxx"
secret_key = "xxx"
region = "xxx"
assume_role {
role_arn = "xxx"
}
}
Switch to:
provider "aws" {
region = "yourdefaultregion"
profile = "yourprofilename"
}
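If you still depend on the assumed role from your original block, the AWS provider lets you combine a named profile with assume_role, so a sketch like this (region, profile, and role ARN are placeholders) keeps that behaviour without hard-coded keys:

provider "aws" {
  region  = "yourdefaultregion"
  profile = "yourprofilename"

  # Keep the role assumption from the original configuration, minus the static keys
  assume_role {
    role_arn = "yourrolearn"
  }
}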
[1] https://github.com/aws-samples/aws-eks-accelerator-for-terraform/blob/main/examples/eks-cluster-with-eks-addons/main.tf#L62
[2] https://github.com/aws-samples/aws-eks-accelerator-for-terraform/blob/main/modules/kubernetes-addons/aws-kube-proxy/local.tf#L5
[3] https://github.com/aws-samples/aws-eks-accelerator-for-terraform/blob/main/examples/eks-cluster-with-eks-addons/main.tf#L148-L157
[4] https://docs.aws.amazon.com/eks/latest/userguide/managing-coredns.html
[5] https://github.com/aws-samples/aws-eks-accelerator-for-terraform/blob/main/docs/add-ons/managed-add-ons.md
[6] https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html