I'm having trouble ensuring my pods re-connect to their PVs after an AWS EKS node group rolling upgrade. The issue is that the node itself moves from AZ us-west-2b to us-west-2c, but the PVs remain in us-west-2b.
The label on the node is topology.kubernetes.io/zone=us-west-2c while the label on the PV remains topology.kubernetes.io/zone=us-west-2b, so the volume node affinity check fails and the pods stay pending after the upgrade finishes, with this event:
0/1 nodes are available: 1 node(s) had volume node affinity conflict.
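This is how I verified the labels (the PV name below is a placeholder):

# Zone label on the node(s)
kubectl get nodes -L topology.kubernetes.io/zone

# Zone label and node affinity recorded on the PV
kubectl get pv <pv-name> -o jsonpath='{.metadata.labels.topology\.kubernetes\.io/zone}{"\n"}'
kubectl describe pv <pv-name> | grep -A 5 "Node Affinity"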
Per the AWS upgrade docs:
When upgrading the nodes in a managed node group, the upgraded nodes are launched in the same Availability Zone as those that are being upgraded.
But that doesn't seem to be the case. Is there a way I can always enforce the creation of nodes into the same AZ they were in prior to the upgrade?
Note: this is a 1-node AWS EKS Cluster (with a max set to 3), though I don't think that should matter.
Yes, this is possible: you need to pin the node group to a specific AZ when you create it. With eksctl you can do this from the CLI:
eksctl create cluster --name=cluster --zones=us-west-2b,us-west-2c --node-zones=us-west-2b
When using Terraform:
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "= 14.0.0"
cluster_version = "1.17"
cluster_name = "cluster-in-one-az"
subnets = ["subnet-a", "subnet-b", "subnet-c"]
worker_groups = [
{
instance_type = "m5.xlarge"
asg_max_size = 5
subnets = ["subnet-a"]
}
]
Here subnet-a is the subnet in the Availability Zone where your PV was created (us-west-2b in your case).
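If you're not sure which subnet ID sits in us-west-2b, you can look it up with the AWS CLI (the VPC ID below is a placeholder for your cluster's VPC):

# List the subnet IDs in us-west-2b for your cluster's VPC
aws ec2 describe-subnets \
  --filters Name=vpc-id,Values=<vpc-id> Name=availability-zone,Values=us-west-2b \
  --query 'Subnets[].SubnetId' --output text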