Both the kubectl logs and kubectl exec commands are throwing a TLS error:
$ kubectl logs <POD-NAME>
Error from server: Get "https://<NODE-PRIVATE-IP>:10250/containerLogs/<NAMESPACE>/<POD-NAME>/<DEPLOYMENT-NAME>": remote error: tls: internal error
$ kubectl exec -it <POD-NAME> -- sh
Error from server: error dialing backend: remote error: tls: internal error
Check whether the hostname type setting in your subnet and launch template configuration is set to resource name; if it is, switch to IP name instead. This appears to be caused by pattern matching in the AWS EKS control plane (as of v1.22), which will not issue a certificate for a node whose hostname doesn't match its requirements. You can test this quickly by adding another node group to your cluster with the nodes' hostnames set to IP name.
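If you prefer to inspect and change the subnet setting from the CLI, something like the following should work; the subnet ID is a placeholder, and the setting lives under the subnet's PrivateDnsNameOptionsOnLaunch attribute:

```shell
# Show the current hostname type for the subnet (placeholder subnet ID)
aws ec2 describe-subnets \
  --subnet-ids subnet-0123456789abcdef0 \
  --query 'Subnets[0].PrivateDnsNameOptionsOnLaunch.HostnameType'

# Switch new launches in that subnet from resource-name to ip-name
aws ec2 modify-subnet-attribute \
  --subnet-id subnet-0123456789abcdef0 \
  --private-dns-hostname-type-on-launch ip-name
```

Note that only nodes launched after the change pick up the new hostname type; existing nodes keep their old hostnames, so you will likely need to recycle or replace the node group for the fix to take effect.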
I had the same problem with my cluster, which I configured following the official AWS documentation (https://docs.aws.amazon.com/eks/latest/userguide/service_IAM_role.html), and it looks like the following section doesn't work:
"Condition": {
    "ArnLike": {
        "aws:SourceArn": "arn:aws:eks:region-code:your-account-id:cluster/cluster-name"
    }
}
Try removing this condition from the role's trust policy and then restarting the affected containers.
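As a sketch of that change (the role name and file path here are placeholders, not values from the original question), you can replace the cluster service role's trust policy with a version that omits the Condition block using the AWS CLI:

```shell
# Write the trust policy without the Condition block (placeholder file name)
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Replace the trust relationship on the cluster IAM role (placeholder role name)
aws iam update-assume-role-policy \
  --role-name eksClusterRole \
  --policy-document file://trust-policy.json
```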