I've created a Kubernetes cluster using CoreOS on AWS and I'm having trouble communicating with nodes from the master.
For example, operations like kubectl exec or kubectl logs fail with an error similar to the following:
Error from server: dial tcp: lookup ip-XXX-X-XXX-XXX.eu-west-1.compute.internal: no such host
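For what it's worth, the same lookup can be attempted directly on the master with a plain DNS query (using the placeholder hostname from the error above), which should show whether this is a cluster-level DNS problem rather than a kubectl one:
nslookup ip-XXX-X-XXX-XXX.eu-west-1.compute.internal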
I've found some issues on GitHub that describe the problem, so I know the team is aware of this bug, but I'd like to ask here whether there is a workaround I can use until it gets addressed.
One workaround mentioned was to use the --hostname-override flag, but as I'm on AWS, this flag is ignored (see #22984).
Related issues on GitHub: #22770, #22063.
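For reference, this is roughly how the override would be passed to the kubelet on a self-managed node (the node hostname below is hypothetical); per #22984, the AWS cloud provider ignores this value:
kubelet --hostname-override=ip-10-0-1-23.eu-west-1.compute.internal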
Have you made sure you're using the right context?
kubectl config use-context my-cluster-name
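If you're not sure which contexts are configured (the cluster name above is just an example), you can list them and check which one is currently active:
kubectl config get-contexts
kubectl config current-context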