I've searched, but nothing I found has helped.
My setup:
k8s: v1.20.2
Calico: 3.16.6
pod-cidr = 10.214.0.0/16
service-cidr = 10.215.0.1/16
Installed with Kubespray, following https://kubernetes.io/ko/docs/setup/production-environment/tools/kubespray
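One quick sanity check (not from the original post) is that the pod and service CIDRs above must not overlap; a short Python sketch using the standard ipaddress module:

```python
import ipaddress

# Sanity check: the pod and service CIDRs from the setup above must not overlap.
# strict=False lets us pass a host address like 10.215.0.1/16 and get its network.
pod_cidr = ipaddress.ip_network("10.214.0.0/16")
service_cidr = ipaddress.ip_network("10.215.0.1/16", strict=False)

print(pod_cidr.overlaps(service_cidr))  # False: the ranges are disjoint
```

Here the two /16 ranges are disjoint, so an overlapping-CIDR misconfiguration can be ruled out.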
Pods restart again and again. Here are the relevant logs and describe outputs:
[dns-autoscaler pod logs]
github.com/kubernetes-incubator/cluster-proportional-autoscaler/pkg/autoscaler/k8sclient/k8sclient.go:96: Failed to list *v1.Node: Get https://10.215.0.1:443/api/v1/nodes: dial tcp 10.215.0.1:443: i/o timeout
[dns-autoscaler pod describe]
kubelet Readiness probe failed: Get "http://10.214.116.129:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
[coredns pod logs]
pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.215.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.215.0.1:443: i/o timeout
[coredns pod describe]
Get "http://10.214.122.1:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I also tried installing ingress-nginx-controller; here are its logs and describe output:
[ingress-controller logs]
W0106 04:17:16.715661 6 flags.go:243] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)
W0106 04:17:16.715911 6 client_config.go:541] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0106 04:17:16.716200 6 main.go:182] Creating API client for https://10.215.0.1:
[ingress-controller describe]
Liveness probe failed: Get "https://10.214.233.2:8443/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
All of these pods are failing their readiness/liveness probes with errors like: Get "http://10.214.116.155:10254/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers).
Calico is Running, and I checked pod-to-pod communication (OK).
[kubectl get componentstatuses]
controller-manager Unhealthy Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
scheduler Unhealthy Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
etcd-1 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
I followed "How to resolve scheduler and controller-manager unhealthy state in Kubernetes", and now the scheduler and controller-manager are healthy.
[kubectl get nodes]
Nodes are ready.
What did I do wrong? T.T
Thanks in advance.
I experienced this issue when deploying an app to Kubernetes.
Warning Unhealthy 10m (x3206 over 3d16h) kubelet Liveness probe failed: Get "http://10.2.0.97:80/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I did an exec into the pod:
kubectl exec -it <pod-name> --namespace default -- /bin/bash
And then I ran a curl request to the IP and port of the pod:
curl 10.2.0.97:80
It returned a successful response, yet the liveness probe still kept failing. The probe's timeoutSeconds defaults to 1 second, which was too short for the app to respond.
Here's how I solved it: all I had to do was increase timeoutSeconds to 10:
livenessProbe:
  httpGet:
    path: /
    port: http
  initialDelaySeconds: 300
  periodSeconds: 20
  timeoutSeconds: 10
After which the liveness probe started executing successfully
The same can be done for the readiness probe:
readinessProbe:
  httpGet:
    path: /
    port: http
  initialDelaySeconds: 30
  periodSeconds: 20
  timeoutSeconds: 10
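To see what these settings imply in practice, here is a rough back-of-the-envelope sketch (assuming the Kubernetes default failureThreshold of 3, which is not set explicitly above) of how long an unresponsive container can linger before the kubelet restarts it:

```python
# Rough illustration of the probe settings above, assuming the Kubernetes
# default failureThreshold of 3 (consecutive failures before a restart).
period_seconds = 20     # time between probe attempts
timeout_seconds = 10    # each failed attempt blocks for up to this long
failure_threshold = 3   # Kubernetes default

# Failed attempts start at t = 0, 20, 40; the third times out 10 s later.
worst_case = (failure_threshold - 1) * period_seconds + timeout_seconds
print(worst_case)  # 50 (seconds from the first failed attempt to restart)
```

So raising timeoutSeconds trades faster failure detection for tolerance of a slow-to-respond app, which is exactly the trade-off being made here.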
Reference: Sometime Liveness/Readiness Probes fail because of net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting head