 

CoreDNS pods stuck in ContainerCreating - Kubernetes

I am still new to Kubernetes and I was trying to set up a cluster on bare metal servers according to the official documentation.

Right now I am running a one-master, one-worker configuration, but I am struggling to get all the pods running once the cluster initializes. The main problem is the coredns pods, which are stuck in the ContainerCreating state.

NAMESPACE     NAME                                     READY   STATUS              RESTARTS   AGE
kube-system   coredns-78fcd69978-4vtsp                 0/1     ContainerCreating   0          5s
kube-system   coredns-78fcd69978-wtn2c                 0/1     ContainerCreating   0          12h
kube-system   etcd-dcpoth24213118                      1/1     Running             4          12h
kube-system   kube-apiserver-dcpoth24213118            1/1     Running             0          12h
kube-system   kube-controller-manager-dcpoth24213118   1/1     Running             0          12h
kube-system   kube-proxy-8282p                         1/1     Running             0          12h
kube-system   kube-scheduler-dcpoth24213118            1/1     Running             0          12h
kube-system   weave-net-6zz2j                          2/2     Running             0          12h

After checking the pod events I've noticed this error. The problem is I don't really know what the error is referring to.
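
For reference, the events below come from describing the stuck pod:

kubectl describe pod coredns-78fcd69978-4vtsp -n kube-system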

Events:
  Type     Reason                  Age                From               Message
  ----     ------                  ----               ----               -------
  Normal   Scheduled               19s                default-scheduler  Successfully assigned kube-system/coredns-78fcd69978-4vtsp to dcpoth24213118
  Warning  FailedCreatePodSandBox  13s                kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "2521c9dd723f3fc50b3510791a8c35cbc9ec19768468eb3da3367274a4dfcbba" network for pod "coredns-78fcd69978-4vtsp": networkPlugin cni failed to set up pod "coredns-78fcd69978-4vtsp_kube-system" network: error getting ClusterInformation: Get "https://[10.43.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default": dial tcp 10.43.0.1:443: connect: no route to host, failed to clean up sandbox container "2521c9dd723f3fc50b3510791a8c35cbc9ec19768468eb3da3367274a4dfcbba" network for pod "coredns-78fcd69978-4vtsp": networkPlugin cni failed to teardown pod "coredns-78fcd69978-4vtsp_kube-system" network: error getting ClusterInformation: Get "https://[10.43.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default": dial tcp 10.43.0.1:443: connect: no route to host]
  Normal   SandboxChanged          10s (x2 over 12s)  kubelet            Pod sandbox changed, it will be killed and re-created.

I'm running the Kubernetes cluster behind a corporate proxy. I've set the environment variables as follows.

export https_proxy=http://proxyIP:PORT
export http_proxy=http://proxyIP:PORT
export HTTP_PROXY="${http_proxy}"
export HTTPS_PROXY="${https_proxy}"
export NO_PROXY=localhost,127.0.0.1,master_node_IP,worker_node_IP,10.0.0.0/8,10.96.0.0/16
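
Note that these exports only cover my shell session. If the container runtime needs them as well, a systemd drop-in along these lines would be the usual way to pass them through (a sketch assuming containerd; adjust the unit name for your runtime):

# /etc/systemd/system/containerd.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxyIP:PORT"
Environment="HTTPS_PROXY=http://proxyIP:PORT"
Environment="NO_PROXY=localhost,127.0.0.1,master_node_IP,worker_node_IP,10.0.0.0/8,10.96.0.0/16"

# reload and restart after editing the drop-in
systemctl daemon-reload
systemctl restart containerd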

[root@dcpoth24213118 ~]# kubectl get svc -A
NAMESPACE     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  12h
kube-system   kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   12h


[root@dcpoth24213118 ~]# ip r s
default via 6.48.248.129 dev eth1
6.48.248.128/26 dev eth1 proto kernel scope link src 6.48.248.145
10.32.0.0/12 dev weave proto kernel scope link src 10.32.0.1
10.155.0.0/24 via 6.48.248.129 dev eth1
10.228.0.0/24 via 6.48.248.129 dev eth1
10.229.0.0/24 via 6.48.248.129 dev eth1
10.250.0.0/24 via 6.48.248.129 dev eth1

I've got the weave network plugin installed, yet the sandbox error above references crd.projectcalico.org and tries to reach 10.43.0.1, which isn't this cluster's service IP (the kubernetes service is at 10.96.0.1) and has no route on the node. The issue is that I cannot create any other pods either; they all get stuck in the ContainerCreating state.
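
If that points to stale CNI configuration from an earlier Calico install (just a guess on my part), it should show up in the node's CNI config directory:

ls /etc/cni/net.d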

I've run out of ideas on how to fix it. Can someone give me a hint?

Martin Čičo asked Jan 28 '26


1 Answer

A little late to the party, but here's what fixed this problem for me in 2023. In my case it was flannel's fault: the manifest in the old coreos/flannel repository is stale, since the project moved to flannel-io/flannel.

BAD: (many tutorials use this link)

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

GOOD:

kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
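
If the old manifest was already applied, it may be worth removing it first so the two releases don't fight over the node's CNI config (a guess at the cleanup step; skip it if the BAD manifest was never applied):

kubectl delete -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml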

Before (coredns stuck in ContainerCreating):

k get all --all-namespaces
NAMESPACE      NAME                                           READY   STATUS              RESTARTS      AGE
kube-flannel   pod/kube-flannel-ds-6krkg                      1/1     Running             0             11s
kube-flannel   pod/kube-flannel-ds-pgkzz                      1/1     Running             0             11s
kube-flannel   pod/kube-flannel-ds-sr9pt                      1/1     Running             0             11s
kube-system    pod/coredns-5dd5756b68-vjx9g                   0/1     ContainerCreating   0             86m
kube-system    pod/coredns-5dd5756b68-vvg2m                   0/1     ContainerCreating   0             86m
kube-system    pod/etcd-k8s-controlplane                      1/1     Running             1 (66m ago)   87m
kube-system    pod/kube-apiserver-k8s-controlplane            1/1     Running             1 (66m ago)   87m
kube-system    pod/kube-controller-manager-k8s-controlplane   1/1     Running             1 (66m ago)   86m
kube-system    pod/kube-proxy-dbmjq                           1/1     Running             0             85m
kube-system    pod/kube-proxy-lvtmt                           1/1     Running             0             85m
kube-system    pod/kube-proxy-n9q89                           1/1     Running             1 (66m ago)   86m
kube-system    pod/kube-scheduler-k8s-controlplane            1/1     Running             1 (66m ago)   87m

After applying the new flannel manifest:

k get all --all-namespaces
NAMESPACE     NAME                                           READY   STATUS              RESTARTS   AGE
kube-system   pod/coredns-5dd5756b68-5b7hb                   1/1     Running             0          10m
kube-system   pod/coredns-5dd5756b68-czzrk                   1/1     Running             0          10m
kube-system   pod/etcd-k8s-controlplane                      1/1     Running             2          10m
kube-system   pod/kube-apiserver-k8s-controlplane            1/1     Running             2          10m
kube-system   pod/kube-controller-manager-k8s-controlplane   1/1     Running             2          10m
kube-system   pod/kube-proxy-5xcvh                           1/1     Running             0          9m22s
kube-system   pod/kube-proxy-ftbhc                           1/1     Running             0          10m
kube-system   pod/kube-proxy-lkdb6                           1/1     Running             0          9m28s
kube-system   pod/kube-scheduler-k8s-controlplane            1/1     Running             2          10m
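
If coredns stays stuck even after the new flannel is running, deleting the stuck pods so the Deployment recreates them with the working CNI should help. On kubeadm clusters coredns is labelled k8s-app=kube-dns, but verify the label on your own cluster first:

kubectl -n kube-system delete pod -l k8s-app=kube-dns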
Anonymous User answered Jan 31 '26