 

Kube-Flannel can't get CIDR although PodCIDR available on node

Currently I am setting up Kubernetes in a 1 Master, 2 Node environment.

I successfully initialized the Master and added the nodes to the cluster:

kubectl get nodes

When I joined the nodes to the cluster, the kube-proxy pod started successfully, but the kube-flannel pod gets an error and runs into a CrashLoopBackOff.

flannel-pod.log:

I0613 09:03:36.820387       1 main.go:475] Determining IP address of default interface
I0613 09:03:36.821180       1 main.go:488] Using interface with name ens160 and address 172.17.11.2
I0613 09:03:36.821233       1 main.go:505] Defaulting external address to interface address (172.17.11.2)
I0613 09:03:37.015163       1 kube.go:131] Waiting 10m0s for node controller to sync
I0613 09:03:37.015436       1 kube.go:294] Starting kube subnet manager
I0613 09:03:38.015675       1 kube.go:138] Node controller sync successful
I0613 09:03:38.015767       1 main.go:235] Created subnet manager: Kubernetes Subnet Manager - caasfaasslave1.XXXXXX.local
I0613 09:03:38.015828       1 main.go:238] Installing signal handlers
I0613 09:03:38.016109       1 main.go:353] Found network config - Backend type: vxlan
I0613 09:03:38.016281       1 vxlan.go:120] VXLAN config: VNI=1 Port=0 GBP=false DirectRouting=false
E0613 09:03:38.016872       1 main.go:280] Error registering network: failed to acquire lease: node "caasfaasslave1.XXXXXX.local" pod cidr not assigned
I0613 09:03:38.016966       1 main.go:333] Stopping shutdownHandler...
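
For reference, the log above was pulled from the crash-looping pod on the slave node, roughly like this (the pod name comes from the pod overview further below; -c kube-flannel assumes the container name used by the stock flannel manifest):

kubectl -n kube-system logs kube-flannel-ds-4b6kf -c kube-flannel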

On the Node, I can verify that the PodCIDR is available:

kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'
172.17.12.0/24
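
The same check can also be printed per node, with the node name next to its assigned range, which makes it easy to see which nodes actually received a podCIDR (a sketch; output depends on the cluster):

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'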

On the Master's kube-controller-manager, the pod CIDR is also configured:

[root@caasfaasmaster manifests]# cat kube-controller-manager.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --leader-elect=true
    - --controllers=*,bootstrapsigner,tokencleaner
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --address=127.0.0.1
    - --use-service-account-credentials=true
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --allocate-node-cidrs=true
    - --cluster-cidr=172.17.12.0/24
    - --node-cidr-mask-size=24
    env:
    - name: http_proxy
      value: http://ntlmproxy.XXXXXX.local:3154
    - name: https_proxy
      value: http://ntlmproxy.XXXXXX.local:3154
    - name: no_proxy
      value: .XXXXX.local,172.17.11.0/24,172.17.12.0/24
    image: k8s.gcr.io/kube-controller-manager-amd64:v1.10.4
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10252
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-controller-manager
    resources:
      requests:
        cpu: 200m
    volumeMounts:
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/kubernetes/controller-manager.conf
      name: kubeconfig
      readOnly: true
    - mountPath: /etc/pki
      name: ca-certs-etc-pki
      readOnly: true
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: ca-certs-etc-pki
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/kubernetes/controller-manager.conf
      type: FileOrCreate
    name: kubeconfig
status: {}

(XXXXX is used for anonymization.)
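
If the controller-manager fails to allocate a range for a node, that usually also shows up as an event on the node itself (the exact event reason varies by version; CIDRNotAvailable is common), which can be checked with something like:

kubectl describe node caasfaasslave1.XXXXXX.local | grep -i -A 2 cidr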

I initialized the Master with the following kubeadm command (which also went through without any errors):

kubeadm init --pod-network-cidr=172.17.12.0/24 --service-cidr=172.17.11.129/25 --service-dns-domain=dcs.XXXXX.local
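
For completeness, flannel's own network config can be compared against the pod network CIDR above (assuming the stock kube-flannel.yml manifest, which stores its net-conf.json in a kube-flannel-cfg ConfigMap):

kubectl -n kube-system get configmap kube-flannel-cfg -o yaml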

Does anyone know what could cause my issues and how to fix them?
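
For reference, here is the current state of the kube-system pods, gathered with something like:

kubectl get pods --all-namespaces -o wide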

NAMESPACE     NAME                                                  READY     STATUS             RESTARTS   AGE       IP            NODE
kube-system   etcd-caasfaasmaster.XXXXXX.local                      1/1       Running            0          16h       172.17.11.1   caasfaasmaster.XXXXXX.local
kube-system   kube-apiserver-caasfaasmaster.XXXXXX.local            1/1       Running            1          16h       172.17.11.1   caasfaasmaster.XXXXXX.local
kube-system   kube-controller-manager-caasfaasmaster.XXXXXX.local   1/1       Running            0          16h       172.17.11.1   caasfaasmaster.XXXXXX.local
kube-system   kube-dns-75c5968bf9-qfh96                             3/3       Running            0          16h       172.17.12.2   caasfaasmaster.XXXXXX.local
kube-system   kube-flannel-ds-4b6kf                                 0/1       CrashLoopBackOff   205        16h       172.17.11.2   caasfaasslave1.XXXXXX.local
kube-system   kube-flannel-ds-j2fz6                                 0/1       CrashLoopBackOff   191        16h       172.17.11.3   caasfassslave2.XXXXXX.local
kube-system   kube-flannel-ds-qjd89                                 1/1       Running            0          16h       172.17.11.1   caasfaasmaster.XXXXXX.local
kube-system   kube-proxy-h4z54                                      1/1       Running            0          16h       172.17.11.3   caasfassslave2.XXXXXX.local
kube-system   kube-proxy-sjwl2                                      1/1       Running            0          16h       172.17.11.2   caasfaasslave1.XXXXXX.local
kube-system   kube-proxy-zc5xh                                      1/1       Running            0          16h       172.17.11.1   caasfaasmaster.XXXXXX.local
kube-system   kube-scheduler-caasfaasmaster.XXXXXX.local            1/1       Running            0          16h       172.17.11.1   caasfaasmaster.XXXXXX.local
asked Jun 13 '18 by rootdoot



1 Answer

Failed to acquire lease simply means the pod didn't get the podCIDR. This happened to me as well: although the manifest on the master node showed the pod CIDR settings, it still wasn't working and flannel kept going into CrashLoopBackOff. This is what I did to fix it.

From the master node, first find out your flannel CIDR (the cluster CIDR):

sudo cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep -i cluster-cidr

Output:

- --cluster-cidr=172.168.10.0/24

Then run the following from the master node:

kubectl patch node slave-node-1 -p '{"spec":{"podCIDR":"172.168.10.0/24"}}'

where slave-node-1 is the node on which acquiring the lease is failing, and the podCIDR is the CIDR you found with the previous command.
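
Once the node has a podCIDR, the crash-looping flannel pod on that node can be deleted so the DaemonSet recreates it and it retries the lease immediately instead of waiting out the back-off; the pod name comes from the pod overview (in the question above it is kube-flannel-ds-4b6kf):

kubectl -n kube-system delete pod kube-flannel-ds-4b6kf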

Hope this helps.
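
Longer term, note that --cluster-cidr=172.17.12.0/24 together with --node-cidr-mask-size=24 gives the controller-manager exactly one /24 to hand out, so only a single node can ever be assigned a podCIDR; the others will keep failing with "pod cidr not assigned". Giving the cluster a larger pod network avoids this, for example (10.244.0.0/16 is just flannel's default range, substitute whatever fits your environment and keep the remaining flags as before):

kubeadm init --pod-network-cidr=10.244.0.0/16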

answered Sep 21 '22 by PanDe