I used kubeadm to initialize my Kubernetes master. However, I missed the --pod-network-cidr=10.244.0.0/16
flag that flannel requires. Is there a way (or a config file I can modify) to apply this subnet without re-initializing the cluster?
Under Docker UCP, the equivalent procedure is: delete the old IP pool, update the pod CIDR range in the UCP configuration, then delete the ucp-kubelet and the other Kubernetes control plane containers so that UCP recreates them. The new control plane containers will use the new pod CIDR range from the updated UCP configuration.
kubeadm reset will not delete any etcd data if external etcd is used. This means that if you run kubeadm init again using the same etcd endpoints, you will see state from previous clusters.
Kubernetes assigns each node a range of IP addresses, a CIDR block, so that each Pod can have a unique IP address. The size of the CIDR block corresponds to the maximum number of Pods per node.
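To make the relationship between the cluster pool and the per-node blocks concrete, here is a minimal sketch. It assumes the flannel/kubeadm defaults: a 10.244.0.0/16 cluster pool and the kube-controller-manager default node mask of /24.

```python
# Sketch: how the per-node CIDR size bounds the pod IPs per node.
# Assumes the flannel default pool (10.244.0.0/16) and the
# kube-controller-manager default --node-cidr-mask-size of 24.
import ipaddress

cluster_cidr = ipaddress.ip_network("10.244.0.0/16")
node_mask = 24

ips_per_node = 2 ** (32 - node_mask)                    # addresses in each /24
max_nodes = 2 ** (node_mask - cluster_cidr.prefixlen)   # /24 subnets in the /16

print(ips_per_node, max_nodes)  # 256 256
```

So a /16 pool split into /24 node blocks yields 256 addresses per node and room for 256 nodes; shrinking the node mask trades pods-per-node for node count.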
Override the spec.podCIDR field on every Kubernetes Node resource with the new IP range 10.244.0.0/16:
$ kubectl edit node <node-name>
Replace "Network" field under net-conf.json header in the relevant Flannel ConfigMap with a new network IP range:
$ kubectl edit cm kube-flannel-cfg -n kube-system
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }
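A typo in the net-conf.json value will break flannel on restart, so it can be worth sanity-checking the edited JSON locally before saving the ConfigMap. A small sketch (the literal below is the value being pasted, not read from the cluster):

```python
# Sanity-check the net-conf.json value before saving the ConfigMap edit.
# The string below is the intended value, pasted in by hand.
import ipaddress
import json

net_conf = '{ "Network": "10.244.0.0/16", "Backend": { "Type": "vxlan" } }'

conf = json.loads(net_conf)            # raises ValueError on malformed JSON
ipaddress.ip_network(conf["Network"])  # raises ValueError on a bad CIDR
assert conf["Backend"]["Type"] == "vxlan"
print("net-conf.json OK")
```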
Remove the existing CNI network interfaces that still hold addresses from the old network pool:
$ sudo ip link del cni0; sudo ip link del flannel.1
Re-create the Flannel and CoreDNS pods so they pick up the new configuration:
$ kubectl delete pod --selector=app=flannel -n kube-system
$ kubectl delete pod --selector=k8s-app=kube-dns -n kube-system
Wait until the CoreDNS pods obtain IP addresses from the new network pool. Keep in mind that your own Pods will still retain their old IP addresses unless you re-create them manually as well.
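To confirm the result, you can take the pod IPs reported by `kubectl get pods -n kube-system -o wide` and check that each one falls inside the new pool. A minimal sketch; the sample addresses below are illustrative, not read from a real cluster:

```python
# Sketch: verify that pod IPs (copied from `kubectl get pods -o wide`)
# fall inside the new flannel pool. The pod_ips list is a placeholder.
import ipaddress

new_pool = ipaddress.ip_network("10.244.0.0/16")
pod_ips = ["10.244.0.5", "10.244.1.7"]  # paste your actual pod IPs here

for ip in pod_ips:
    in_pool = ipaddress.ip_address(ip) in new_pool
    print(ip, "in new pool:", in_pool)
```

Any pod still printing False is one that was created before the change and needs to be re-created.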