I am installing Kubernetes on Oracle VirtualBox on my laptop using kubeadm. Everything worked fine until I ran the following command on the Kubernetes worker node to join it to the master node:
sudo kubeadm join 192.168.56.100:6443 --token 0i2osm.vsp2mk63v1ypeyjf --discovery-token-ca-cert-hash sha256:18511321fcc4b622628dd1ad2f56dbdd319bf024740d58127818720828cc7bf0
Error
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR DirAvailable--etc-kubernetes-manifests]: /etc/kubernetes/manifests is not empty
[ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[ERROR Port-10250]: Port 10250 is in use
[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
I tried deleting the files manually and ran the command again, but that didn't resolve the port issue. Whenever I stop the kubelet (the service listening on port 10250) and run the join command, it complains that the kubelet needs to be started; and when I start the kubelet again, I'm back to "Port 10250 is in use".
It's a kind of chicken-and-egg problem.
Any views on how I can resolve it?
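For reference, here is how I have been checking what is holding the port (assuming ss and systemd are available on the node; the exact tooling may differ on your distro):
sudo ss -tlnp | grep 10250     # shows which process is listening on port 10250
systemctl status kubelet       # shows whether the kubelet service is currently running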
You should first try:
sudo kubeadm reset
You get these errors because the node already has Kubernetes state on it from a previous setup; kubeadm reset cleans that up.
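A minimal sketch of the full sequence on the worker node (the join command is the one from your question; your token and hash values will differ):
sudo kubeadm reset
sudo kubeadm join 192.168.56.100:6443 --token 0i2osm.vsp2mk63v1ypeyjf --discovery-token-ca-cert-hash sha256:18511321fcc4b622628dd1ad2f56dbdd319bf024740d58127818720828cc7bf0
If the token has expired in the meantime, you can generate a fresh join command on the master with kubeadm token create --print-join-command.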
Regarding kubeadm reset:
1 ) As described here:
The "reset" command executes the following phases:
preflight Run reset pre-flight checks
update-cluster-status Remove this node from the ClusterStatus object.
remove-etcd-member Remove a local etcd member.
cleanup-node Run cleanup node.
So I recommend running the preflight phase first (by using the --skip-phases flag to skip the other phases) before executing all the phases together.
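A sketch of what that could look like, assuming a kubeadm version whose reset command supports these phases (phase names taken from the list above):
sudo kubeadm reset --skip-phases=update-cluster-status,remove-etcd-member,cleanup-node    # runs only the preflight checks
sudo kubeadm reset                                                                        # then run all phases together
On recent versions, sudo kubeadm reset phase preflight achieves the same first step.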
2 ) When you execute the cleanup-node phase, you can see that the following steps are being logged (a command sketch follows the log below):
.
.
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [
/etc/kubernetes/manifests
/etc/kubernetes/pki
]
[reset] Deleting files: [
/etc/kubernetes/admin.conf
/etc/kubernetes/kubelet.conf
/etc/kubernetes/bootstrap-kubelet.conf
/etc/kubernetes/controller-manager.conf
/etc/kubernetes/scheduler.conf
]
.
.
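If you want to run just this phase on its own, a sketch of the command (assuming the phased reset described in point 1) is:
sudo kubeadm reset phase cleanup-node    # stops the kubelet and deletes the directories/files listed above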
Let's go over the [reset] entries and see how they solve the 4 errors you mentioned:
A ) The first [reset] entry will fix the Port 10250 is in use issue (the kubelet was listening on this port).
B ) The third [reset] entry will fix the /etc/kubernetes/manifests is not empty error, and the fourth entry will fix /etc/kubernetes/kubelet.conf already exists.
C ) And we're left with the /etc/kubernetes/pki/ca.crt already exists error.
I thought that the third [reset] entry, which removes the contents of /etc/kubernetes/pki, should take care of that.
But in my case, when I ran kubeadm join with verbosity level 5 (by appending the --v=5 flag), I encountered the error below:
I0929 ... checks.go:432] validating if ...
[preflight] Some fatal errors occurred:
[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
So I had to remove the /etc/kubernetes/pki folder manually, and after that the kubeadm join succeeded.
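Put together, the workaround on the worker node looked roughly like this (join parameters taken from the question; adjust to your own token and hash):
sudo kubeadm reset
sudo rm -rf /etc/kubernetes/pki          # manual cleanup of the leftover CA certificate
sudo kubeadm join 192.168.56.100:6443 --token 0i2osm.vsp2mk63v1ypeyjf --discovery-token-ca-cert-hash sha256:18511321fcc4b622628dd1ad2f56dbdd319bf024740d58127818720828cc7bf0 --v=5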