My certificates had expired:
root@ubuntu:~# kubectl get pods
Unable to connect to the server: x509: certificate has expired or is not yet valid
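For reference, the expiry can also be confirmed directly with openssl, assuming the API server listens locally on the default port 6443 (adjust the address if yours differs):

echo | openssl s_client -connect 127.0.0.1:6443 2>/dev/null | openssl x509 -noout -enddate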
I verified it by running:
root@ubuntu:~# kubeadm alpha certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[check-expiration] Error reading configuration from the Cluster. Falling back to default configuration
W0330 09:18:49.875780 12562 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Mar 29, 2021 09:27 UTC   <invalid>                               no
apiserver                  Mar 29, 2021 09:27 UTC   <invalid>       ca                      no
apiserver-etcd-client      Mar 29, 2021 09:27 UTC   <invalid>       etcd-ca                 no
apiserver-kubelet-client   Mar 29, 2021 09:27 UTC   <invalid>       ca                      no
controller-manager.conf    Mar 29, 2021 09:27 UTC   <invalid>                               no
etcd-healthcheck-client    Mar 29, 2021 09:27 UTC   <invalid>       etcd-ca                 no
etcd-peer                  Mar 29, 2021 09:27 UTC   <invalid>       etcd-ca                 no
etcd-server                Mar 29, 2021 09:27 UTC   <invalid>       etcd-ca                 no
front-proxy-client         Mar 29, 2021 09:27 UTC   <invalid>       front-proxy-ca          no
scheduler.conf             Mar 29, 2021 09:27 UTC   <invalid>                               no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Mar 27, 2030 09:27 UTC   8y              no
etcd-ca                 Mar 27, 2030 09:27 UTC   8y              no
front-proxy-ca          Mar 27, 2030 09:27 UTC   8y              no
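The same dates can also be read straight from the certificate files on disk, for example for the API server certificate in the default kubeadm location /etc/kubernetes/pki:

openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -enddate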
I renewed the certificates by running kubeadm alpha certs renew all:
W0330 09:20:21.951839 13124 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed
All the certificates are now updated to expire in 2022, so it should be okay:
CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Mar 30, 2022 09:20 UTC   364d                                    no
apiserver                  Mar 30, 2022 09:20 UTC   364d            ca                      no
apiserver-etcd-client      Mar 30, 2022 09:20 UTC   364d            etcd-ca                 no
apiserver-kubelet-client   Mar 30, 2022 09:20 UTC   364d            ca                      no
controller-manager.conf    Mar 30, 2022 09:20 UTC   364d                                    no
etcd-healthcheck-client    Mar 30, 2022 09:20 UTC   364d            etcd-ca                 no
etcd-peer                  Mar 30, 2022 09:20 UTC   364d            etcd-ca                 no
etcd-server                Mar 30, 2022 09:20 UTC   364d            etcd-ca                 no
front-proxy-client         Mar 30, 2022 09:20 UTC   364d            front-proxy-ca          no
scheduler.conf             Mar 30, 2022 09:20 UTC   364d                                    no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Mar 27, 2030 09:27 UTC   8y              no
etcd-ca                 Mar 27, 2030 09:27 UTC   8y              no
front-proxy-ca          Mar 27, 2030 09:27 UTC   8y              no
But when I ran kubectl get pods again, I received the error:
error: You must be logged in to the server (Unauthorized)
I think this is still a certificate problem, but I am not sure how to fix it. Should I create a new certificate and replace the one inside the config file?
You can renew your certificates manually at any time with the kubeadm certs renew command. This command performs the renewal using the CA (or front-proxy-CA) certificate and key stored in /etc/kubernetes/pki. After running the command you should restart the control-plane Pods.
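A rough way to do that restart on a kubeadm node, assuming the static Pod manifests live in the default /etc/kubernetes/manifests directory (the 30-second pause is an arbitrary choice, just long enough for the kubelet to notice and stop the Pods), is:

mkdir -p /etc/kubernetes/manifests.bak
mv /etc/kubernetes/manifests/*.yaml /etc/kubernetes/manifests.bak/
sleep 30   # wait for the kubelet to stop the control-plane Pods
mv /etc/kubernetes/manifests.bak/*.yaml /etc/kubernetes/manifests/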
Separately, you can create or modify contexts in your kubeconfig file with the command kubectl config set-context. It accepts the name of the context to change (or --current to change the current context), as well as the --user, --cluster, and --namespace options.
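For example, assuming the default kubeadm names kubernetes-admin (user) and kubernetes (cluster), you could point the current context at them like this:

kubectl config set-context --current --user=kubernetes-admin --cluster=kubernetes --namespace=default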
In my case, the ~/.kube/config wasn't updated with the changes.
I ran:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
and it fixed it.
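If you want to double-check that the copied kubeconfig now carries the renewed client certificate, you can decode the embedded certificate and print its end date (this assumes the certificate is embedded as client-certificate-data rather than referenced as a file path):

grep client-certificate-data $HOME/.kube/config | awk '{print $2}' | base64 -d | openssl x509 -noout -enddate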