
"kubectl logs" gives the error: You must be logged in to the server (the server has asked for the client to provide credentials)

Tags:

kubernetes

We set up Kubernetes 1.10.1 on CoreOS with three nodes. The setup was successful:

NAME                STATUS    ROLES     AGE       VERSION
node1.example.com   Ready     master    19h       v1.10.1+coreos.0
node2.example.com   Ready     node      19h       v1.10.1+coreos.0
node3.example.com   Ready     node      19h       v1.10.1+coreos.0

NAMESPACE     NAME                                        READY     STATUS    RESTARTS   AGE
default       pod-nginx2-689b9cdffb-qrpjn                 1/1       Running   0          16h
kube-system   calico-kube-controllers-568dfff588-zxqjj    1/1       Running   0          18h
kube-system   calico-node-2wwcg                           2/2       Running   0          18h
kube-system   calico-node-78nzn                           2/2       Running   0          18h
kube-system   calico-node-gbvkn                           2/2       Running   0          18h
kube-system   calico-policy-controller-6d568cc5f7-fx6bv   1/1       Running   0          18h
kube-system   kube-apiserver-x66dh                        1/1       Running   4          18h
kube-system   kube-controller-manager-787f887b67-q6gts    1/1       Running   0          18h
kube-system   kube-dns-79ccb5d8df-b9skr                   3/3       Running   0          18h
kube-system   kube-proxy-gb2wj                            1/1       Running   0          18h
kube-system   kube-proxy-qtxgv                            1/1       Running   0          18h
kube-system   kube-proxy-v7wnf                            1/1       Running   0          18h
kube-system   kube-scheduler-68d5b648c-54925              1/1       Running   0          18h
kube-system   pod-checkpointer-vpvg5                      1/1       Running   0          18h

But when I try to see the logs of any pod, kubectl gives the following error:

kubectl logs -f pod-nginx2-689b9cdffb-qrpjn
error: You must be logged in to the server (the server has asked for the client to provide credentials ( pods/log pod-nginx2-689b9cdffb-qrpjn))

Trying to get inside the pod with kubectl exec also gives an error:

kubectl exec -ti pod-nginx2-689b9cdffb-qrpjn bash
error: unable to upgrade connection: Unauthorized

Kubelet service file:

[Unit]
Description=Kubelet via Hyperkube ACI
[Service]
EnvironmentFile=/etc/kubernetes/kubelet.env
Environment="RKT_RUN_ARGS=--uuid-file-save=/var/run/kubelet-pod.uuid \
  --volume=resolv,kind=host,source=/etc/resolv.conf \
  --mount volume=resolv,target=/etc/resolv.conf \
  --volume var-lib-cni,kind=host,source=/var/lib/cni \
  --mount volume=var-lib-cni,target=/var/lib/cni \
  --volume var-log,kind=host,source=/var/log \
  --mount volume=var-log,target=/var/log"
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/checkpoint-secrets
ExecStartPre=/bin/mkdir -p /etc/kubernetes/inactive-manifests
ExecStartPre=/bin/mkdir -p /var/lib/cni
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/kubelet-pod.uuid
ExecStart=/usr/lib/coreos/kubelet-wrapper \
  --kubeconfig=/etc/kubernetes/kubeconfig \
  --config=/etc/kubernetes/config \
  --cni-conf-dir=/etc/kubernetes/cni/net.d \
  --network-plugin=cni \
  --allow-privileged \
  --lock-file=/var/run/lock/kubelet.lock \
  --exit-on-lock-contention \
  --hostname-override=node1.example.com \
  --node-labels=node-role.kubernetes.io/master \
  --register-with-taints=node-role.kubernetes.io/master=:NoSchedule
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet-pod.uuid
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target

KubeletConfiguration file:

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
staticPodPath: "/etc/kubernetes/manifests"
clusterDomain: "cluster.local"
clusterDNS: [ "10.3.0.10" ]
nodeStatusUpdateFrequency: "5s"
clientCAFile: "/etc/kubernetes/ca.crt"

We have also specified the --kubelet-client-certificate and --kubelet-client-key flags in the kube-apiserver.yaml file:

- --kubelet-client-certificate=/etc/kubernetes/secrets/apiserver.crt
- --kubelet-client-key=/etc/kubernetes/secrets/apiserver.key

So what are we missing here? Thanks in advance :)

Ronak Pandya asked Apr 25 '18


3 Answers

In my case the problem was that the context had somehow been changed. I checked it with

kubectl config current-context

and then switched back to the correct one with

kubectl config use-context docker-desktop

Timothy answered Sep 22 '22


This is a quite common and general error related to authentication problems against the API server.

I believe many people will find this title when searching, so I'll provide a few directions with examples for different types of cases.

1) (General)
Common to all types of deployments - check whether the credentials have expired.
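
As a rough sketch, for certificate-based kubeconfigs you can check this with openssl (assuming the certificate is embedded inline under "client-certificate-data", as in the question's setup):

```shell
# Print the expiry date of the client certificate embedded in a kubeconfig.
# Assumes the cert is stored inline as base64 under "client-certificate-data".
cert_expiry() {
  grep 'client-certificate-data' "$1" \
    | head -n1 \
    | awk '{print $2}' \
    | base64 -d \
    | openssl x509 -noout -enddate
}

# Usage: cert_expiry ~/.kube/config
# prints a line like: notAfter=Apr 25 07:00:00 2019 GMT
```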

2) (Pods and service accounts)
The authentication issue is related to a pod that uses a service account with problems such as an invalid token.
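
Service-account tokens are JWTs, so one way to inspect one is to decode its payload and look at the claims (namespace, service account name, or an "exp" field on bound tokens). A sketch, purely illustrative and without signature verification:

```shell
# Decode the payload (middle segment) of a JWT such as a service-account token.
# Does not verify the signature; for inspection only.
jwt_payload() {
  payload=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
  # restore the base64 padding that the JWT encoding strips
  while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done
  printf '%s' "$payload" | base64 -d
}

# Usage (secret name is a placeholder):
#   jwt_payload "$(kubectl get secret <sa-token-secret> -o jsonpath='{.data.token}' | base64 -d)"
```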

3) (IaC or deployment tools)
You are running with an infrastructure-as-code tool like Terraform and failed to pass the certificate correctly, as in this case.

4) (Cloud or other SaaS providers)
A few cases I encountered with AWS EKS:

4.A) If you're not the cluster creator, you might have no permissions to access the cluster.

When an EKS cluster is created, the user (or role) that creates the cluster is automatically granted system:masters permissions in the cluster's RBAC configuration. Other users or roles that need the ability to interact with your cluster must be added explicitly - read more here.
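
On EKS, the usual place to grant such access is the aws-auth ConfigMap in kube-system. A sketch of a mapRoles entry (the account ID, role name, and username below are placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/example-admin-role   # placeholder ARN
      username: example-admin
      groups:
        - system:masters
```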

4.B) If you're working with multiple clusters/environments/accounts via the CLI, the current profile may need to be re-authenticated, or there may be a mismatch between the cluster you're trying to access and the values of shell variables such as AWS_DEFAULT_PROFILE or AWS_DEFAULT_REGION.

4.C) New credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) were created and exported, but the terminal still holds old values from a previous session (AWS_SESSION_TOKEN) that need to be replaced or unset.
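
A sketch of clearing the stale values before exporting a fresh set (these are the standard AWS CLI variable names):

```shell
# Remove any leftover AWS credentials from the current shell session so that
# freshly exported values are not mixed with an old session token.
clear_aws_env() {
  unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
}

# Usage: clear_aws_env, then export the new credentials.
```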


RtmY answered Sep 19 '22


Looks like you misconfigured the kubelet:

You missed the --client-ca-file flag in your kubelet service file.

That's why you can get some general information from the master but can't get access to the nodes.

This flag tells the kubelet which certificate authority to use when verifying client certificates; without it, you cannot get access to the nodes.
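
As a sketch, the flag would go into the kubelet's ExecStart alongside the existing arguments, pointing at the CA file the unit already extracts (paths taken from the question's service file; remaining arguments elided):

```
ExecStart=/usr/lib/coreos/kubelet-wrapper \
  --kubeconfig=/etc/kubernetes/kubeconfig \
  --client-ca-file=/etc/kubernetes/ca.crt \
  ...
```

Note that this is the command-line form of the clientCAFile field in KubeletConfiguration; if both are used, the two should point at the same file.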

Nick Rak answered Sep 21 '22