
kubectl logs -f gets "Authorization error"

I recently created a cluster on EKS with eksctl. Running kubectl logs -f mypod-0 fails with an authorization error:

Error from server (InternalError): Internal error occurred: Authorization error (user=kube-apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)

Any advice and insight is appreciated.

Kok How Teh asked Feb 07 '19

People also ask

How do I check Kubernetes error log?

You can retrieve pod logs with kubectl logs. Adding the -p (--previous) flag tells kubectl to fetch the logs of the previous, terminated instance of a container, which is useful when a container has crashed and been restarted.
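For example (the pod name is hypothetical):

```shell
# Logs of the currently running container in the pod
kubectl logs mypod-0

# Logs of the previous, terminated instance of the container
kubectl logs -p mypod-0
```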

What is authorization in Kubernetes?

Kubernetes authorizes API requests using the API server. It evaluates all of the request attributes against all policies and allows or denies the request. All parts of an API request must be allowed by some policy in order to proceed. This means that permissions are denied by default.

How do I follow kubectl logs?

The default logging tool is the kubectl logs command, which retrieves logs from a specific pod or container. Running it with the --follow (-f) flag streams logs from the specified resource, letting you live-tail them from your terminal.
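For instance (pod and container names hypothetical):

```shell
# Stream new log lines as they are written, starting from the last 20 lines
kubectl logs --follow --tail=20 mypod-0

# For a multi-container pod, select the container with -c
kubectl logs -f mypod-0 -c app
```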


3 Answers

You need to create a ClusterRoleBinding that binds a suitable ClusterRole to the user kube-apiserver-kubelet-client:

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubelet-api-admin
subjects:
- kind: User
  name: kube-apiserver-kubelet-client
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:kubelet-api-admin
  apiGroup: rbac.authorization.k8s.io

system:kubelet-api-admin is a built-in ClusterRole that has the necessary permissions, but you can substitute any appropriate role.
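If you prefer not to write the manifest by hand, the equivalent binding can be created imperatively (the binding name kubelet-api-admin is your choice):

```shell
kubectl create clusterrolebinding kubelet-api-admin \
  --clusterrole=system:kubelet-api-admin \
  --user=kube-apiserver-kubelet-client
```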

C0d3ine answered Oct 23 '22

On an on-prem cluster, I had an issue after changing the DNS address of the master. You need to update the DNS name in /etc/kubernetes/kubelet.conf on each node and then run sudo systemctl restart kubelet.service.
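For reference, the field to update is the server URL in the kubeconfig-style /etc/kubernetes/kubelet.conf; a minimal sketch, with hypothetical hostnames:

```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <unchanged>
    server: https://new-master.example.com:6443   # point this at the master's new DNS name
  name: kubernetes
```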

jmcgrath207 answered Oct 23 '22


I solved this issue by editing the aws-auth ConfigMap, adding the group system:nodes to the worker node's role mapping:

apiVersion: v1
data:
  mapRoles: |
    - rolearn: 'WORKER ROLE'
      username: 'NAME'
      groups:
        - ...
        - system:nodes
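On EKS, this ConfigMap lives in the kube-system namespace and can be opened for editing with:

```shell
kubectl edit configmap aws-auth -n kube-system
```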
pcampana answered Oct 23 '22