
pods is forbidden: User "system:serviceaccount:kubernetes-dashboard:admin-user" cannot list resource "pods" in API group "" in the namespace "default"

I am trying to set up Kubernetes on Ubuntu 18.04 by following this article.

Everything works fine, but when I try to access the local Kubernetes dashboard, it shows up empty: nothing is visible, such as pods, services & deployments.

However, when I run $> kubectl get pods,svc,deployments it shows the following output. If the command line shows all the details, why am I seeing an empty Kubernetes dashboard?

I have already run the following commands:

$> kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml

$> kubectl proxy
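
With the proxy running, I open the dashboard in the browser at the default proxy URL (assuming the standard v2 dashboard install):

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/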

Am I missing any configuration here? Any suggestions to fix this issue?

$> kubectl get pods --all-namespaces
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE

kubernetes-dashboard   dashboard-metrics-scraper-76585494d8-4rrdp   1/1     Running   3          46h
kubernetes-dashboard   kubernetes-dashboard-5996555fd8-sxgxf        1/1     Running   16         46h

After looking at the notification section, I found these errors:

  1. events is forbidden: User "system:serviceaccount:kubernetes-dashboard:admin-user" cannot list resource "events" in API group "" in the namespace "default"

  2. pods is forbidden: User "system:serviceaccount:kubernetes-dashboard:admin-user" cannot list resource "pods" in API group "" in the namespace "default"
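
The same denial can also be reproduced from the command line by impersonating the dashboard's service account (shown here only as a diagnostic sketch):

$> kubectl auth can-i list pods -n default --as=system:serviceaccount:kubernetes-dashboard:admin-user
no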


Update 1:

It's working now after applying the following RBAC configuration with kubectl apply -f filename.yml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
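
To log in, I then looked up the bearer token for this admin-user account (a sketch assuming the token secret was auto-created in kube-system, which is the case on this cluster version; on newer clusters the ClusterRoleBinding would also use apiVersion rbac.authorization.k8s.io/v1):

$> kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')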
Asked Jan 05 '20 by Kundan


2 Answers

You probably need to bind the dashboard service account to the cluster-admin role:

kubectl create clusterrolebinding dashboard-admin-sa --clusterrole=cluster-admin --serviceaccount=default:dashboard-admin-sa

Otherwise, the dashboard service account doesn't have access to the data that would populate the dashboard.
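
Note that the command above assumes a service account named dashboard-admin-sa already exists in the default namespace. If it doesn't, a minimal sketch of creating it and pulling its login token (the secret name will differ on your cluster):

kubectl create serviceaccount dashboard-admin-sa -n default

kubectl -n default describe secret $(kubectl -n default get secret | grep dashboard-admin-sa | awk '{print $1}')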

Answered Sep 20 '22 by Nice-Guy


I am answering this based on my experience with dashboard v2.1.0 on K8s v1.20. When kubernetes-dashboard is installed, it creates a service account and two roles, both named "kubernetes-dashboard": a namespaced Role bound inside the dashboard namespace and a ClusterRole bound cluster-wide (but not cluster-admin). So, unfortunately, the permissions are not sufficient to manage the entire cluster, as can be seen here:

default account unable to see cluster data

Log from installation:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.1.0/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

Looking at the permissions, you see:

$ kubectl describe clusterrole kubernetes-dashboard
Name:         kubernetes-dashboard
Labels:       k8s-app=kubernetes-dashboard
Annotations:  <none>
PolicyRule:
Resources             Non-Resource URLs  Resource Names  Verbs
---------             -----------------  --------------  -----
nodes.metrics.k8s.io  []                 []              [get list watch]
pods.metrics.k8s.io   []                 []              [get list watch]

$ kubectl describe role kubernetes-dashboard -n kubernetes-dashboard
Name:         kubernetes-dashboard
Labels:       k8s-app=kubernetes-dashboard
Annotations:  <none>
PolicyRule:
Resources       Non-Resource URLs  Resource Names                     Verbs
---------       -----------------  --------------                     -----
secrets         []                 [kubernetes-dashboard-certs]       [get update delete]
secrets         []                 [kubernetes-dashboard-csrf]        [get update delete]
secrets         []                 [kubernetes-dashboard-key-holder]  [get update delete]
configmaps      []                 [kubernetes-dashboard-settings]    [get update]
services/proxy  []                 [dashboard-metrics-scraper]        [get]
services/proxy  []                 [heapster]                         [get]
services/proxy  []                 [http:dashboard-metrics-scraper]   [get]
services/proxy  []                 [http:heapster:]                   [get]
services/proxy  []                 [https:heapster:]                  [get]
services        []                 [dashboard-metrics-scraper]        [proxy]
services        []                 [heapster]                         [proxy]

Rather than making the kubernetes-dashboard service account a cluster-admin (that account is used for the dashboard's own data collection), a better approach is to create a new service account that only holds a token; that way the account can easily be revoked later, instead of having to change the permissions of a pre-created account.

To create a new service account called "dashboard-admin" and apply it declaratively:

$ nano dashboard-svcacct.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kubernetes-dashboard

$ kubectl apply -f dashboard-svcacct.yaml
serviceaccount/dashboard-admin created

To bind that new service account to the cluster-admin role:

$ nano dashboard-binding.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard

$ kubectl apply -f dashboard-binding.yaml
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
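
(The same two objects can also be created imperatively rather than from YAML files; an equivalent sketch:)

$ kubectl -n kubernetes-dashboard create serviceaccount dashboard-admin
$ kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin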

To extract the token from this service account, which can be used to log in:

$ kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')
Name:         dashboard-admin-token-4fxtt
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 9cd5bb80-7901-413b-9eac-7b72c353d4b9

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1066 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6Ikp3ZERpQTFPOV<REDACTED>
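
(On Kubernetes 1.24 and later, token secrets are no longer auto-created for service accounts, so the lookup above would come up empty; a short-lived token can be requested instead:)

$ kubectl -n kubernetes-dashboard create token dashboard-admin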

The entire token, which starts with "eyJ", can now be used to log in to the dashboard.


But cut & paste of the token at every login can become a pain in the rear, especially given the default session timeout. I prefer a config file. For this option the cluster's certificate-authority data is needed. The cluster section of this config file is the same as in the config file under ~/.kube/config. This config file does not need to be loaded onto the Kubernetes master; it only needs to be on the workstation with the browser from which the dashboard is being accessed. I named it dashboard-config and used VS Code to create it (any editor works; just make sure the text is not wrapped, so that there are no stray spaces inside the certificate and token values). There is no need to copy the admin user's client-certificate and client-key data under users: when copying from the existing config file.

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <CLUSTER CA HASH HERE>
    server: https://<IP ADDR OF CLUSTER>:6443
  name: kubernetes #name of cluster
contexts:
- context:
    cluster: kubernetes
    user: dashboard-admin
  name: dashboard-admin@kubernetes
current-context: dashboard-admin@kubernetes
kind: Config
preferences: {}
users:
- name: dashboard-admin
  user:
    token: <TOKEN from the command above, starting with eyJ>

And it works now.
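
As a quick sanity check (just an illustration, the dashboard itself only needs the file at its login screen), the same config can be exercised with kubectl:

$ kubectl --kubeconfig=dashboard-config get pods -A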

Answered Sep 21 '22 by FastGTR