 

kubernetes pods/nodes is forbidden

I'm using Kubernetes on-prem.

While setting up GitLab on Kubernetes, I ran into a problem. I think it's related to the service account or role binding, but I couldn't find the correct way to fix it.

I found these posts

Kubernetes log, User "system:serviceaccount:default:default" cannot get services in the namespace

https://github.com/kubernetes/kops/issues/3551

My error logs:

==> /var/log/gitlab/prometheus/current <==
2018-12-24_03:06:08.88786 level=error ts=2018-12-24T03:06:08.887812767Z caller=main.go:240 component=k8s_client_runtime err="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:372: Failed to list *v1.Node: nodes is forbidden: User \"system:serviceaccount:default:default\" cannot list resource \"nodes\" in API group \"\" at the cluster scope"
2018-12-24_03:06:08.89075 level=error ts=2018-12-24T03:06:08.890719525Z caller=main.go:240 component=k8s_client_runtime err="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:320: Failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:default:default\" cannot list resource \"pods\" in API group \"\" at the cluster scope"
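
The same denial can be reproduced outside of GitLab by impersonating the service account from the error message (a quick check, assuming kubectl access with rights to impersonate; nothing here is GitLab-specific):

# Ask the API server whether the default service account may list nodes and pods
kubectl auth can-i list nodes --as=system:serviceaccount:default:default
kubectl auth can-i list pods --as=system:serviceaccount:default:default --all-namespaces

Both commands return no in my cluster, matching the forbidden errors above.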
Asked Dec 24 '18 by Siner


1 Answer

The issue is that your default service account doesn't have permission to list nodes or pods at the cluster scope. A minimal ClusterRole and ClusterRoleBinding to resolve it is:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prom-admin
rules:
# Just an example, feel free to change it
- apiGroups: [""]
  resources: ["pods", "nodes"]
  verbs: ["get", "watch", "list"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prom-rbac
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: ClusterRole
  name: prom-admin
  apiGroup: rbac.authorization.k8s.io

The above ClusterRole grants the default service account permission to get, watch, and list pods and nodes in any namespace.

You can extend the ClusterRole to grant more permissions to the service account; if you want to give the default service account access to all resources, replace the resources list with resources: ["*"] in prom-admin.
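
As a quick verification (a sketch, assuming kubectl access and that the manifest above is saved as prom-rbac.yaml, which is just an example filename):

# Create the ClusterRole and ClusterRoleBinding
kubectl apply -f prom-rbac.yaml

# Confirm that the default service account can now list nodes and pods cluster-wide
kubectl auth can-i list nodes --as=system:serviceaccount:default:default
kubectl auth can-i list pods --as=system:serviceaccount:default:default --all-namespaces

Both checks should return yes, and the forbidden errors in /var/log/gitlab/prometheus/current should stop appearing.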

Hope this helps.

Answered by Prafull Ladha