I'm running Kubernetes on-prem.
While deploying GitLab on Kubernetes I ran into a problem. I think it's related to the service account or the role binding, but I couldn't find the correct way to fix it.
I found these posts
Kubernetes log, User "system:serviceaccount:default:default" cannot get services in the namespace
https://github.com/kubernetes/kops/issues/3551
==> /var/log/gitlab/prometheus/current <==
2018-12-24_03:06:08.88786 level=error ts=2018-12-24T03:06:08.887812767Z caller=main.go:240 component=k8s_client_runtime err="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:372: Failed to list *v1.Node: nodes is forbidden: User \"system:serviceaccount:default:default\" cannot list resource \"nodes\" in API group \"\" at the cluster scope"
2018-12-24_03:06:08.89075 level=error ts=2018-12-24T03:06:08.890719525Z caller=main.go:240 component=k8s_client_runtime err="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:320: Failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:default:default\" cannot list resource \"pods\" in API group \"\" at the cluster scope"
The issue is that your default service account doesn't have permission to list nodes or pods at the cluster scope. The minimal ClusterRole and ClusterRoleBinding to resolve that are:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prom-admin
rules:
# Just an example, feel free to change it
- apiGroups: [""]
  resources: ["pods", "nodes"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prom-rbac
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: ClusterRole
  name: prom-admin
  apiGroup: rbac.authorization.k8s.io
The above ClusterRole grants the default service account permission to get, watch, and list pods and nodes in any namespace.
You can extend the ClusterRole to grant more permissions to the service account; if you want to grant access to all resources, replace resources: ["pods", "nodes"] with resources: ["*"] in prom-admin.
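After applying the manifests, you can sanity-check that the permissions took effect with kubectl auth can-i. This is a sketch against a live cluster; prom-rbac.yaml is just an assumed file name for the manifests above:

```shell
# Save the ClusterRole and ClusterRoleBinding above as prom-rbac.yaml
# (assumed file name), then apply them:
kubectl apply -f prom-rbac.yaml

# Check whether the default service account can now list nodes
# at the cluster scope:
kubectl auth can-i list nodes \
  --as=system:serviceaccount:default:default

# And whether it can list pods across all namespaces:
kubectl auth can-i list pods --all-namespaces \
  --as=system:serviceaccount:default:default
```

If both commands print "yes", restart the Prometheus component and the "forbidden" errors in the log should stop.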
Hope this helps.