Hi, I saw documentation where kubectl can run inside a pod in the default namespace. Is it possible to run kubectl inside a Job resource in a specified namespace? I did not see any documentation or examples for this.
When I tried adding a ServiceAccount to the container, I got the error:
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:my-namespace:internal-kubectl" cannot list resource "pods" in API group "" in the namespace "my-namespace"
This happened when I exec'ed into the container and ran kubectl.
Edit: As mentioned earlier, I added the ServiceAccount based on the documentation. Below is the YAML:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: internal-kubectl
  namespace: my-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: modify-pods
  namespace: my-namespace
rules:
  - apiGroups: [""]
    resources:
      - pods
    verbs:
      - get
      - list
      - delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: modify-pods-to-sa
  namespace: my-namespace
subjects:
  - kind: ServiceAccount
    name: internal-kubectl
roleRef:
  kind: Role
  name: modify-pods
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: batch/v1
kind: Job
metadata:
  name: testing-stuff
  namespace: my-namespace
spec:
  template:
    metadata:
      name: testing-stuff
    spec:
      serviceAccountName: internal-kubectl
      containers:
        - name: tester
          image: bitnami/kubectl
          command:
            - "bin/bash"
            - "-c"
            - "kubectl get pods"
      restartPolicy: Never
On running the job, I get the error:
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:my-namespace:internal-kubectl" cannot list resource "pods" in API group "" in the namespace "my-namespace"
If you would like to query the API without an official client library, you can run kubectl proxy as the command of a new sidecar container in the Pod. This way, kubectl proxy will authenticate to the API and expose it on the localhost interface of the Pod, so that other containers in the Pod can use it directly.
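As a sketch of that sidecar approach (the pod name, container names, and image tags here are illustrative, not from the original post), the Pod could pair a `kubectl proxy` sidecar with a main container that queries the API over localhost:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: proxy-example          # illustrative name
  namespace: my-namespace
spec:
  serviceAccountName: internal-kubectl
  containers:
    # Sidecar: authenticates to the API server with the pod's
    # ServiceAccount token and exposes the API on localhost:8001.
    - name: kubectl-proxy
      image: bitnami/kubectl:1.17.3
      command: ["kubectl", "proxy", "--port=8001"]
    # Main container: queries the API through the sidecar without
    # needing a client library or credentials of its own.
    - name: main
      image: radial/busyboxplus:curl
      command:
        - "/bin/sh"
        - "-c"
        - "sleep 5; curl -s http://localhost:8001/api/v1/namespaces/my-namespace/pods"
```

The main container still needs the RBAC permissions granted to the ServiceAccount, since the proxy only forwards requests under that identity.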
By default, the kubectl command-line tool interacts with the default namespace. If you want to use a different namespace, you can pass kubectl the --namespace flag. For example, kubectl --namespace=mystuff references objects in the mystuff namespace.
Working with Kubernetes Namespaces: because passing the flag every time can be time-consuming, the default namespace can be changed with the kubectl config command, which sets the namespace in the current cluster context. To switch from the default namespace to K21, for example, run: kubectl config set-context --current --namespace=K21.
Is it possible to run kubectl inside a Job resource in a specified namespace? Did not see any documentation or examples for the same..
A Job creates one or more Pods and ensures that a specified number of them successfully terminate. This means the permission model is the same as for a normal pod, so yes, it is possible to run kubectl inside a Job resource.
TL;DR: Yes, it is possible to run kubectl inside a Job, as long as the Job's pod uses a ServiceAccount with the appropriate RBAC permissions.

Security Considerations: Binding the ServiceAccount with a ClusterRoleBinding to the cluster-admin role would also work, but it's over-permissioned and not recommended, since it grants full admin control over the entire cluster.

Test Environment: I tested with both bitnami/kubectl and bitnami/kubectl:1.17.3, and it worked in both cases. It's recommended to use a kubectl version that matches your server's.

Reproduction:
$ cat job-kubectl.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: testing-stuff
  namespace: my-namespace
spec:
  template:
    metadata:
      name: testing-stuff
    spec:
      serviceAccountName: internal-kubectl
      containers:
        - name: tester
          image: bitnami/kubectl:1.17.3
          command:
            - "/bin/bash"
            - "-c"
            - "kubectl get pods -n my-namespace"
      restartPolicy: Never
$ cat job-svc-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: internal-kubectl
  namespace: my-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: modify-pods
  namespace: my-namespace
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: modify-pods-to-sa
  namespace: my-namespace
subjects:
  - kind: ServiceAccount
    name: internal-kubectl
    namespace: my-namespace
roleRef:
  kind: Role
  name: modify-pods
  apiGroup: rbac.authorization.k8s.io
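If the Job ever needs to see pods beyond its own namespace, a ClusterRole carrying only the required verbs is a far safer alternative to cluster-admin. A sketch under that assumption (the resource names here are illustrative, not from the original post):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: read-pods-all-namespaces    # illustrative name
rules:
  # Read-only access to pods, cluster-wide; no delete, no other resources.
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-pods-all-namespaces-to-sa
subjects:
  - kind: ServiceAccount
    name: internal-kubectl
    namespace: my-namespace
roleRef:
  kind: ClusterRole
  name: read-pods-all-namespaces
  apiGroup: rbac.authorization.k8s.io
```

You can check what a ServiceAccount is allowed to do with kubectl auth can-i, e.g. kubectl auth can-i list pods --as=system:serviceaccount:my-namespace:internal-kubectl.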
I created two pods just to add entries to the output of get pods:

$ kubectl run curl --image=radial/busyboxplus:curl -i --tty --namespace my-namespace
the pod is running

$ kubectl run ubuntu --generator=run-pod/v1 --image=ubuntu -n my-namespace
pod/ubuntu created

Then I applied the Job, ServiceAccount, Role and RoleBinding manifests and listed the pods:
$ kubectl get pods -n my-namespace
NAME READY STATUS RESTARTS AGE
curl-69c656fd45-l5x2s 1/1 Running 1 88s
testing-stuff-ddpvf 0/1 Completed 0 13s
ubuntu 0/1 Completed 3 63s
$ kubectl logs testing-stuff-ddpvf -n my-namespace
NAME READY STATUS RESTARTS AGE
curl-69c656fd45-l5x2s 1/1 Running 1 76s
testing-stuff-ddpvf 1/1 Running 0 1s
ubuntu 1/1 Running 3 51s
As you can see, the Job ran successfully with the custom ServiceAccount.
Let me know if you have further questions about this case.