Kubernetes python client: authentication issue

We are using the kubernetes python client (4.0.0) in combination with Google's kubernetes engine (master + node pools run k8s 1.8.4) to periodically schedule workloads on kubernetes. A simplified version of the script we use to create the pod, attach to the logs, and report the pod's end status looks as follows:

import logging

from kubernetes import client, config

# Load credentials from the local kubeconfig (gcp auth-provider)
config.load_kube_config(persist_config=False)
v1 = client.CoreV1Api()

# Create the pod and stream its logs until it terminates
v1.create_namespaced_pod(body=pod_specs_dict, namespace=args.namespace)
logging_response = v1.read_namespaced_pod_log(
    name=pod_name,
    namespace=args.namespace,
    follow=True,
    _preload_content=False
)
for line in logging_response:
    line = line.rstrip()
    logging.info(line)

# Report the pod's end status
status_response = v1.read_namespaced_pod_status(pod_name, namespace=args.namespace)
print("Pod ended in status: {}".format(status_response.status.phase))

Everything works fine, but we are experiencing authentication issues. Authentication happens through the default gcp auth-provider, for which I obtained the initial access token by running gcloud container clusters get-credentials manually on the scheduler. At seemingly random times, some API calls result in a 401 response from the API server. My guess is that this happens whenever the access token expires and the script tries to obtain a new one. However, multiple scripts often run concurrently on the scheduler, so the token gets refreshed multiple times, and only the most recently obtained one remains valid.

I tried multiple ways to fix the issue (using persist_config=True, retrying 401s after reloading the config as sketched below, ...) without any success. As I don't fully understand how gcp authentication and the kubernetes python client configuration work (and docs for both are rather scarce), I am a bit left in the dark.
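Roughly, the retry attempt looked like this (a reconstruction of the approach rather than our exact code; call_with_reauth is just an illustrative name):

from kubernetes import client, config
from kubernetes.client.rest import ApiException

def call_with_reauth(fn, *args, **kwargs):
    try:
        return fn(*args, **kwargs)
    except ApiException as e:
        if e.status != 401:
            raise
        # Reload the kubeconfig in the hope of picking up a fresh token,
        # then retry the call once
        config.load_kube_config(persist_config=False)
        return fn(*args, **kwargs)

Even with this in place we kept seeing sporadic 401s.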

Should we use another authentication method instead of the gcp auth-provider? Is this a bug in the kubernetes python client? Should we use multiple config files?

asked Jan 08 '18 by krelst
2 Answers

In the end we solved this by using bearer token authentication instead of relying on the default gcloud authentication method.

Here are the steps I took to achieve this.

First, create a service account in the desired namespace by creating a file with the following content:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: <name_of_service_account>

Then use this file to create the service account:

kubectl create -f <path_to_file> --namespace=<namespace_name>

Each service account has a bearer token linked to it, which can be used for authentication. This bearer token is automatically mounted as a secret into the namespace. To find out what this token is, first find the name of the secret (it is of the form <service_account_name>-token-<random_string>) and then use that name to get its content.

# To find our service account's token secret name
kubectl get secrets --namespace=<namespace_name>

# To show the secret's content, including the bearer token
kubectl describe secret/<secret_name>

After this, find the IP address of the API server and the cluster CA certificate of the kubernetes cluster. This can be done on the kubernetes engine detail page in the google cloud console. Copy the content of the certificate into a local file.

You can now use the bearer token to authenticate via the kubernetes python client, as follows:

from kubernetes import client

configuration = client.Configuration()
# The service account's token, sent as an "Authorization: Bearer <token>" header
configuration.api_key["authorization"] = '<bearer_token>'
configuration.api_key_prefix['authorization'] = 'Bearer'
# The API server address and the local file holding the cluster CA certificate
configuration.host = 'https://<ip_of_api_server>'
configuration.ssl_ca_cert = '<path_to_cluster_ca_certificate>'

v1 = client.CoreV1Api(client.ApiClient(configuration))
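With this configuration the client can be used exactly as in the script from the question. For example (a minimal sketch; pod_specs_dict, pod_name and <namespace_name> are placeholders, as before):

# Create a pod and stream its logs, now authenticated via the bearer token
v1.create_namespaced_pod(body=pod_specs_dict, namespace='<namespace_name>')
log_stream = v1.read_namespaced_pod_log(
    name=pod_name,
    namespace='<namespace_name>',
    follow=True,
    _preload_content=False
)
for line in log_stream:
    print(line.rstrip())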
answered Nov 17 '22 by krelst

I have a python container using the Kubernetes client and was looking for a way to have it use a service account when executing in cluster, but load a mounted kube config when executing locally. It took me a while to find load_incluster_config(), which will automatically configure the client based on the container's service account when executing in cluster. I now switch on an env var when running locally (sketched below). This example might be helpful for you:

https://github.com/kubernetes-client/python/blob/master/examples/in_cluster_config.py
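A minimal sketch of that switch (the env var name RUN_LOCAL is just an example; any marker you control works):

import os

from kubernetes import client, config

if os.environ.get("RUN_LOCAL"):
    # Local execution: read the (mounted) kubeconfig file
    config.load_kube_config()
else:
    # In-cluster execution: use the pod's service account token, which is
    # mounted automatically at /var/run/secrets/kubernetes.io/serviceaccount
    config.load_incluster_config()

v1 = client.CoreV1Api()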

answered Nov 17 '22 by sporkthrower