When you use minikube, it automatically creates the local configuration, so it's ready to use. Based on the reference for kubectl config, it appears the kubectl command supports multiple clusters.
The docs for setting up clusters mention copying the relevant files to your local machine to access the cluster, and I also found an SO Q&A about editing your .kube/config file to access Azure remotely.
It looks like the environment variable $KUBECONFIG can reference multiple locations for these configuration files, with the built-in default being ~/.kube/config (which is what minikube creates).
If I want to be able to use kubectl to invoke commands against multiple clusters, should I download the relevant config file into a new location (for example, ~/gcloud/config) and set the KUBECONFIG environment variable to reference both locations? Or is it better to just explicitly use the --kubeconfig option when invoking kubectl to specify a configuration for the cluster?
I wasn't sure whether there was some better way of merging the configuration files, leveraging the kubectl config set-context or kubectl config set-cluster commands instead. The Kubernetes documentation on "Configure Access to Multiple Clusters" seems to imply a different means of using --kubeconfig along with these kubectl config commands.
In short, what's the best way to interact with multiple separate Kubernetes clusters, and what are the trade-offs?
If I want to be able to use kubectl to invoke commands against multiple clusters, should I download the relevant config file into a new location (for example, ~/gcloud/config) and set the KUBECONFIG environment variable to reference both locations? Or is it better to just explicitly use the --kubeconfig option when invoking kubectl to specify a configuration for the cluster?
That probably depends on which approach you find simpler and more convenient, and on whether you need to keep security and access-management concerns in mind.
In our experience, merging various kubeconfig files is very useful for multi-cluster operations such as maintenance tasks and incident management across a group of clusters (contexts & namespaces). It simplifies troubleshooting because you can compare configs, manifests, resources, and the state of K8s services, pods, volumes, namespaces, replica sets, etc.
However, when automation and deployment (with tools like Jenkins, Spinnaker, or Helm) are involved, keeping separate kubeconfig files is most likely a good idea. A hybrid approach is to merge kubeconfig files along some dividing line: by service tier, using files to partition development landscapes (dev, qa, stg, prod), or by team, matching roles and responsibilities in an enterprise (teamA, teamB, …, teamN). Both are good alternatives.
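For illustration, a hypothetical on-disk layout for that hybrid partitioning might look like this (all file and directory names here are assumptions, not a convention any tool requires):
#
# Hypothetical layout: one kubeconfig file per landscape, plus per-team files
#
~/.kube/configs/
├── dev.yaml
├── qa.yaml
├── stg.yaml
├── prod.yaml
└── teams/
    ├── teamA.yaml
    └── teamB.yaml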
For scenarios with a multi-cluster merged kubeconfig file, consider kubectx + kubens, two very powerful companion tools for kubectl that let you see the current context (cluster) and namespace, and switch between them.
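As a quick sketch of how they are used (the context and namespace names here are hypothetical):
#
# kubectx/kubens usage sketch
#
kubectx                # list all contexts in the active kubeconfig
kubectx cluster-1      # switch the current context to cluster-1
kubectx -              # switch back to the previous context
kubens kube-system     # switch the current namespace to kube-system
kubens -               # switch back to the previous namespace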
In short, what's the best way to interact with multiple separate Kubernetes clusters, and what are the trade-offs?
The trade-offs should be analyzed against the factors that matter most for your project. Having a single merged kubeconfig file seems simpler, and simpler still if you merge it into ~/.kube/config so kubectl uses it by default, switching between clusters/namespaces with the kubectl --context flag. On the other hand, if limiting the scope of each kubeconfig is a must, keeping them segregated and using --kubeconfig=file1 sounds like the best way to go.
There is probably NOT one best way for every case and scenario; knowing how to configure kubeconfig files and understanding their precedence will help, though.
In this article -> https://www.nrmitchi.com/2019/01/managing-kubeconfig-files/ you'll find a complementary and valuable opinion:
While having all of the contexts you may need in one file is nice, it is difficult to maintain, and seldom the default case. Multiple tools which provide you with access credentials will provide a fresh kubeconfig to use. While you can merge the configs together into ~/.kube/config, it is manual, and makes removing contexts more difficult (having to explicitly remove the context, cluster, and user). There is an open issue in Kubernetes tracking this. However by keeping each provided config file separate, and just loading all of them, removal is much easier (just remove the file). To me, this seems like a much more manageable approach.
I prefer to keep all individual config files under ~/.kube/configs and take advantage of the multiple-path aspect of the $KUBECONFIG environment variable to make this happen.
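One way to load every file in that directory is a sketch like the following (it assumes the ~/.kube/configs layout above, and that filenames contain no whitespace; kubectl ignores empty entries in the colon-separated list, so the trailing colon is harmless):
#
# Build KUBECONFIG from every file under ~/.kube/configs
#
export KUBECONFIG=$(find ~/.kube/configs -type f | tr '\n' ':')
kubectl config get-contexts   # now lists contexts from all files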
If you're using kubectl, here's the order of preference that takes effect when determining which kubeconfig file is used:
1. --kubeconfig flag, if specified
2. KUBECONFIG environment variable, if specified
3. $HOME/.kube/config file
With this, you can easily override the kubeconfig file you use per kubectl command:
#
# using --kubeconfig flag
#
kubectl get pods --kubeconfig=file1
kubectl get pods --kubeconfig=file2
#
# or
# using `KUBECONFIG` environment variable
#
KUBECONFIG=file1 kubectl get pods
KUBECONFIG=file2 kubectl get pods
#
# or
# merging your kubeconfig file w/ $HOME/.kube/config (w/ cp backup)
#
cp $HOME/.kube/config $HOME/.kube/config.backup.$(date +%Y-%m-%d.%H:%M:%S)
KUBECONFIG=$HOME/.kube/config:file2:file3 kubectl config view --merge --flatten > \
~/.kube/merged_kubeconfig && mv ~/.kube/merged_kubeconfig ~/.kube/config
kubectl get pods --context=cluster-1
kubectl get pods --context=cluster-2
NOTE: The --minify flag allows us to extract only the info about the current context, and the --flatten flag allows us to keep the credentials unredacted (inlined into the output rather than referenced as file paths).
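For example, here is a sketch of extracting a single context into its own standalone file (the context and file names are hypothetical):
#
# Extract only cluster-1's context, cluster, and user into a standalone file
#
kubectl config view --minify --flatten --context=cluster-1 > cluster-1.yaml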
You can save AKS (Azure Kubernetes Service), AWS EKS (Elastic Kubernetes Service), or GKE (Google Kubernetes Engine) cluster contexts to separate files and set the KUBECONFIG env var to reference all of the file locations.
For instance, when you create a GKE cluster (or retrieve its credentials) through the gcloud command, it normally modifies your default ~/.kube/config file. However, you can set $KUBECONFIG for gcloud to save cluster credentials to a file:
KUBECONFIG=c1.yaml gcloud container clusters get-credentials "cluster-1"
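The other providers have similar options for writing credentials to a dedicated file: aws eks update-kubeconfig accepts a --kubeconfig flag, and az aks get-credentials accepts --file. A sketch (cluster, resource-group, and file names here are hypothetical):
#
# EKS: write cluster credentials to a dedicated file
#
aws eks update-kubeconfig --name cluster-2 --kubeconfig c2.yaml
#
# AKS: write cluster credentials to a dedicated file
#
az aks get-credentials --resource-group my-rg --name cluster-3 --file c3.yaml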
Then, as mentioned before, using multiple kubeconfigs at once can be very useful for working with multiple contexts at the same time. To do that, you need a "merged" kubeconfig file. In the section "Merging kubeconfig files" below, we explain how you can merge the kubeconfigs into a single file, but you can also merge them in-memory: by specifying multiple files in the KUBECONFIG environment variable, you can temporarily stitch kubeconfig files together and use them all in kubectl.
#
# Kubeconfig in-memory merge
#
export KUBECONFIG=file1:file2
kubectl get pods --context=cluster-1
kubectl get pods --context=cluster-2
#
# For your example
# merging your kubeconfig file w/ $HOME/.kube/config (w/ cp backup)
#
cp $HOME/.kube/config $HOME/.kube/config.backup.$(date +%Y-%m-%d.%H:%M:%S)
KUBECONFIG=$HOME/.kube/config:file2 kubectl config view --merge --flatten > \
~/.kube/merged_kubeconfig && mv ~/.kube/merged_kubeconfig ~/.kube/config
kubectl get pods --context=cluster-1
kubectl get pods --context=cluster-2
Since kubeconfig files are structured YAML files, you can't just append them to get one big kubeconfig file, but kubectl can help you merge these files:
#
# Merging your kubeconfig file w/ $HOME/.kube/config (w/ cp backup)
#
cp $HOME/.kube/config $HOME/.kube/config.backup.$(date +%Y-%m-%d.%H:%M:%S)
KUBECONFIG=$HOME/.kube/config:file2:file3 kubectl config view --merge --flatten > \
~/.kube/merged_kubeconfig && mv ~/.kube/merged_kubeconfig ~/.kube/config
kubectl get pods --context=cluster-1
kubectl get pods --context=cluster-2
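One caveat of the merged-file approach, noted in the article quoted above, is that removing a cluster later means deleting its context, cluster, and user entries individually. A sketch with hypothetical entry names:
#
# Remove cluster-2 from the merged file: the context, cluster, and user
# entries must each be deleted separately
#
kubectl config delete-context cluster-2
kubectl config delete-cluster cluster-2
kubectl config unset users.cluster-2-admin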
I have a series of shell functions that boil down to kubectl --context=$CTX --namespace=$NS, allowing me to contextualize each shell [1] (see the sketch after the footnote below). But if you are cool with that approach, then rather than rolling your own, https://github.com/Comcast/k8sh will likely interest you; I just wish it were shell functions instead of a sub-shell. But otherwise, yes, I keep all the config values in the one ~/.kube/config.
footnote 1: if you weren't already aware, you can also change the title of terminal windows via title() { printf '\033]0;%s\007' "$*"; }, which I do in order to remind me which cluster/namespace/etc. is in effect for that tab/window.
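A minimal sketch of the kind of wrapper described above (kc, CTX, and NS are hypothetical names for illustration, not the author's actual functions):
#
# Per-shell kubectl wrapper plus terminal-title reminder (hypothetical sketch)
#
title() { printf '\033]0;%s\007' "$*"; }

kc() {
  kubectl --context="$CTX" --namespace="$NS" "$@"
}

# usage: set once per terminal window, then use kc like kubectl
CTX=cluster-1 NS=staging
title "$CTX/$NS"
kc get pods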