
Best practices for storing kubernetes configuration in source control [closed]

In several places on the Kubernetes documentation site they recommend that you store your configuration YAML files inside source control for easy version-tracking, rollback, and deployment.

My colleagues and I are currently in the process of trying to decide on the structure of our git repository.

  • We have decided that since configuration can change without any changes to the app code, that we would like to store configurations in a separate, shared repository.
  • We may need multiple versions of some components running side-by-side within a given environment (cluster). These versions may have different configurations.

There seem to be a lot of potential variations, and all of them have shortcomings. What is the accepted way to structure such a repository?

Kir asked Nov 07 '17 at 22:11


People also ask

Where is Kubernetes config stored?

Configuration files are typically stored in source control, such as Git. The live configuration of an object is the set of configuration values as observed by the Kubernetes cluster; these are kept in the cluster's storage, typically etcd.

Where do I put Kubernetes YAML files?

As mentioned above, using YAML files allows you to declaratively manage your Kubernetes applications. These YAML files can be stored in a common directory and can all be applied with kubectl apply -f <directory>.

How do I organize my Kubernetes files?

So here it is, my recommendation for organizing your Kubernetes manifests: group manifest files in directories named after the Kind of object: deployments, configmaps, services, etc. Note that the directory names are lowercased and pluralized.
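Under those conventions, a sketch of such a layout (the `web` service and its manifest contents are invented for illustration):

```shell
# Group manifests by Kind, one directory per Kind (lowercased, pluralized).
mkdir -p manifests/deployments manifests/configmaps manifests/services
cat > manifests/services/web.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
EOF
# The whole tree (or one Kind at a time) can then be applied with, e.g.:
#   kubectl apply -f manifests/ --recursive
ls manifests
```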

What happens if you bind a pod to a hostPort?

When you bind a Pod to a hostPort, it limits the number of places the Pod can be scheduled, because each <hostIP, hostPort, protocol> combination must be unique. If you don't specify the hostIP and protocol explicitly, Kubernetes will use 0.0.0.0 as the default hostIP and TCP as the default protocol.


2 Answers

There is no established standard yet, I believe. I find helm's charts too complicated to start with, especially since they add another unmanaged component running on the k8s cluster. This is a workflow we follow that works quite well for a setup of roughly 15 microservices and 5 different environments (devx2, staging, qa, prod).

The 2 key ideas:

  1. Store kubernetes configurations in the same source repo as the rest of the build tooling, e.g. alongside the microservice source code that has the tooling for building/releasing that particular microservice.
  2. Template the kubernetes configuration with something like jinja and render the templates according to the environment you're targeting.

The tooling is reasonably straightforward to put together with a few bash scripts, integration with a Makefile, etc.
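A minimal stand-in for the rendering step, using sed in place of jinja; the template, variable names, and environment name here are all made up for illustration:

```shell
# A hypothetical template checked into the service's repo.
cat > deployment.yaml.tmpl <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myservice-{{ env }}
spec:
  replicas: {{ replicas }}
EOF

# Render the template for a target environment; the result could then be
# applied with: kubectl apply -f deployment.staging.yaml
sed -e 's/{{ env }}/staging/' -e 's/{{ replicas }}/2/' \
    deployment.yaml.tmpl > deployment.staging.yaml
cat deployment.staging.yaml
```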

EDIT: to answer some of the questions in the comments

The application source code repository is used as the single source of truth. That means that if everything works as it should, changes never flow from the kubernetes cluster back to the repository.

Making changes directly on the cluster is prohibited in our workflow. If it ever does happen, we have to manually make sure those changes enter the application repository again.

Again, just to note: the configurations stored in the source code are actually templates and use secretKeyRef quite liberally. This means that some configuration values come in from the CI tooling as the templates are rendered, and some come from secrets that live only on the cluster (like database passwords, API tokens, etc.).
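A sketch of how such a template might reference a cluster-side secret via secretKeyRef; the Secret name and key are invented for illustration:

```yaml
# Hypothetical container env entry: the database password is never stored in
# the repo; it is resolved at runtime from a Secret that exists only on the cluster.
env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: myservice-secrets   # assumed Secret name
        key: db-password          # assumed key inside that Secret
```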

iamnat answered Sep 21 '22 at 15:09


In my opinion, Helm is to Kubernetes as docker-compose is to Docker.

There is no reason to fear helm: in its most basic functionality, all it does is similar to kubectl apply -f templates.

Once you get familiar with helm you can start using values.yaml and adding values into your kubernetes templates for maximum flexibility.

values.yaml

name: my-name

inside templates/deployment.yaml

name: {{ .Values.name }}

https://helm.sh/
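To make the snippets above concrete, here is a minimal chart skeleton (the chart name and file contents are illustrative); `helm template` renders it locally without touching a cluster:

```shell
# Lay out a minimal chart matching the values.yaml / templates example above.
mkdir -p mychart/templates
cat > mychart/Chart.yaml <<'EOF'
apiVersion: v2
name: mychart
version: 0.1.0
EOF
cat > mychart/values.yaml <<'EOF'
name: my-name
EOF
cat > mychart/templates/deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.name }}
EOF
# Render locally without a cluster:
#   helm template mychart
# Override values per environment:
#   helm template mychart --set name=other-name
ls mychart
```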

Here are some approaches to using helm "infrastructure as code". Regardless of which approach you use, remember that you can also maintain a helm repository to distribute helm charts.

  1. Create a helm subdirectory in each project, the same way that you may include a docker-compose.yml file.

  2. Create a separate helm repository for each chart and control it individually from the application code. This may be a better approach when code and infrastructure are managed by separate teams.

  3. Store all helm charts in a central repository. This is useful for easily distributing your charts, but may cause confusion when many teams are working on different charts.

  4. If you want the benefits of method 3 with the clear ownership of method 2, you can use method 2 and additionally create a git repo of submodules that pulls from the individual chart repos, each maintained by its appropriate owners.
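A sketch of method 4, with an umbrella repo aggregating an owner-maintained chart repo as a submodule; all repo names and paths are made up, and a local repo stands in for the remote:

```shell
set -e
# Stand-in for an owner-maintained chart repo (would normally be a remote).
git init -q chart-a
git -C chart-a -c user.email=a@example.com -c user.name=a \
    commit -q --allow-empty -m 'initial chart'
ROOT=$(pwd)

# Umbrella repo that pulls in each chart repo as a submodule.
git init -q umbrella
# protocol.file.allow is only needed because the "remote" is a local path here.
git -C umbrella -c protocol.file.allow=always \
    submodule add "$ROOT/chart-a" charts/chart-a
git -C umbrella submodule status
```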

yosefrow answered Sep 19 '22 at 15:09