 

Kubernetes - different settings per environment

Tags:

kubernetes

We have an app that runs on GKE Kubernetes and expects an auth URL (to which the user will be redirected in their browser) to be passed as an environment variable.

We are using different namespaces per environment.

So our current pod config looks something like this:

  env:
    - name: ENV
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: AUTH_URL
      value: https://auth.$(ENV).example.org 

And it all works amazingly: we can have as many dynamic environments as we want, we just run kubectl apply -f config.yaml and it works flawlessly, without changing a single config file and without any third-party scripts.

Now for production we want to use a different domain, so the general pattern https://auth.$(ENV).example.org no longer works.

What options do we have?

  1. Since configs are in a Git repo, create a separate branch for the prod environment
  2. Have a default ConfigMap and a specific one for the prod environment, and run it via some script (if prod-config.yaml exists then use that, else use config.yaml) - but with this approach we cannot use kubectl directly anymore
  3. Move this config to the application level, and have a separate config file for the prod env - but this kind of goes against the 12-factor app?
  4. Other...?
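The script from option 2 could be a minimal sketch like the following. The file names (prod-config.yaml, config.yaml) are the ones from the option above; the kubectl call is left commented out so the selection logic can be seen in isolation:

```shell
#!/bin/sh
# Option 2 sketch: prefer a prod-specific config when it exists,
# otherwise fall back to the default config.
pick_config() {
  if [ -f prod-config.yaml ]; then
    echo prod-config.yaml
  else
    echo config.yaml
  fi
}

CONFIG="$(pick_config)"
echo "applying $CONFIG"
# kubectl apply -f "$CONFIG"
```

This is exactly the drawback noted in option 2: the selection happens outside kubectl, so plain `kubectl apply -f` no longer works on its own.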
asked Feb 04 '23 by gerasalus

2 Answers

This seems like an ideal opportunity to use helm!

It's really easy to get started: with Helm 2 you simply install Tiller into your cluster (Helm 3 removed Tiller, so no in-cluster component is needed).

Helm gives you the ability to create "charts" (which are like packages) that can be installed into your cluster, and you can template them really easily. As an example, you might have your config.yaml look like this:

env:
  - name: AUTH_URL
    value: {{ .Values.auth.url }} 

Then, within the helm chart you have a values.yaml which contains defaults for the url, for example:

auth:
  url: https://auth.namespace.example.org

You can use the --values option with helm to specify per-environment values.yaml files, or even use the --set flag to override individual values when running helm install.
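For example, a hypothetical prod-values.yaml could override just the domain for production (the file name and URL here are assumptions, not part of the question):

```yaml
# prod-values.yaml (hypothetical): production overrides for the chart defaults
auth:
  url: https://auth.example.com
```

Installing with `helm install -f prod-values.yaml ./chart` (or overriding inline with `--set auth.url=https://auth.example.com`) would then render the template with the production URL, while every other environment keeps the defaults from values.yaml.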

Take a look at the Helm documentation for information about how values and templating work in Helm. It seems perfect for your use case.

answered Feb 11 '23 by jaxxstorm


jaxxstorm's answer is helpful; I just want to add what it means for the options you proposed:

  1. Since configs are in a Git repo, create a separate branch for the prod environment.

I would not recommend separate branches in Git, since the purpose of branches is to allow concurrent editing of the same data, but what you have is different data (different configurations for the cluster).

  2. Have a default ConfigMap and a specific one for the prod environment, and run it via some script (if prod-config.yaml exists then use that, else use config.yaml) - but with this approach we cannot use kubectl directly anymore.

Using Helm will solve this more elegantly. Instead of a script, you use Helm to generate the different files for different environments, and you can still use kubectl on the final files (which I would also check into Git, by the way).
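A sketch of that workflow, assuming a chart directory and a per-environment values file (the release, chart, and file names here are all assumptions):

```shell
# Render the chart to plain manifests (no cluster access needed),
# then apply the rendered files with plain kubectl.
helm template my-release ./chart -f prod-values.yaml > prod-manifests.yaml
kubectl apply -f prod-manifests.yaml
```

This keeps kubectl as the only tool that ever talks to the cluster; Helm is used purely as a templating step.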

  3. Move this config to the application level, and have a separate config file for the prod env - but this kind of goes against the 12-factor app?

This is a matter of opinion, but I would recommend in general to split up the deployments by applications and technologies. For example, when I deploy a cluster that runs three different applications A, B, and C, and each application requires an Nginx, a CockroachDB, and Go app servers, then I'll have 9 configuration files. This allows me to separately deploy or update each of the technologies in the app context, which is important for allowing separate deployment actions in a CI server such as Jenkins, and it follows the general principle of separation of concerns.
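On disk, such a split might look like this (the names are purely illustrative):

```
app-a/nginx.yaml
app-a/cockroachdb.yaml
app-a/goapp.yaml
app-b/nginx.yaml
app-b/cockroachdb.yaml
app-b/goapp.yaml
app-c/nginx.yaml
app-c/cockroachdb.yaml
app-c/goapp.yaml
```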

  4. Other...?

See jaxxstorm's answer about Helm.

answered Feb 11 '23 by Oswin Noetzelmann