I have 4 Kubernetes/Helm deployments (web, emailworker, jobworker, sync) which all need to share exactly the same spec.template.spec.containers[].env
key. The env block is quite large and I'd like to avoid copy/pasting it into each deployment, e.g.:
# ...
env:
  - name: NODE_ENV
    value: "{{ .Values.node_env }}"
  - name: BASEURL
    value: "{{ .Values.base_url }}"
  - name: REDIS_HOST
    valueFrom:
      secretKeyRef:
        name: secret-redis
        key: host
  - name: KUE_PREFIX
    value: "{{ .Values.kue_prefix }}"
  - name: DATABASE_NAME
    value: "{{ .Values.database_name }}"
  - name: DATABASE_HOST
    valueFrom:
      secretKeyRef:
        name: secret-postgres
        key: host
  - name: DATABASE_USER
    valueFrom:
      secretKeyRef:
        name: secret-postgres
        key: username
  - name: DATABASE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: secret-postgres
        key: password
  - name: AWS_KEY
    valueFrom:
      secretKeyRef:
        name: secret-bucket
        key: key
  - name: AWS_SECRET
    valueFrom:
      secretKeyRef:
        name: secret-bucket
        key: secret
  - name: AWS_S3_BUCKET
    valueFrom:
      secretKeyRef:
        name: secret-bucket
        key: bucket
  - name: AWS_S3_ENDPOINT
    value: "{{ .Values.s3_endpoint }}"
  - name: INSTAGRAM_CLIENT_ID
    valueFrom:
      secretKeyRef:
        name: secret-instagram
        key: clientID
# ...
Is this possible to achieve with YAML, Helm, or Kubernetes?
So I found a solution with Helm named templates: https://github.com/kubernetes/helm/blob/master/docs/chart_template_guide/named_templates.md
I created a file templates/_env.yaml with the following content (the entries are indented so that, once rendered, they line up under the env: key in the deployment below):
{{ define "env" }}
        - name: NODE_ENV
          value: "{{ .Values.node_env }}"
        - name: BASEURL
          value: "{{ .Values.base_url }}"
        - name: REDIS_HOST
          valueFrom:
            secretKeyRef:
              name: secret-redis
              key: host
        - name: KUE_PREFIX
          value: "{{ .Values.kue_prefix }}"
        - name: DATABASE_NAME
          value: "{{ .Values.database_name }}"
        - name: DATABASE_HOST
          valueFrom:
            secretKeyRef:
              name: secret-postgres
              key: host
        - name: DATABASE_USER
          valueFrom:
            secretKeyRef:
              name: secret-postgres
              key: username
        - name: DATABASE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: secret-postgres
              key: password
        - name: AWS_KEY
          valueFrom:
            secretKeyRef:
              name: secret-bucket
              key: key
        - name: AWS_SECRET
          valueFrom:
            secretKeyRef:
              name: secret-bucket
              key: secret
        - name: AWS_S3_BUCKET
          valueFrom:
            secretKeyRef:
              name: secret-bucket
              key: bucket
        - name: AWS_S3_ENDPOINT
          value: "{{ .Values.s3_endpoint }}"
        - name: INSTAGRAM_CLIENT_ID
          valueFrom:
            secretKeyRef:
              name: secret-instagram
              key: clientID
{{ end }}
And here's how I use it in a templates/deployment.yaml file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: somedeployment
  # ...
spec:
  template:
    # ...
    metadata:
      name: somedeployment
    spec:
      # ...
      containers:
      - name: container-name
        image: someimage
        # ...
        env:
{{- template "env" . }}
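If you would rather not bake the indentation into the named template itself, Helm's include function (which returns the rendered template as a string) can be piped through nindent, so each call site controls its own indentation. A minimal sketch, assuming the same entries are defined flush-left in templates/_env.yaml (only two entries shown):

{{- define "env" -}}
- name: NODE_ENV
  value: "{{ .Values.node_env }}"
- name: REDIS_HOST
  valueFrom:
    secretKeyRef:
      name: secret-redis
      key: host
{{- end }}

and in each deployment:

        env:
          {{- include "env" . | nindent 10 }}

Unlike template, include can be used in a pipeline, which is why the indentation can be applied where the block is inserted rather than inside the definition.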
Have a look at ConfigMap. That allows configuration to be collected together in one resource and used in multiple deployments. No need to mess around with any templates.
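For the non-secret values, that could look something like the sketch below: one ConfigMap holds the shared variables and each deployment pulls it in with envFrom (the resource name shared-env is illustrative, not from the original post). Note that envFrom exposes the ConfigMap or Secret keys verbatim as variable names, so secret-backed values would either need matching key names or the individual secretKeyRef entries shown in the question.

apiVersion: v1
kind: ConfigMap
metadata:
  name: shared-env
data:
  NODE_ENV: production
  BASEURL: https://example.com
  KUE_PREFIX: q

Then, in each of the four deployments:

    spec:
      containers:
      - name: container-name
        image: someimage
        envFrom:
        - configMapRef:
            name: shared-env
        - secretRef:
            name: secret-postgres  # exposes the keys host, username, password as-is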