I want to learn how to update secrets in worker pods without killing and recreating the deployment.
Currently the pods pull in secrets as env vars with:
env:
  - name: SECRET_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        key: secret_access_key
        name: secrets
but this only happens when they start up.
So if there is a need to change a secret I have to:
1. Update secrets.yaml with the new value
2. kubectl apply -f secrets.yaml
3. kubectl delete -f worker-deployment.yaml
4. kubectl apply -f worker-deployment.yaml
I really don't like steps 3 and 4, as they terminate jobs in progress.
What is a better workflow for updating env var secrets in place?
There is no way to do a "hot reload" of a pod's environment variables.
However, you do not need to delete and recreate the deployment to apply the new secret value; you only need to recreate the underlying pods. Some options are:
- kubectl delete pods to force the deployment to recreate them.
- Make a trivial change to the deployment's pod template to trigger a rolling update (e.g., change terminationGracePeriodSeconds from 30 to 31).
- kubectl rollout restart to do a rolling restart on the deployment (see the sketch after this list).†

† rollout restart is only available on Kubernetes v1.15+.
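For example, a minimal sketch of the rollout restart option, assuming the deployment defined in worker-deployment.yaml is named worker (substitute your actual deployment name):

# Apply the updated secret, then restart the pods in place
# without deleting the deployment.
kubectl apply -f secrets.yaml
kubectl rollout restart deployment/worker
# Optionally wait for the rolling restart to finish:
kubectl rollout status deployment/worker

Note that a rolling restart still terminates each old pod eventually (honoring terminationGracePeriodSeconds), so long-running jobs still need graceful shutdown handling.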
As already mentioned, what you want to do is not possible. However, Kubernetes offers an alternative: mounting ConfigMaps as Volumes. For example:
apiVersion: v1
kind: Pod
metadata:
  name: configmap-pod
spec:
  containers:
    - name: test
      image: busybox
      volumeMounts:
        - name: config-vol
          mountPath: /etc/config
  volumes:
    - name: config-vol
      configMap:
        name: log-config
        items:
          - key: log_level
            path: log_level
In this case, the log-config ConfigMap would be mounted as a Volume, and you could access the contents of its log_level entry as the file “/etc/config/log_level” inside the pod.
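The same mechanism works for Secrets, which is what the question actually uses. A minimal sketch, reusing the secrets name and secret_access_key key from the question (the pod and volume names are arbitrary):

apiVersion: v1
kind: Pod
metadata:
  name: secret-pod
spec:
  containers:
    - name: worker
      image: busybox
      volumeMounts:
        - name: secret-vol
          mountPath: /etc/secrets
          readOnly: true
  volumes:
    - name: secret-vol
      secret:
        secretName: secrets
        items:
          - key: secret_access_key
            path: secret_access_key

The value is then available as the file /etc/secrets/secret_access_key, and the kubelet refreshes it automatically when the Secret changes (unless the volume is mounted with subPath, which disables updates).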
Changes to the ConfigMap are reflected in the files on the volume, and your application can watch those files for changes using the appropriate functionality in your language, for example:
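A minimal sketch of such a watcher in shell, suitable for the busybox image above (the 10-second poll interval is an arbitrary choice):

#!/bin/sh
# Poll the mounted file and react whenever its contents change.
last=""
while true; do
  current=$(cat /etc/config/log_level)
  if [ "$current" != "$last" ]; then
    echo "log_level is now: $current"
    last="$current"
  fi
  sleep 10
done

Polling the file contents is more robust than watching the file with inotify here, because Kubernetes updates mounted ConfigMaps by swapping a symlink rather than rewriting the file in place.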