
Restart pods when configmap updates in Kubernetes?

People also ask

Do we need to restart pod after ConfigMap change?

Updating a ConfigMap consumed by a Kubernetes Pod generally requires a restart of the Pod. A possible approach to auto-updating ConfigMap content inside a pod without a restart is to mount its content as a volume.
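For example, a minimal sketch of a Pod that mounts a ConfigMap as a volume (the names my-app, app-config and the mount path are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: app
      image: my-app:latest            # placeholder image
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config      # projected files are refreshed here on kubelet sync
  volumes:
    - name: config-volume
      configMap:
        name: app-config              # the ConfigMap whose content should follow updates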

Does Kubernetes restart pods?

As of Kubernetes 1.15, you can do a rolling restart of all pods for a deployment without taking the service down.
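For example, assuming a Deployment named foo:

kubectl rollout restart deployment/foo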

Can ConfigMap be updated?

Mounted ConfigMaps are updated automatically. The kubelet checks whether the mounted ConfigMap is fresh on every periodic sync. However, the kubelet uses its local cache for getting the current value of the ConfigMap.


The current best solution to this problem (referenced deep in https://github.com/kubernetes/kubernetes/issues/22368, linked in the sibling answer) is to use Deployments and treat your ConfigMaps as immutable.

When you want to change your config, create a new ConfigMap with the changes you want to make, and point your deployment at the new ConfigMap. If the new config is broken, the Deployment will refuse to scale down your working ReplicaSet. If the new config works, then your old ReplicaSet will be scaled to 0 replicas and deleted, and new pods will be started with the new config.

Not quite as quick as just editing the ConfigMap in place, but much safer.
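A minimal sketch of that workflow, using hypothetical names foo-config-v1, foo-config-v2 and app.properties:

# create the replacement ConfigMap under a new, versioned name
kubectl create configmap foo-config-v2 --from-file=app.properties

# edit the Deployment manifest to reference foo-config-v2 instead of
# foo-config-v1 (in its volume or envFrom section), then apply it to
# start a normal rolling update
kubectl apply -f deployment.yaml

# once the new ReplicaSet is healthy, clean up the old ConfigMap
kubectl delete configmap foo-config-v1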


Signalling a pod on config map update is a feature in the works (https://github.com/kubernetes/kubernetes/issues/22368).

You can always write a custom pid1 that notices the configmap has changed and restarts your app.

You can also, for example, mount the same ConfigMap in two containers, expose an HTTP health check in the second container that fails if the hash of the ConfigMap contents changes, and use that as the liveness probe of the first container (containers in a pod share the same network namespace). The kubelet will restart your first container for you when the probe fails.
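A rough sketch of that setup, assuming a hypothetical config-hash-sidecar image that serves /healthz with 200 until the hash of the mounted files changes:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-config-watch
spec:
  volumes:
    - name: config
      configMap:
        name: app-config                 # hypothetical ConfigMap name
  containers:
    - name: app                          # the actual workload
      image: my-app:latest               # placeholder image
      volumeMounts:
        - name: config
          mountPath: /etc/config
      livenessProbe:                     # reaches the sidecar via the shared network namespace
        httpGet:
          path: /healthz
          port: 8080
        periodSeconds: 10
    - name: config-watcher               # hypothetical sidecar: returns 200 until the hash of
      image: config-hash-sidecar:latest  # the mounted ConfigMap content changes, then fails
      volumeMounts:
        - name: config
          mountPath: /etc/config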

Of course, if you don't care about which nodes the pods are on, you can simply delete them and the replication controller will "restart" them for you.
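For example, assuming the pods carry a label app=foo:

kubectl delete pod -l app=foo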


The best way I've found to do it is to run Reloader.

It allows you to define ConfigMaps or Secrets to watch; when they are updated, a rolling update of your deployment is performed. Here's an example:

You have a deployment foo and a ConfigMap called foo-configmap. You want to roll the pods of the deployment every time the configmap is changed. You need to run Reloader with:

kubectl apply -f https://raw.githubusercontent.com/stakater/Reloader/master/deployments/kubernetes/reloader.yaml

Then specify this annotation in your deployment:

kind: Deployment
metadata:
  annotations:
    configmap.reloader.stakater.com/reload: "foo-configmap"
  name: foo
...

From the Helm 3 docs:

Oftentimes ConfigMaps or Secrets are injected as configuration files in containers. Depending on the application, a restart may be required should those be updated with a subsequent helm upgrade, but if the deployment spec itself didn't change, the application keeps running with the old configuration, resulting in an inconsistent deployment.

The sha256sum function can be used together with the include function to ensure a deployment's template section is updated if another spec changes:

kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}
[...]

In my case, for some reason, $.Template.BasePath didn't work but $.Chart.Name did:

spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: admin-app
      annotations:
        checksum/config: {{ include (print $.Chart.Name "/templates/" $.Chart.Name "-configmap.yaml") . | sha256sum }}