Is it possible to restart pods automatically based on the time?
For example, I would like to restart the pods of my cluster every morning at 8:00 AM.
One approach: once you update an environment variable on the deployment, the pods automatically restart by themselves. Run the kubectl set env command to update the deployment, setting a DATE environment variable in the pod with a null value (=$()); see the example below.
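A minimal sketch (assuming a Deployment named <YOUR DEPLOYMENT NAME>; DATE is just an arbitrary variable name):

kubectl set env deployment/<YOUR DEPLOYMENT NAME> DATE=$()

Since this changes the pod template, the Deployment performs a rolling restart. To trigger a restart again later, set the variable to a value that actually changes, e.g. DATE=$(date +%s); setting it to the same value twice does nothing.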
If the pods are managed by a replication controller or ReplicaSet, you can simply kill (delete) the pods and they will be recreated automatically.
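For example, assuming the pods carry a label like app=<YOUR APP> (a hypothetical label), you can delete them all and let the controller recreate them:

kubectl delete pods -l app=<YOUR APP> -n <YOUR NAMESPACE>

Keep in mind this kills all matching pods at once, so expect a brief gap in service unless you delete them one at a time.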
Killing the pods by hand is quicker, but the simplest way to restart Kubernetes pods is the kubectl rollout restart command. The controller kills one pod at a time, relying on the ReplicaSet to scale up new pods until all of them are newer than the moment the controller resumed.
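For example (the same subcommand also works for daemonsets and statefulsets):

kubectl rollout restart deployment/<YOUR DEPLOYMENT NAME> -n <YOUR NAMESPACE>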
The spec of a Pod has a restartPolicy field with possible values Always, OnFailure, and Never. The default value is Always. The restartPolicy applies to all containers in the Pod.
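As a minimal sketch (pod name and image are placeholders), note that the field sits at the pod spec level rather than per container:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  restartPolicy: Always # applies to every container below
  containers:
    - name: app
      image: nginx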
Use a CronJob, but not to run your pods: use it to schedule a Kubernetes API command that restarts the deployment every day (kubectl rollout restart). That way, if something goes wrong, the old pods will not be down or removed.
Rollouts create new ReplicaSets and wait for them to be up before killing off the old pods and rerouting the traffic, so the service continues uninterrupted.
You have to set up RBAC so that the Kubernetes client running from inside the cluster has permission to make the needed calls to the Kubernetes API.
---
# Service account the client will use to reset the deployment,
# by default the pods running inside the cluster can do no such things.
kind: ServiceAccount
apiVersion: v1
metadata:
  name: deployment-restart
  namespace: <YOUR NAMESPACE>
---
# allow getting status and patching only the one deployment you want
# to restart
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-restart
  namespace: <YOUR NAMESPACE>
rules:
  - apiGroups: ["apps", "extensions"]
    resources: ["deployments"]
    resourceNames: ["<YOUR DEPLOYMENT NAME>"]
    verbs: ["get", "patch", "list", "watch"] # "list" and "watch" are only needed
                                             # if you want to use `rollout status`
---
# bind the role to the service account
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-restart
  namespace: <YOUR NAMESPACE>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: deployment-restart
subjects:
  - kind: ServiceAccount
    name: deployment-restart
    namespace: <YOUR NAMESPACE>
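Before wiring up the CronJob, you can sanity-check the binding by impersonating the service account (assuming the names above):

kubectl auth can-i patch deployments \
  --as=system:serviceaccount:<YOUR NAMESPACE>:deployment-restart \
  -n <YOUR NAMESPACE>

This should print "yes".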
And the cronjob specification itself:
apiVersion: batch/v1 # CronJob is GA in batch/v1 since Kubernetes 1.21;
                     # use batch/v1beta1 only on older clusters
kind: CronJob
metadata:
  name: deployment-restart
  namespace: <YOUR NAMESPACE>
spec:
  concurrencyPolicy: Forbid
  schedule: '0 8 * * *' # cron spec of time, here, 8 o'clock
  jobTemplate:
    spec:
      backoffLimit: 2 # this has very low chance of failing, as all this does
                      # is prompt kubernetes to schedule new replica set for
                      # the deployment
      activeDeadlineSeconds: 600 # timeout, makes most sense with
                                 # "waiting for rollout" variant specified below
      template:
        spec:
          serviceAccountName: deployment-restart # name of the service
                                                 # account configured above
          restartPolicy: Never
          containers:
            - name: kubectl
              image: bitnami/kubectl # probably any kubectl image will do,
                                     # optionally specify a version, but this
                                     # should not be necessary, as long as the
                                     # version of kubectl is new enough to
                                     # have `rollout restart`
              command:
                - 'kubectl'
                - 'rollout'
                - 'restart'
                - 'deployment/<YOUR DEPLOYMENT NAME>'
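To verify everything works without waiting for the schedule to fire, you can trigger the CronJob by hand (assuming the names above; the job name is arbitrary):

kubectl create job --from=cronjob/deployment-restart deployment-restart-test -n <YOUR NAMESPACE>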
Optionally, if you want the cronjob to wait for the deployment to roll out, change the cronjob command to:
command:
  - bash
  - -c
  - >-
    kubectl rollout restart deployment/<YOUR DEPLOYMENT NAME> &&
    kubectl rollout status deployment/<YOUR DEPLOYMENT NAME>
Another quick and dirty option, for a pod that has a restart policy of Always (which cron job pods are not supposed to use - see the pod template section of the CronJob docs), is a livenessProbe that simply tests the time and restarts the pod on a specified schedule.
E.g.: after startup, wait an hour, then check the hour every minute; if the hour is 3 (AM), fail the probe and restart, otherwise pass.
livenessProbe:
  exec:
    command: # exec does not run in a shell, so wrap the test in one
      - /bin/sh
      - -c
      - exit $(test $(date +%H) -eq 3 && echo 1 || echo 0)
  failureThreshold: 1
  initialDelaySeconds: 3600 # after startup, wait an hour
  periodSeconds: 60         # then check every minute
Time granularity is up to how you format the date and what you test ;)
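For example, if you want minute-level precision instead of restarting during a whole hour, compare hours and minutes together; a sketch, using a string comparison and an assumed 03:00 target:

exit $(test "$(date +%H%M)" = "0300" && echo 1 || echo 0)

With periodSeconds: 60, the probe fails at most once, during the 03:00 minute.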
Of course this does not work if you are already utilizing the liveness probe as an actual liveness probe ¯\_(ツ)_/¯