We have an AKS cluster and sometimes we end up with an issue where a deployment needs a restart (e.g. cached data has been updated and we want to pick up the change, or the cache has become corrupt and needs to be rebuilt).
I've been using the approach of scaling the deployment to 0 and then scaling it back up using the commands below:
kubectl scale deployments/<deploymentName> --replicas=0
kubectl scale deployments/<deploymentName> --replicas=1
This does what I expect it to do, but it feels hacky, and it means no replicas of the deployment are running while the process takes place.
What's a better approach, both for a specific deployment and for all deployments?
1. Run a kubectl set env command to update the deployment by setting a DATE environment variable on the pod template with a null value ( =$() ). Because this modifies the pod template, the pods restart as soon as the deployment is updated.
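A sketch of that command, assuming a deployment named <deploymentName>; if you substitute $(date) for the empty $(), the variable gets a fresh value each time, so re-running the command triggers another restart:
kubectl set env deployments/<deploymentName> DATE=$()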
Rolling restart method: as of version 1.15, Kubernetes lets you perform a rolling restart of a deployment with kubectl rollout restart deployment <deploymentName>. The command shuts down and replaces the pods in your deployment one by one, so the application stays available while the restart happens.
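If you want to watch the restart as it progresses, kubectl can report the rollout status (using the same placeholder deployment name):
kubectl rollout status deployments/<deploymentName>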
To restart every deployment in a single namespace, you can loop over them. Just replace the namespace value after -n; the command traverses the list of deployments in that namespace and restarts each one:
kubectl get deployments -n <NameSpace Name> -o custom-columns=NAME:.metadata.name | grep -iv NAME | while read LINE; do kubectl rollout restart deployment $LINE -n <NameSpace Name>; done
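On recent kubectl versions you can skip the loop entirely: omitting the deployment name restarts all deployments in the namespace (a sketch, using the same namespace placeholder):
kubectl rollout restart deployment -n <NameSpace Name>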
If you have a strategy of RollingUpdate on your deployments, you can delete the pods; the Deployment will replace each deleted pod with a fresh one.
About the RollingUpdate strategy:
Users expect applications to be available all the time and developers are expected to deploy new versions of them several times a day. In Kubernetes this is done with rolling updates. Rolling updates allow Deployments' update to take place with zero downtime by incrementally updating Pods instances with new ones.
RollingUpdate config:
spec:
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
maxSurge: specifies the maximum number of Pods that can be created over the desired number of Pods.
maxUnavailable: specifies the maximum number of Pods that can be unavailable during the update process.
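For context, here is a minimal, hypothetical Deployment manifest with that strategy in place (all names, labels, and the image are placeholders). With maxSurge: 1 and maxUnavailable: 0, the new Pod must be created and become ready before the old one is terminated, so at least one replica is always serving:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <deploymentName>
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # allow one extra Pod during the update
      maxUnavailable: 0  # never drop below the desired replica count
  selector:
    matchLabels:
      app: <appLabel>
  template:
    metadata:
      labels:
        app: <appLabel>
    spec:
      containers:
      - name: <containerName>
        image: <image>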
Delete the pod:
kubectl delete pod <pod-name>
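You can also delete several pods at once by label selector (assuming the pods carry an app: <appLabel> label, as in the manifest sketch above). The ReplicaSet recreates them, but because they all terminate together this can cause a brief gap in availability:
kubectl delete pod -l app=<appLabel>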
Edit:
Also, you can restart the deployment's rollout, which will restart the pods but will also create a new revision of the deployment.
Ex: kubectl rollout restart deployments/<deployment-name>
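To see the new revision that this creates, you can inspect the rollout history (same placeholder name):
kubectl rollout history deployments/<deployment-name>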
How to restart all deployments in a cluster (multiple namespaces):
kubectl get deployments --all-namespaces | tail -n +2 | awk '{ cmd=sprintf("kubectl rollout restart deployment -n %s %s", $1, $2) ; system(cmd) }'