I have 3 nodes in a Kubernetes cluster. I created a DaemonSet
and deployed it to all 3 nodes. The DaemonSet
created 3 pods, and they were running successfully. But for some reason, one of the pods failed.
I need to know how to restart this pod without affecting the other pods in the DaemonSet, and without creating a new DaemonSet deployment.
Thanks
If a Pod's status is Failed, Kubernetes will keep creating new Pods until it reaches the terminated-pod-gc-threshold in kube-controller-manager. This can leave many Failed Pods in the cluster, which need to be cleaned up.
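As a minimal cleanup sketch (the --field-selector flag is standard kubectl; add -n <namespace> if your DaemonSet is not in the current namespace):
# list leftover Failed pods in the current namespace
kubectl get pods --field-selector=status.phase=Failed
# delete them
kubectl delete pods --field-selector=status.phase=Failed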
kubectl delete pod <podname>
This will delete only that one pod, and the owning Deployment/StatefulSet/ReplicaSet/DaemonSet will schedule a new one in its place.
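For a DaemonSet specifically, a minimal sketch could look like this (the pod name my-daemonset-abcde and the label app=my-daemonset are placeholders, not values from your cluster):
# find the failed pod and the node it was running on
kubectl get pods -o wide
# delete only that pod; the DaemonSet controller recreates it on the same node
kubectl delete pod my-daemonset-abcde
# watch the replacement come up without touching the other two pods
kubectl get pods -l app=my-daemonset -w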
There are other possibilities to achieve what you want:
Just use the rollout restart command:
kubectl rollout restart deployment mydeploy
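Since your workload is a DaemonSet rather than a Deployment, the equivalent command is the following (mydaemonset is a placeholder for your DaemonSet's name; note that this performs a rolling restart of every pod in the DaemonSet, not just the failed one):
kubectl rollout restart daemonset mydaemonset
# optionally wait for the rollout to complete
kubectl rollout status daemonset mydaemonset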
You can set an environment variable, which forces the deployment's pods to restart:
kubectl set env deployment mydeploy DEPLOY_DATE="$(date)"
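The same trick works on a DaemonSet, since kubectl set env accepts any resource with a pod template (mydaemonset is again a placeholder):
kubectl set env daemonset mydaemonset DEPLOY_DATE="$(date)"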
You can scale your deployment to zero and then back to a positive value (this applies to Deployments; a DaemonSet has no replica count to scale):
kubectl scale deployment mydeploy --replicas=0
kubectl scale deployment mydeploy --replicas=1
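For a DaemonSet, a rough equivalent of "scale to zero and back" is to temporarily set a nodeSelector that matches no nodes and then remove it again; this is only a sketch, and the non-existing label and mydaemonset name are placeholders:
# no node carries this label, so all DaemonSet pods are removed
kubectl patch daemonset mydaemonset -p '{"spec":{"template":{"spec":{"nodeSelector":{"non-existing":"true"}}}}}'
# removing the selector brings the pods back on every node
kubectl patch daemonset mydaemonset -p '{"spec":{"template":{"spec":{"nodeSelector":null}}}}'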