I tried to delete a ReplicationController
with 12 pods and saw that some of the pods were stuck in Terminating
status.
My Kubernetes cluster consists of one control plane node and three worker nodes installed on Ubuntu virtual machines.
What could be the reason for this issue?
NAME        READY   STATUS        RESTARTS   AGE
pod-186o2   1/1     Terminating   0          2h
pod-4b6qc   1/1     Terminating   0          2h
pod-8xl86   1/1     Terminating   0          1h
pod-d6htc   1/1     Terminating   0          1h
pod-vlzov   1/1     Terminating   0          1h
A pod that has been deleted can remain in Terminating status for more than a few seconds. This can happen because the pod has a finalizer associated with it that is not completing, or because the pod is not responding to termination signals.
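To check whether a finalizer is the cause, you can inspect the pod's metadata; here the pod name and namespace are placeholders:

kubectl get pod <PODNAME> --namespace <NAMESPACE> -o jsonpath='{.metadata.finalizers}'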
You can restart pods in Kubernetes with the rollout restart command. Running rollout restart restarts the pods one by one without impacting the deployment (for example, deployment nginx-deployment). Afterwards, run kubectl get pods to view the running pods.
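For example, assuming a deployment named nginx-deployment in the current namespace, the two commands look like this:

kubectl rollout restart deployment nginx-deployment
kubectl get pods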
Terminated. A container in the Terminated state began execution and then either ran to completion or failed for some reason. When you use kubectl to query a Pod with a container that is Terminated , you see a reason, an exit code, and the start and finish time for that container's period of execution.
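You can see this information with kubectl describe (the pod name and namespace are placeholders); the container's State section shows the reason, exit code, and start and finish times:

kubectl describe pod <PODNAME> --namespace <NAMESPACE>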
You can use the following command to forcefully delete the pod:
kubectl delete pod <PODNAME> --grace-period=0 --force --namespace <NAMESPACE>
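If the pod is stuck because of a finalizer that is not completing, one possible approach (a sketch only; clearing finalizers skips their cleanup logic, so use it with care) is to remove the finalizers before deleting:

kubectl patch pod <PODNAME> --namespace <NAMESPACE> -p '{"metadata":{"finalizers":null}}'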