I have a bunch of pods in Kubernetes which have completed (successfully or unsuccessfully), and I'd like to clean up the output of kubectl get pods. Here's what I see when I run kubectl get pods:
NAME                                           READY   STATUS             RESTARTS   AGE
intent-insights-aws-org-73-ingest-391c9384     0/1     ImagePullBackOff   0          8d
intent-postgres-f6dfcddcc-5qwl7                1/1     Running            0          23h
redis-scheduler-dev-master-0                   1/1     Running            0          10h
redis-scheduler-dev-metrics-85b45bbcc7-ch24g   1/1     Running            0          6d
redis-scheduler-dev-slave-74c7cbb557-dmvfg     1/1     Running            0          10h
redis-scheduler-dev-slave-74c7cbb557-jhqwx     1/1     Running            0          5d
scheduler-5f48b845b6-d5p4s                     2/2     Running            0          36m
snapshot-169-5af87b54                          0/1     Completed          0          20m
snapshot-169-8705f77c                          0/1     Completed          0          1h
snapshot-169-be6f4774                          0/1     Completed          0          1h
snapshot-169-ce9a8946                          0/1     Completed          0          1h
snapshot-169-d3099b06                          0/1     ImagePullBackOff   0          24m
snapshot-204-50714c88                          0/1     Completed          0          21m
snapshot-204-7c86df5a                          0/1     Completed          0          1h
snapshot-204-87f35e36                          0/1     ImagePullBackOff   0          26m
snapshot-204-b3a4c292                          0/1     Completed          0          1h
snapshot-204-c3d90db6                          0/1     Completed          0          1h
snapshot-245-3c9a7226                          0/1     ImagePullBackOff   0          28m
snapshot-245-45a907a0                          0/1     Completed          0          21m
snapshot-245-71911b06                          0/1     Completed          0          1h
snapshot-245-a8f5dd5e                          0/1     Completed          0          1h
snapshot-245-b9132236                          0/1     Completed          0          1h
snapshot-76-1e515338                           0/1     Completed          0          22m
snapshot-76-4a7d9a30                           0/1     Completed          0          1h
snapshot-76-9e168c9e                           0/1     Completed          0          1h
snapshot-76-ae510372                           0/1     Completed          0          1h
snapshot-76-f166eb18                           0/1     ImagePullBackOff   0          30m
train-169-65f88cec                             0/1     Error              0          20m
train-169-9c92f72a                             0/1     Error              0          1h
train-169-c935fc84                             0/1     Error              0          1h
train-169-d9593f80                             0/1     Error              0          1h
train-204-70729e42                             0/1     Error              0          20m
train-204-9203be3e                             0/1     Error              0          1h
train-204-d3f2337c                             0/1     Error              0          1h
train-204-e41a3e88                             0/1     Error              0          1h
train-245-7b65d1f2                             0/1     Error              0          19m
train-245-a7510d5a                             0/1     Error              0          1h
train-245-debf763e                             0/1     Error              0          1h
train-245-eec1908e                             0/1     Error              0          1h
train-76-86381784                              0/1     Completed          0          19m
train-76-b1fdc202                              0/1     Error              0          1h
train-76-e972af06                              0/1     Error              0          1h
train-76-f993c8d8                              0/1     Completed          0          1h
webserver-7fc9c69f4d-mnrjj                     2/2     Running            0          36m
worker-6997bf76bd-kvjx4                        2/2     Running            0          25m
worker-6997bf76bd-prxbg                        2/2     Running            0          36m
and I'd like to get rid of pods like train-204-d3f2337c. How can I do that?
You can do this a bit more easily now.

You can list all completed pods with:
kubectl get pod --field-selector=status.phase==Succeeded
And delete all completed pods by:
kubectl delete pod --field-selector=status.phase==Succeeded
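This selects only pods in the Succeeded phase. The train-* pods that show Error in the output above finished in the Failed phase, so they need a second pass using the same field-selector mechanism:

kubectl get pod --field-selector=status.phase==Failed

kubectl delete pod --field-selector=status.phase==Failed

Both commands operate on the current namespace only; add -n <namespace> or --all-namespaces if the pods live elsewhere.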
If these pods were created by a CronJob, you can use spec.failedJobsHistoryLimit and spec.successfulJobsHistoryLimit to limit how many failed and successful Jobs (and their pods) are kept around.
Example:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-cron-job
spec:
  schedule: "*/10 * * * *"
  failedJobsHistoryLimit: 1
  successfulJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
        ...
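If the Jobs are created directly rather than by a CronJob, the Job API's spec.ttlSecondsAfterFinished field (stable since Kubernetes 1.23) gives similar automatic cleanup, deleting a finished Job and its pods after a delay. A minimal sketch, with a hypothetical Job name and placeholder image:

apiVersion: batch/v1
kind: Job
metadata:
  name: my-train-job   # hypothetical name
spec:
  ttlSecondsAfterFinished: 3600   # delete the Job and its pods an hour after it finishes
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox   # placeholder image
        command: ["sh", "-c", "echo done"]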