If you want to delete all jobs in the current namespace (not just the ones created by "hello"), you can delete them all at once with kubectl delete jobs --all.
Delete the job with kubectl (e.g. kubectl delete jobs/pi or kubectl delete -f ./job.yaml). When you delete the job using kubectl, all the pods it created are deleted too.
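For example, assuming a Job named pi (the --cascade=orphan flag requires a reasonably recent kubectl; older versions spelled it --cascade=false):

# Delete the Job and, by default, the Pods it created
kubectl delete job pi
# Delete only the Job object, leaving its Pods behind for inspection
kubectl delete job pi --cascade=orphan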
You can now set history limits, or disable history altogether, so that failed or successful CronJobs are not kept around indefinitely. See the CronJob documentation for details.
To set the history limits:
The .spec.successfulJobsHistoryLimit and .spec.failedJobsHistoryLimit fields are optional. These fields specify how many completed and failed jobs should be kept. By default, they are set to 3 and 1 respectively. Setting a limit to 0 corresponds to keeping none of the corresponding kind of jobs after they finish.
The config with 0 limits would look like:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  successfulJobsHistoryLimit: 0
  failedJobsHistoryLimit: 0
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
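To try it out, you could apply the manifest and confirm that no Job objects accumulate (the filename here is just an assumption for illustration):

kubectl apply -f hello-cronjob.yaml
# After a few scheduled runs, the job list should stay empty:
kubectl get jobs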
This is possible from Kubernetes 1.12 (alpha) with ttlSecondsAfterFinished. An example from Clean Up Finished Jobs Automatically:
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-with-ttl
spec:
  ttlSecondsAfterFinished: 100
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
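Assuming your cluster has the alpha TTLAfterFinished feature gate enabled, something like this should show the Job being removed about 100 seconds after it completes (filename assumed):

kubectl apply -f pi-with-ttl.yaml
# Watch the job run to completion, then disappear ~100s later
kubectl get jobs --watch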
I've found the following to work.
To remove failed jobs:
kubectl delete job $(kubectl get jobs | awk '$3 ~ 0' | awk '{print $1}')
To remove completed jobs:
kubectl delete job $(kubectl get jobs | awk '$3 ~ 1' | awk '{print $1}')
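Note that these commands assume the older kubectl get jobs column layout (NAME, DESIRED, SUCCESSFUL, AGE). On newer clusters the columns are NAME, COMPLETIONS, DURATION, AGE, so a rough equivalent for completed jobs might be:

# Matches jobs whose COMPLETIONS column reads 1/1; adjust for multi-completion jobs
kubectl delete job $(kubectl get jobs --no-headers | awk '$2 == "1/1" {print $1}')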
Another way using a field-selector:
kubectl delete jobs --field-selector status.successful=1
That could be executed in a cronjob, similar to the other answers.
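Presumably the inverse selector works for failed jobs, though I haven't verified it on every version:

kubectl delete jobs --field-selector status.successful=0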
The following sets up a dedicated service account, my-sa-name, with just enough permissions, plus a CronJob that uses it to clean up completed jobs:
# 1. Create a service account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-sa-name
  namespace: default
---
# 2. Create a role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: my-completed-jobs-cleaner-role
rules:
- apiGroups: ["batch"]  # Jobs live in the batch API group, not the core ("") group
  resources: ["jobs"]
  verbs: ["list", "delete"]
---
# 3. Attach the role to the service account
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-completed-jobs-cleaner-rolebinding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: my-completed-jobs-cleaner-role
subjects:
- kind: ServiceAccount
  name: my-sa-name
  namespace: default
---
# 4. Create a cronjob (with a crontab schedule) using the service account to check for completed jobs
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: jobs-cleanup
spec:
  schedule: "*/30 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: my-sa-name
          containers:
          - name: kubectl-container
            # I'm using bitnami kubectl, because the suggested kubectl image didn't have the field-selector option
            image: bitnami/kubectl:latest
            command: ["sh", "-c", "kubectl delete jobs --field-selector status.successful=1"]
          restartPolicy: Never
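Assuming all four manifests are saved together in one file, say jobs-cleanup.yaml (name chosen here for illustration):

kubectl apply -f jobs-cleanup.yaml
# Verify the CronJob is scheduled:
kubectl get cronjob jobs-cleanup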
I'm using wernight/kubectl's kubectl image and scheduled a cron deleting anything that is completed or 2-9 days old (so I have 2 days to review any failed jobs). It runs every 30 minutes, so I'm not accounting for jobs that are 10+ days old:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cleanup
spec:
  schedule: "*/30 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: kubectl-runner
            image: wernight/kubectl
            # $4 is the AGE column (delete jobs 2-9 days old); $3 is the SUCCESSFUL column (delete completed jobs)
            command: ["sh", "-c", "kubectl get jobs | awk '$4 ~ /[2-9]d$/ || $3 ~ 1' | awk '{print $1}' | xargs kubectl delete job"]
          restartPolicy: Never
I recently built a Kubernetes operator to do this task. After deployment it will monitor the selected namespace and delete completed jobs/pods if they completed without errors/restarts.
https://github.com/lwolf/kube-cleanup-operator
Using jsonpath:
kubectl delete job $(kubectl get job -o=jsonpath='{.items[?(@.status.succeeded==1)].metadata.name}')
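Presumably the same approach can select failed jobs via status.failed (an untested variant of the command above):

kubectl delete job $(kubectl get job -o=jsonpath='{.items[?(@.status.failed==1)].metadata.name}')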