When I run the CronJob in Kubernetes, the cron reports success, but I am not getting the desired result:
```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: {{ $.Values.appName }}
  namespace: {{ $.Values.appName }}
spec:
  schedule: "* * * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: test
              image: image
              command: ["/bin/bash"]
              args: ["test.sh"]
          restartPolicy: OnFailure
```
Also, I am sharing test.sh:
```sh
#!/bin/sh
rm -rf /tmp/*.*
echo "remove done"
```
The cronjob runs successfully, but when I check the container, the files in the /tmp directory are not deleted.
You can check the status of the jobs directly. A CronJob just controls Kubernetes Jobs. Run `kubectl get jobs` and it will show you the completion status.
Kubernetes Jobs create transient pods that carry out the tasks assigned to them. CronJobs do the same, except they run those tasks on a predefined schedule. Jobs are essential in Kubernetes for running batch processes or significant ad-hoc operations.
Using the grep command, you can view the log to see the last time the specific script in the cron job was executed. If the cron job does not produce visible output, you need to check whether it actually ran at all.
One CronJob object is like one line of a crontab (cron table) file. It runs a job periodically on a given schedule, written in Cron format. All CronJob schedule: times are based on the timezone of the kube-controller-manager.
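To make the crontab analogy concrete, a `schedule: "* * * * *"` entry corresponds to a crontab line like the following (the command path here is purely illustrative):

```
* * * * * /path/to/test.sh
```

The five fields are minute, hour, day of month, month, and day of week; `* * * * *` therefore means "every minute".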
It spawns a new Job every 60 seconds according to the schedule, regardless of whether the previous run failed or succeeded. In this particular example it is configured to fail, because it tries to run a non-existent command:

```
$ kubectl get pods
NAME                     READY   STATUS              RESTARTS   AGE
hello-1587558720-pgqq9   0/1     Error               0          61s
hello-1587558780-gpzxl   0/1     ContainerCreating   0          1s
```
If this field is not specified, the jobs have no deadline. If the .spec.startingDeadlineSeconds field is set (not null), the CronJob controller measures the time between when a job is expected to be created and now.
To illustrate this concept further, suppose a CronJob is set to schedule a new Job every minute beginning at 08:30:00, and its startingDeadlineSeconds is set to 200. If the CronJob controller happens to be down for the same period as the previous example (08:29:00 to 10:21:00), the Job will still start at 10:22:00.
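A sketch of where the field sits in the manifest, using the current batch/v1 API (the name, image, and command are illustrative, not from the question):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello                      # illustrative name
spec:
  schedule: "* * * * *"
  startingDeadlineSeconds: 200     # a missed run is skipped if it cannot start within 200s of its scheduled time
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: hello
              image: busybox
              command: ["echo", "hello"]
          restartPolicy: OnFailure
```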
You need to have a persistent volume attached to both your pod and the cronjob you are using, so that the script can actually remove the files when it executes. You need to mount the volume and reference its path accordingly in your script. For adding Kubernetes cronjobs, kindly go through this link.
A cronjob runs in its own container; if you want to remove a file or directory that lives in another container, it won't work.
Your main container runs under a Deployment, while your Job or CronJob, when triggered, creates a new container (Pod) with its own separate filesystem and mounts.
If you want to achieve this scenario, you have to use a PVC with the ReadWriteMany access mode, so that multiple containers (Pods) can attach to a single PVC and share the filesystem.
That way your cronjob's container (Pod) starts with the existing PVC filesystem, and you can remove the directory from the Job or CronJob.
Mount the same PVC in both the cronjob container and the main container, and it will work.
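A minimal sketch of that setup, with illustrative names; it assumes the cluster's storage class actually supports ReadWriteMany (e.g. an NFS- or CephFS-backed provisioner):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data                # illustrative name
spec:
  accessModes:
    - ReadWriteMany                # required so the Deployment and CronJob Pods can mount it together
  resources:
    requests:
      storage: 1Gi

# Then, in BOTH the Deployment's and the CronJob's pod template:
#
#       volumes:
#         - name: shared-data
#           persistentVolumeClaim:
#             claimName: shared-data
#       containers:
#         - ...
#           volumeMounts:
#             - name: shared-data
#               mountPath: /data   # point test.sh at this path instead of the container-local /tmp
```

With both Pods mounting the same claim, the cron job's `rm` operates on the same files the main container writes.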
Change test.sh to:
```sh
#!/bin/sh
set -e
rm -rf /tmp/*.*
echo "remove done"
```
Without `-e`, your shell script returns the same status as its last command; in this case that's an `echo`, so it will always exit with status 0 (success). Using `set -e` makes the script abort and fail as soon as the `rm` command fails.
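The difference is easy to see with a quick shell check (the paths are deliberately nonexistent):

```shell
# Without set -e the script's exit status is that of its LAST command.
# rm fails here, but the echo succeeds, so the whole script exits 0.
sh -c 'rm /no/such/file 2>/dev/null; echo "remove done"'
echo "exit: $?"    # exit: 0 -- the rm failure is masked

# With set -e the script aborts as soon as rm fails; the echo never runs.
sh -c 'set -e; rm /no/such/file 2>/dev/null; echo "remove done"'
echo "exit: $?"    # non-zero
```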
Also, without any volume mounts, this cron job does not do anything meaningful. If you want to delete files from another container, you would need to run cron inside that container (or use a volume with ReadWriteMany).