I am currently executing this command every 30 minutes via a bash script (on CentOS 6) to delete files that are around 1 hour old. The problem is that the find command is using 45% of my CPU at all times. Is there a way to optimise it? There are about 200k items in the cache at any point in time.
find /dev/shm/cache -type f -mmin +59 -exec rm -f {} \;
You can try running the process at a lower priority using nice:
nice -n 19 find ...
Another thing: it might not make a difference in performance, but to delete matching files with find, a simpler way is -delete instead of -exec:
find /dev/shm/cache -type f -mmin +59 -delete
... that is, if your version of find supports it (thanks @chepner for pointing it out), and modern versions do.
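For example, combining both suggestions with the command from the question (just an illustration of how they fit together):

# run at the lowest CPU priority, and let find delete the files itself instead of spawning rm
nice -n 19 find /dev/shm/cache -type f -mmin +59 -delete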
Your command is starting a new invocation of rm for each file that's found, which can be very expensive. You can use an alternate syntax that sends multiple arguments to rm, in batches as large as the OS allows. This is done by ending the command with + instead of ;
find /dev/shm/cache -type f -mmin +59 -exec rm -f {} +
You can also use the -delete option, as in janos's answer; it should be even more efficient because it doesn't have to run an external process at all. I'm showing this answer because it generalizes to other commands as well, which may not have equivalent options, e.g.
-exec grep foo {} +
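If you want to double-check what would be removed before running either variant, the same expression with no action simply prints the matching paths (find defaults to -print):

find /dev/shm/cache -type f -mmin +59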