I have a logrotate config on Ubuntu 16.04 that is meant to rotate my logs to gz daily. The config is:
/opt/dcm4chee/server/default/log/*.log {
    daily
    missingok
    rotate 5
    compress
    notifempty
    create 0640 dcm4chee dcm4chee
    sharedscripts
    copytruncate
}
It correctly produces the gzipped logs:
server.log.1.gz
...
server.log.5.gz
However, it also sporadically produces a bunch of unwanted "backup" files, which cause runaway disk usage over time (we are operating on VMs with limited disk space):
server.log.1-2018063006.backup
...
server.log.1-2018081406.backup
This completely defeats my original purpose of capping disk usage by rotating and compressing a finite number of logs. How do I stop logrotate from generating these 'backup' files entirely? If that means losing a few lines of logging, so be it.
I have been unable to find any documentation on this behaviour. Currently a crontab job deletes these files periodically, but that doesn't feel like the 'right' way to do things.
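For reference, the stopgap cleanup the crontab job runs looks roughly like this (a sketch: the 7-day retention window and the script layout are my own choices, and the path comes from the config above):

```shell
#!/bin/sh
# Remove stray logrotate ".backup" files older than 7 days from a log dir.
# The default path and the 7-day window are assumptions; adjust to taste.
cleanup_backups() {
    logdir="${1:-/opt/dcm4chee/server/default/log}"
    find "$logdir" -name '*.backup' -mtime +7 -delete
}
```

Saved as a script and invoked daily from cron, e.g. `0 3 * * * /usr/local/bin/cleanup-backups.sh`. It papers over the symptom rather than fixing the cause, which is why I am asking.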
I ran into the same issue and found out that it is caused by duplicate log files.
In my case I am rotating some nginx logs using the create method. Occasionally, while logrotate is creating the new log file, nginx is still writing logs, which leads to errors like this:
error: destination /[example path]/access.log already exists, renaming to /[example path]/access.log-2018122810.backup
so it keeps making tons of ".backup" files that consume disk space.
After some research I couldn't find a good way to stop all the nginx processes cleanly, so I fixed it temporarily by adding copytruncate
to my logrotate.d config. That seems to solve the issue, at the cost of possibly losing a few log lines.
Hope there is a better solution~
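The relevant change is dropping create in favour of copytruncate: with copytruncate the original file is truncated in place and never moved, so there is nothing for logrotate to re-create, and the two directives are effectively mutually exclusive. A sketch of the resulting config (the nginx path is a placeholder for my setup, not the asker's):

```
/var/log/nginx/*.log {
    daily
    rotate 5
    missingok
    notifempty
    compress
    sharedscripts
    copytruncate
}
```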