One of my sites runs extremely slowly, and when I use the `top` command I see that `rsyslogd` is using 170MB of memory. Is that normal? If not, how can I limit the amount of memory rsyslogd uses, or how often it runs?
Yes and no. Generally you are using file/disk queue mode. rsyslogd caches writes in a buffer and writes out a block at a time instead of inefficiently opening, writing, and closing the file line by line, which avoids lots of unnecessary small disk accesses.
The problem lies in the fact that it allocates roughly a 10MB buffer for every file it is logging to, so 20 log files means 200+MB. The number of log files can always be reduced, but it is also possible to reduce the buffer size if you are not running a RAID (big-block) or high-demand system. The documentation is here: http://www.rsyslog.com/doc/v8-stable/concepts/queues.html#disk-queues ; the `$<object>QueueMaxFileSize` directive reduces the size of each buffer. Setting it to 4MB can cut you down to around 70MB.
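As a sketch only: for the main message queue, the `<object>` placeholder in the directive above becomes `MainMsgQueue`, so a legacy-format config fragment might look like the following. The 4MB figure is the answer's suggestion, not a recommended default, so test it against your own log volume.

```
# /etc/rsyslog.conf (legacy directive format) -- illustrative sketch
# Keep the main message queue on disk, but cap each queue file at 4 MB
$MainMsgQueueType Disk
$MainMsgQueueMaxFileSize 4m
```

After editing the file, restart the daemon (e.g. `service rsyslog restart` or `systemctl restart rsyslog`, depending on your distribution) for the change to take effect.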
Sounds like you've got some process logging way too much info. You might just look at the logs, see who's doing all the writing, and see if you can get that process to stop. I've seen logs hit gigabyte sizes when some program has a recurring fault that causes it to log the same error message thousands of times a second. Seriously, check the logs and see who the heck is hammering rsyslogd.
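A quick way to find the heavy writers is to sort the log directory by size; the chatty process is usually obvious from the largest file's name. A minimal sketch (the script name and the default `/var/log` path are just illustrative):

```shell
#!/bin/sh
# biggest_logs.sh -- list log files sorted by size, largest first,
# so the process doing all the writing shows up at the top.
# Usage: ./biggest_logs.sh [logdir]   (defaults to /var/log)
LOGDIR="${1:-/var/log}"
du -h "$LOGDIR"/* 2>/dev/null | sort -rh | head -10
```

From there, `tail -f` the biggest file to see which message is repeating, and fix or rate-limit the offending program.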