
Is it normal for "rsyslogd" to use 170MB of memory?

Tags: linux, bash

One of my sites runs extremely slowly, and the top command shows that rsyslogd is using 170MB of memory. Is that normal?

If not, how can I limit the amount of memory rsyslogd uses, or how often it runs?

omg asked May 26 '09


2 Answers

Yes and no. Generally you are using file/disk queue mode. It caches the writes to a buffer and writes out a block at a time instead of an inefficient line-by-line open-write-close cycle, reducing unnecessary small disk accesses.

The problem lies in the fact that it allocates a 10MB buffer for every file it's logging: 20 log files means 200+MB. The number of log files can always be reduced, but it is also possible to reduce the buffer size if you are not running a RAID (big-block) or high-demand system. The documentation is here: http://www.rsyslog.com/doc/v8-stable/concepts/queues.html#disk-queues ; use "$<object>QueueMaxFileSize" to reduce the size of each buffer. Dropping to 4MB buffers can cut you down to around 70MB.
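For example, here is a minimal sketch of what that could look like in /etc/rsyslog.conf, using the legacy directive syntax from the linked queue documentation (the 4m value and the catch-all output rule are illustrative assumptions, not something from the question):

    # Hypothetical rsyslog.conf excerpt: cap each queue's buffer file at 4MB.
    # $MainMsgQueueMaxFileSize applies to the main message queue;
    # $ActionQueueMaxFileSize applies to per-action (per-output) queues
    # and must appear before the action it should affect.
    $MainMsgQueueMaxFileSize 4m
    $ActionQueueMaxFileSize 4m

    # Illustrative output rule; real configs will differ.
    *.* /var/log/messages

After editing, restart the daemon (for example, service rsyslog restart on SysV-style systems) so the new limits take effect.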

ppostma1 answered Sep 22 '22


Sounds like you've got some process logging way too much info. You might just look at the logs, see who's doing all the writing, and see if you can get it to stop. I've seen logs hit gigabyte sizes when some program has a recurring fault that causes it to log the same error message thousands of times a second. Seriously, check the logs and see who the heck is hammering rsyslogd.
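A minimal sketch of one way to do that from a shell, assuming a Debian-style /var/log/syslog (both paths are assumptions; adjust to your layout):

    # Find the largest log files (sizes in KB blocks); the flooded one is usually obvious.
    du -s /var/log/* | sort -rn | head

    # Count the most-repeated messages in the suspect file, blanking the leading
    # "Mon DD HH:MM:SS" timestamp fields so identical messages group together.
    awk '{ $1=$2=$3=""; print }' /var/log/syslog | sort | uniq -c | sort -rn | head

The process name attached to the top repeated lines is usually the one to go fix.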

Robert S. Barnes answered Sep 23 '22