 

Linux memory usage history

I had a problem where my server began failing some of its normal processes and checks because the server's memory was completely exhausted.

I looked through the log history and found that the processes that had been killed were Java processes.

I used the "top" command to see what processes were taking up the most memory right now(after the issue was fixed) and it was a Java process. So in essence, I can tell what processes are taking up the most memory right now.

What I want to know is whether there is a way to see which processes were taking up the most memory at the time the failures started happening. Perhaps Linux keeps a log of memory usage at particular times? I really have no idea, but it would be great if I could see that kind of detail.

asked Oct 24 '12 by Dan

3 Answers

@Andy has answered your question. However, I'd like to add that for future reference you should use a monitoring tool. A tool like that will show you what happened leading up to a crash, since you obviously cannot watch all your servers all the time. Hope it helps.
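
As a minimal stand-in until you pick a proper tool, even a cron job that snapshots memory usage every few minutes gives you history to look back at (the log path here is just an example):

    # /etc/cron.d/memlog -- every 5 minutes, append a timestamped memory snapshot
    */5 * * * * root ( date; free -m; ps aux --sort=-%mem | head -n 10 ) >> /var/log/memlog 2>&1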

answered Oct 06 '22 by Kevin

Are you saying the kernel OOM killer went off? What does the log in dmesg say? Note that you can constrain a JVM to use a fixed heap size, which means it will fail affirmatively when the heap is full instead of letting the kernel kill something else. But the general answer to your question is no: there's no way to reliably run anything at the time of an OOM failure, because the system is out of memory! At best, you can use a separate process to poll the process table and log process sizes, to catch memory leak conditions, etc.
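
For example (the heap size, jar name and log path below are only placeholders):

    # look for OOM-killer activity in the kernel log
    dmesg | grep -i -E 'out of memory|oom-killer|killed process'

    # cap the JVM heap so the Java process fails with an OutOfMemoryError
    # instead of driving the whole box out of memory
    java -Xmx512m -jar yourapp.jar

    # crude poller: log the top memory consumers once a minute
    while true; do
        { date; ps aux --sort=-%mem | head -n 10; } >> /var/log/procsizes.log
        sleep 60
    done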

answered Oct 06 '22 by Andy Ross


There is no history of memory usage in Linux by default, but you can get one with a simple command-line tool like sar.
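
For example, on Debian/Ubuntu sar ships with the sysstat package; once its data collection is enabled you can look back at memory usage for the day (package names and file paths may differ on your distribution):

    # install sar and enable periodic data collection
    apt-get install sysstat
    # on Debian/Ubuntu, set ENABLED="true" in /etc/default/sysstat, then:
    service sysstat restart

    # memory usage history for today (sampled every 10 minutes by default)
    sar -r

    # memory usage from an earlier day's data file (e.g. the 24th)
    sar -r -f /var/log/sysstat/sa24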

Regarding your problem with memory: if it was the OOM-killer that made a mess on the machine, then you have one good option to make sure it won't happen again (of course, after reducing the JVM heap size).

By default the Linux kernel allocates more memory than it really has. In some cases this can lead to the OOM-killer killing the most memory-hungry process when there is no memory left for kernel tasks. This behavior is controlled by the vm.overcommit_memory sysctl parameter.

So, you can try setting vm.overcommit_memory = 2 in /etc/sysctl.conf and then running sysctl -p.
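
Something like this (note that with mode 2 the actual commit limit also depends on vm.overcommit_ratio, which defaults to 50% of RAM plus swap, so you may want to review that value too):

    # add to /etc/sysctl.conf:
    #   vm.overcommit_memory = 2
    #   vm.overcommit_ratio = 80    # optional; percentage of RAM counted toward the limit

    # apply without a reboot, then verify
    sysctl -p
    sysctl vm.overcommit_memory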

This will forbid overcommitting and make it much less likely that the OOM-killer does anything nasty. You can also think about adding a little bit of swap space (if you don't have any already) and setting vm.swappiness to some really low value (5, for example; the default is 60). That way your application won't go into swap during normal operation, but if you are really short on memory it will start using swap temporarily, and you will be able to see that with free.
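
For example (the swap file size and path are just illustrative):

    # lower swappiness at runtime (add vm.swappiness = 5 to /etc/sysctl.conf to persist it)
    sysctl -w vm.swappiness=5

    # add a small swap file if the machine has none yet (1 GiB here)
    dd if=/dev/zero of=/swapfile bs=1M count=1024
    chmod 600 /swapfile
    mkswap /swapfile
    swapon /swapfile

    # check whether swap is actually being used
    free -m
    swapon -s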

WARNING: this can lead to processes receiving "Cannot allocate memory" errors if your server is overloaded in terms of memory. In that case:

  1. Try to restrict memory usage by the applications (see the sketch after this list)
  2. Move some of them to another machine
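
A minimal sketch of option 1, using a classic ulimit on the process's address space (cgroups or your init system's memory limits are the more robust route; the values and jar name are placeholders):

    # limit the virtual address space to ~2 GiB before starting the application
    # (ulimit -v takes KiB: 2097152 KiB = 2 GiB)
    ulimit -v 2097152
    java -Xmx1g -jar yourapp.jar
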
answered Oct 06 '22 by Pavel Kazhevets