My current configs are:
> cat /proc/sys/vm/panic_on_oom
0
> cat /proc/sys/vm/oom_kill_allocating_task
0
> cat /proc/sys/vm/overcommit_memory
1
but when I run a task, it's killed anyway.
> ./test/mem.sh
Killed
> dmesg | tail -2
[24281.788131] Memory cgroup out of memory: Kill process 10565 (bash) score 1001 or sacrifice child
[24281.788133] Killed process 10565 (bash) total-vm:12601088kB, anon-rss:5242544kB, file-rss:64kB
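I notice the dmesg line says "Memory cgroup out of memory", so the kill seems to come from a memory cgroup limit rather than the global vm.* settings above. Something like this should show which cgroup the shell is in and what its limit is (cgroup v1 paths; <cgroup> is a placeholder for whatever the first command reports):
# which memory cgroup is this shell running in?
cat /proc/self/cgroup
# cgroup v1: the hard limit of that cgroup (adjust the path to your hierarchy)
cat /sys/fs/cgroup/memory/<cgroup>/memory.limit_in_bytes
# on a cgroup v2 system the equivalent knob is memory.max under /sys/fs/cgroup/<cgroup>/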
My tasks are for scientific computing, which uses a lot of memory, so it seemed that overcommit_memory=1 would be the best choice.
Actually, I'm working on a data analysis project that needs more than 16 GB of memory, but I was asked to keep it to about 5 GB. It is probably impossible to meet this requirement by optimizing the program itself, because the project runs many sub-commands, and most of them do not expose options like Java's -Xms or -Xmx.
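One direction I'm considering (untested so far) is to wrap the whole pipeline in a single memory-limited cgroup instead of tuning every sub-command, for example with systemd-run on a cgroup v2 system; run_analysis.sh stands in for my real entry point:
# run the whole analysis inside a transient scope capped at roughly 5 GiB
systemd-run --scope -p MemoryMax=5G ./run_analysis.sh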
My project really has to run on an overcommitted system. Exactly as a3f said, it seems that my apps prefer to crash via xmalloc when a memory allocation fails.
> cat /proc/sys/vm/overcommit_memory
2
> ./test/mem.sh
./test/mem.sh: xmalloc: .././subst.c:3542: cannot allocate 1073741825 bytes (4295237632 bytes allocated)
I don't want to give up, although so many awful tests have left me exhausted. So please show me a way to the light ;)
A server runs the risk of crashing when it runs out of memory. To prevent it from reaching that critical state, the kernel includes a mechanism known as the OOM Killer, which starts killing non-essential processes so the machine can remain operational.
If, for example, a process crucial to your application gets killed in an out-of-memory situation, you have a few options: reduce the amount of memory the process asks for, disallow processes from overcommitting memory, or simply add more memory to the server.
When memory runs out, the kernel triggers the OOM killer to free memory by killing user-space processes. The same holds if the kernel itself keeps allocating with kmalloc until the machine's memory is exhausted: it first runs its own reclaim, and if necessary (and possible) it triggers the OOM killer.
In short, on Linux the Out of Memory Killer (OOM Killer) is the mechanism in charge of preventing processes from collectively exhausting the host's memory.
The OOM killer won't go away. If there is no memory, someone's got to pay. What you can do is set a limit after which memory allocations fail.
That's exactly what setting vm.overcommit_memory to 2 achieves.
From the docs:
The Linux kernel supports the following overcommit handling modes
2 - Don't overcommit. The total address space commit for the system is not permitted to exceed swap + a configurable amount (default is 50%) of physical RAM. Depending on the amount you use, in most situations this means a process will not be killed while accessing pages but will receive errors on memory allocation as appropriate.
Normally, the kernel will happily hand out virtual memory (overcommit). Only when you actually touch a page does the kernel have to back it with a real physical frame, and if it can't, a process has to be killed by the OOM killer to make space.
Disabling overcommit means that e.g. malloc(3) will return NULL if the kernel cannot commit the amount of memory requested. This makes things a bit more predictable, albeit limited (many applications allocate more than they will ever need).
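As a sketch, switching to mode 2 and checking the resulting commit limit could look like this (the ratio is just an example value):
# disable overcommit: commit limit becomes swap + overcommit_ratio% of physical RAM
sysctl -w vm.overcommit_memory=2
sysctl -w vm.overcommit_ratio=80   # example; the default is 50
# verify: compare CommitLimit against Committed_AS
grep -i commit /proc/meminfo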
The possible values of oom_adj range from -17 to +15. The higher the value, the more likely the associated process is to be killed by the OOM killer. If oom_adj is set to -17, the process is not considered for OOM killing at all.
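For example, to exempt a critical process from the OOM killer (1234 is a placeholder PID; newer kernels prefer oom_score_adj, which ranges from -1000 to +1000):
# legacy interface: never consider this process for OOM killing
echo -17 > /proc/1234/oom_adj
# modern equivalent
echo -1000 > /proc/1234/oom_score_adj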
But increasing RAM is the better choice; if increasing RAM is not possible, then add swap. One way to add swap is sketched below.
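A minimal sketch of adding a swap file (the 4 GiB size is just an example; pick what fits your disk):
# create, protect, format and enable a 4 GiB swap file
fallocate -l 4G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
# optionally make it permanent across reboots
echo '/swapfile none swap sw 0 0' >> /etc/fstab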