
fork() failing with Out of memory error

The parent process fails with errno=12 (Out of memory) when it tries to fork a child. The parent process runs on a Linux 3.0 kernel (SLES 11). At the point of forking the child, the parent process has already used up around 70% of the RAM (180 GB of 256 GB). Is there any workaround for this problem?

The application is written in C++, compiled with g++ 4.6.3.

asked Mar 25 '13 by Anirudh Jayakumar

2 Answers

Maybe virtual memory overcommit is disabled on your system.

If it is disabled, then committed virtual memory cannot exceed the size of physical RAM plus swap. If it is enabled, then virtual memory can exceed RAM + swap.

When your process forks, your processes (parent and child) would account for 2 × 180 GB = 360 GB of virtual memory, which is more than your 256 GB of RAM (far too much if you don't have swap).

So, enable overcommit like this:

 echo 1 > /proc/sys/vm/overcommit_memory

It should help if the child process execve()s immediately, or frees its allocated memory before the parent writes too much to its own memory. Be careful, though: the out-of-memory killer may act if both processes keep using all the memory.
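
To illustrate the "execve immediately" point, here is a minimal sketch (my own illustration, not code from the answer; /bin/true is just a placeholder for whatever the child should run): the child replaces its image right away, so almost none of the parent's 180 GB of copy-on-write pages are ever duplicated.

    // Fork and exec immediately so the child never dirties the parent's
    // huge copy-on-write address space.
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>
    #include <cstdio>
    #include <cstdlib>

    int main() {
        pid_t pid = fork();
        if (pid < 0) {
            std::perror("fork");       // with strict accounting, ENOMEM shows up here
            return EXIT_FAILURE;
        }
        if (pid == 0) {
            // Child: exec right away; the copied address space is replaced
            // before any of its pages are written to.
            execl("/bin/true", "true", (char *)nullptr);
            _exit(127);                // only reached if execl() fails
        }
        int status = 0;
        waitpid(pid, &status, 0);      // parent waits for the child
        return WIFEXITED(status) ? WEXITSTATUS(status) : EXIT_FAILURE;
    }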

The proc(5) man page says:

/proc/sys/vm/overcommit_memory

This file contains the kernel virtual memory accounting mode. Values are:

0: heuristic overcommit (this is the default)
1: always overcommit, never check
2: always check, never overcommit

In mode 0, calls of mmap(2) with MAP_NORESERVE are not checked, and the default check is very weak, leading to the risk of getting a process "OOM-killed". Under Linux 2.4 any nonzero value implies mode 1. In mode 2 (available since Linux 2.6), the total virtual address space on the system is limited to (SS + RAM*(r/100)), where SS is the size of the swap space, and RAM is the size of the physical memory, and r is the contents of the file /proc/sys/vm/overcommit_ratio.
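
Under mode 2, that limit shows up as CommitLimit in /proc/meminfo, and Committed_AS shows how much is already committed. As a rough diagnostic (a sketch, not part of the original answer), you can compare the two to see whether a fork() that roughly doubles your commit charge would still fit:

    // Sketch: read CommitLimit and Committed_AS from /proc/meminfo.
    // Under overcommit mode 2, fork() fails with ENOMEM once the extra
    // commit charge would push Committed_AS past CommitLimit.
    #include <fstream>
    #include <iostream>
    #include <sstream>
    #include <string>

    int main() {
        std::ifstream meminfo("/proc/meminfo");
        std::string line;
        long long commit_limit = -1, committed = -1;
        while (std::getline(meminfo, line)) {
            std::istringstream iss(line);
            std::string key;
            long long value = 0;
            if (iss >> key >> value) {
                if (key == "CommitLimit:")  commit_limit = value;   // in kB
                if (key == "Committed_AS:") committed    = value;   // in kB
            }
        }
        std::cout << "CommitLimit:  " << commit_limit << " kB\n"
                  << "Committed_AS: " << committed    << " kB\n"
                  << "Headroom:     " << (commit_limit - committed) << " kB\n";
        return 0;
    }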

More information here: Overcommit Memory in SLES

answered by SKi

fork()-ing requires resources, since it copy-on-writes the writable pages of the process. Read the fork(2) man page again.

You could at least provide a huge temporary swap file. On some file system with enough space, create a huge file $SWAPFILE with:

  dd if=/dev/zero of=$SWAPFILE bs=1M count=256000
  mkswap $SWAPFILE
  swapon $SWAPFILE

Otherwise, you could design your program differently, e.g. by mmap-ing some big file (and munmap-ing it just before the fork, then mmap-ing it again afterwards), or, more simply, by starting a popen-ed shell (or a p2open-ed one, or one with explicitly made pipes to and from it; a multiplexing call à la poll would probably also be useful) at the beginning of your program, and later issuing commands to it.
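
Here is a minimal sketch of that "start a helper process early" idea (my own illustration, not code from the answer): a /bin/sh is popen-ed while the process is still small, so the fork() behind popen() is cheap, and commands are handed to it later once the process has grown huge. The one-way write pipe from popen() is an assumption made for brevity; p2open() or explicit pipe()s give you two-way traffic.

    #include <cstdio>
    #include <cstdlib>

    static FILE *helper_shell = nullptr;

    // Call this near the top of main(), while the address space is still
    // small, so the fork() behind popen() has little to duplicate.
    void start_helper_early() {
        helper_shell = popen("/bin/sh", "w");
        if (!helper_shell) { std::perror("popen"); std::exit(EXIT_FAILURE); }
    }

    // Later, when the process is huge, no new fork() is needed: just hand
    // the command line to the already-running shell.
    void run_in_helper(const char *command) {
        std::fprintf(helper_shell, "%s\n", command);
        std::fflush(helper_shell);
    }

    int main() {
        start_helper_early();
        // ... allocate the 180 GB here ...
        run_in_helper("echo hello from the helper shell");
        pclose(helper_shell);   // closes the pipe; the shell reads EOF and exits
        return 0;
    }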

Maybe we could help more if we had an idea of what your program is doing, why it consumes so much memory, and why and what it is forking...

Read Advanced Linux Programming for more.

PS.

If you fork just to run gdb to show the backtrace, consider simpler alternatives like recent GCC's libbacktrace or Wolf's libbacktrace...

answered by Basile Starynkevitch