Why is the process getting killed at 4GB?

I have written a program which works on a huge data set. Both my CPU and OS (Ubuntu) are 64-bit, and I have 4GB of RAM. Using "top" (the %MEM field), I saw the process's memory consumption go up to around 87%, i.e. 3.4+ GB, and then it got killed.

I then checked how much memory a process can access using "ulimit -m", which comes out as "unlimited".

Now, since both the OS and CPU are 64-bit and a swap partition also exists, the OS should have used virtual memory, i.e. [ >3.4GB + yGB from swap space ] in total, and the process should have been killed only if it required more memory than that.

So, I have the following questions:

  1. How much memory can a process theoretically access on a 64-bit machine? My answer is 2^48 bytes.
  2. If less than 2^48 bytes of physical memory exist, then the OS should fall back on virtual memory, correct? (See the tiny sketch after this list for the virtual-vs-physical distinction.)
  3. If the answer to the above question is YES, then the OS should have used the swap space as well, so why did it kill the process without even using it? I don't think we have to make some specific system calls while coding the program for this to happen.
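
A minimal sketch of the virtual-vs-physical distinction (assuming Linux's default overcommit heuristic; the 6 GB figure is only an illustrative size above the 4 GB of RAM here):

    // virt_vs_phys.cpp -- reserving virtual address space vs. committing
    // physical pages (sketch; assumes Linux with the default
    // /proc/sys/vm/overcommit_memory heuristic)
    #include <cstddef>
    #include <cstdio>
    #include <cstdlib>

    int main() {
        // malloc only hands out virtual pages; nothing is committed yet,
        // so this can succeed even with far less physical RAM.
        std::size_t sz = 6UL << 30;  // 6 GB of *virtual* space
        char *p = static_cast<char*>(std::malloc(sz));
        std::printf("malloc(6 GB) %s\n", p ? "succeeded" : "failed");

        // Physical RAM (or swap) is consumed only when pages are touched;
        // touching all of them is what can trigger the OOM killer.
        // for (std::size_t off = 0; p && off < sz; off += 4096) p[off] = 1;

        std::free(p);
        return 0;
    }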

Please suggest.

asked Oct 23 '22 by Piyush Kansal

2 Answers

It's not only the data size that could be the reason. For example, run ulimit -a and check the max stack size. Did you get a kill reason? Set 'ulimit -c 20000' to get a core file; it shows you the reason when you examine it with gdb.
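
If you want to do that from inside the program, a sketch of the getrlimit/setrlimit equivalent of ulimit -c (note that RLIMIT_CORE counts in bytes, while the bash builtin counts in 512-byte blocks):

    // core_limit.cpp -- in-process equivalent of `ulimit -c` (sketch)
    #include <sys/resource.h>
    #include <cstdio>

    int main() {
        struct rlimit rl;

        // Read the current core-file size limit (what `ulimit -c` reports).
        if (getrlimit(RLIMIT_CORE, &rl) == 0)
            std::printf("core limit: soft=%llu hard=%llu\n",
                        (unsigned long long)rl.rlim_cur,
                        (unsigned long long)rl.rlim_max);

        // Raise the soft limit so a crash leaves a core file for gdb.
        rl.rlim_cur = rl.rlim_max;
        if (setrlimit(RLIMIT_CORE, &rl) != 0)
            std::perror("setrlimit");
        return 0;
    }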

answered Oct 27 '22 by ott--


Check with file and ldd that your executable is indeed 64-bit.

Check also the resource limits. From inside the process, you could use the getrlimit system call (and setrlimit to change them, when possible). From a bash shell, try ulimit -a. From a zsh shell, try limit.
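
A sketch of that, printing the limits most relevant to memory (RLIMIT_RSS is largely unenforced on modern Linux, but is shown for completeness):

    // limits.cpp -- what `ulimit -a` reports, queried from inside the
    // process via getrlimit (sketch)
    #include <sys/resource.h>
    #include <cstdio>

    static void show(const char *name, int res) {
        struct rlimit rl;
        if (getrlimit(res, &rl) != 0) { std::perror(name); return; }
        if (rl.rlim_cur == RLIM_INFINITY)
            std::printf("%-12s soft=unlimited\n", name);
        else
            std::printf("%-12s soft=%llu bytes\n", name,
                        (unsigned long long)rl.rlim_cur);
    }

    int main() {
        show("RLIMIT_AS",    RLIMIT_AS);     // total virtual address space
        show("RLIMIT_DATA",  RLIMIT_DATA);   // data segment (heap)
        show("RLIMIT_STACK", RLIMIT_STACK);  // stack size
        show("RLIMIT_RSS",   RLIMIT_RSS);    // resident set size
        return 0;
    }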

Check also that your process indeed eats the memory you believe it consumes. If its pid is 1234, you could try pmap 1234. From inside the process, you could read /proc/self/maps or /proc/1234/maps (which you can also read from a terminal). There are also /proc/self/smaps or /proc/1234/smaps, /proc/self/status or /proc/1234/status, and other files inside /proc/self/ ...
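
For example, a sketch reading /proc/self/status from inside the process (Linux-specific; VmSize is the virtual size, VmRSS the resident set, and VmSwap appears on recent kernels, all in kB):

    // vm_status.cpp -- print this process's memory figures from
    // /proc/self/status (sketch)
    #include <fstream>
    #include <iostream>
    #include <string>

    int main() {
        std::ifstream status("/proc/self/status");
        std::string line;
        while (std::getline(status, line)) {
            // Keep only the lines starting with the fields we care about.
            if (line.rfind("VmSize:", 0) == 0 ||
                line.rfind("VmRSS:", 0) == 0 ||
                line.rfind("VmSwap:", 0) == 0)
                std::cout << line << '\n';
        }
        return 0;
    }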

Check with free that you have the memory (and the swap space) you believe you have. You can add some temporary swap space with swapon /tmp/someswapfile (after initializing it with mkswap).
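
From inside a program, the Linux-specific sysinfo(2) call reports roughly what free shows; a sketch:

    // meminfo.cpp -- RAM and swap totals, as `free` reports them (sketch)
    #include <sys/sysinfo.h>
    #include <cstdio>

    int main() {
        struct sysinfo si;
        if (sysinfo(&si) != 0) { std::perror("sysinfo"); return 1; }
        // mem_unit is the size in bytes of the unit the other fields use.
        unsigned long long unit = si.mem_unit;
        std::printf("RAM : total %llu MB, free %llu MB\n",
                    (unsigned long long)si.totalram * unit >> 20,
                    (unsigned long long)si.freeram * unit >> 20);
        std::printf("swap: total %llu MB, free %llu MB\n",
                    (unsigned long long)si.totalswap * unit >> 20,
                    (unsigned long long)si.freeswap * unit >> 20);
        return 0;
    }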

I was routinely able, a few months (and a couple of years) ago, to run a 7Gb process (a huge cc1 compilation) under GNU/Linux/Debian/Sid/AMD64 on a machine with 8Gb RAM.

And you could try a tiny test program which, e.g., allocates several memory chunks of e.g. 32Mb each with malloc. Don't forget to write some bytes inside each (at least at every megabyte); see the sketch below.
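
A sketch of such a test program (32 MB chunks, one byte written per megabyte so the pages really get committed, looping until malloc fails or the process is killed):

    // eatmem.cpp -- the tiny test described above (sketch)
    #include <cstddef>
    #include <cstdio>
    #include <cstdlib>

    int main() {
        const std::size_t chunk = 32UL << 20;          // 32 MB per chunk
        for (int i = 1; ; ++i) {
            char *p = static_cast<char*>(std::malloc(chunk));
            if (!p) {                                  // malloc itself refused
                std::printf("malloc failed after %d chunks\n", i - 1);
                break;
            }
            for (std::size_t off = 0; off < chunk; off += 1UL << 20)
                p[off] = 1;                            // touch a byte per MB
            std::printf("%d chunks = %d MB\n", i, i * 32);
            std::fflush(stdout);                       // flush before a possible SIGKILL
        }
        return 0;
    }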

Standard C++ containers like std::map or std::vector are rumored to consume more memory than we usually expect; the sketch below gives a rough way to measure that.
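
A sketch using a stateless counting allocator (CountingAlloc is just an illustrative name; the numbers printed depend on your standard library implementation):

    // overhead.cpp -- rough peak-allocation comparison of vector vs. map
    #include <cstddef>
    #include <cstdio>
    #include <map>
    #include <vector>

    static std::size_t g_live = 0, g_peak = 0;  // bytes currently / maximally held

    template <typename T>
    struct CountingAlloc {
        using value_type = T;
        CountingAlloc() = default;
        template <typename U> CountingAlloc(const CountingAlloc<U>&) {}
        T* allocate(std::size_t n) {
            g_live += n * sizeof(T);
            if (g_live > g_peak) g_peak = g_live;
            return static_cast<T*>(::operator new(n * sizeof(T)));
        }
        void deallocate(T* p, std::size_t n) {
            g_live -= n * sizeof(T);
            ::operator delete(p);
        }
    };
    template <typename T, typename U>
    bool operator==(const CountingAlloc<T>&, const CountingAlloc<U>&) { return true; }

    int main() {
        const int n = 1000000;

        g_live = g_peak = 0;
        {
            std::vector<int, CountingAlloc<int>> v;
            for (int i = 0; i < n; ++i) v.push_back(i);
        }
        std::printf("vector<int>, %d ints: peak %zu bytes\n", n, g_peak);

        g_live = g_peak = 0;
        {
            std::map<int, int, std::less<int>,
                     CountingAlloc<std::pair<const int, int>>> m;
            for (int i = 0; i < n; ++i) m[i] = i;
        }
        std::printf("map<int,int>, %d entries: peak %zu bytes\n", n, g_peak);
        return 0;
    }

The map typically costs several times more per element, because every entry is a separate red-black tree node carrying pointers and color bits on top of the key/value pair.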

Buy more RAM if needed. It is quite cheap these days.

answered Oct 27 '22 by Basile Starynkevitch