Probably by the kernel, as suggested in this question. I would like to see why it was killed, e.g. the function in which the "assassination" took place. :)
Moreover, is there anything I can do to allow my program to execute normally?
Chronicle
My program executes properly. However, we encountered a big data set, 1,000,000 x 960 floats, and my laptop at home couldn't take it (it threw an std::bad_alloc).
Now I am in the lab, on a desktop with 9.8 GiB of RAM and a 3.00 GHz × 4 processor, i.e. more than twice the memory my laptop at home has.
At home, the data set could not even be loaded into the std::vector where the data is stored. Here in the lab, that step succeeded and the program continued with building a data structure.
That was the last time I heard from it:
Start building...
Killed
The desktop in the lab runs Debian 8. My program runs as expected for a subset of the data set, in particular 100,000 x 960 floats.
EDIT
strace output is finally available:
...
brk..
brk(0x352435000) = 0x352414000
mmap(NULL, 1048576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = -1 ENOMEM (Cannot allocate memory)
mmap(NULL, 134217728, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = 0x7f09c1563000
munmap(0x7f09c1563000, 44683264) = 0
munmap(0x7f09c8000000, 22425600) = 0
mprotect(0x7f09c4000000, 135168, PROT_READ|PROT_WRITE) = 0
...
mprotect(0x7f09c6360000, 8003584, PROT_READ|PROT_WRITE) = 0
+++ killed by SIGKILL +++
So this tells us I am out of memory, I guess.
The kill() system call sends a signal to a process; SIGKILL in particular forces immediate termination and cannot be caught, blocked, or ignored. A kill does not have to come from another user or process: here it is the kernel itself delivering the signal.
This is a result of the Linux Out Of Memory (OOM) killer terminating processes in a last-ditch effort to keep the system as a whole running when memory is exhausted.
In C++, a float is a single-precision (32-bit) floating-point number: http://en.wikipedia.org/wiki/Single-precision_floating-point_format
which means that you are allocating (without overhead) 3,840,000,000 bytes of data,
or roughly 3.58 GiB.
Let's safely assume that the vector's bookkeeping overhead is nothing compared to the data, and continue with this number.
This is a huge amount of data to build up in one go; Linux may conclude the process is a runaway (e.g. a memory leak) and protect itself by killing the application:
https://unix.stackexchange.com/questions/136291/will-linux-start-killing-my-processes-without-asking-me-if-memory-gets-short
I don't think this is an overcommit problem, since you are actually utilizing nearly half the machine's memory in a single application.
But perhaps, just for fun, consider this: are you building a 32-bit application? If so, you are getting close to the 2^32 bytes (4 GiB) of address space a 32-bit build can use.
So in case you have another large vector allocated... bum bum bum