I have a program that implements several heuristic search algorithms and several domains, designed to experimentally evaluate the various algorithms. The program is written in C++, built using the GNU toolchain, and run on a 64-bit Ubuntu system. When I run my experiments, I use bash's ulimit
command to limit the amount of virtual memory the process can use, so that my test system does not start swapping.
Certain algorithm/test instance combinations hit the memory limit I have defined. Most of the time, the program throws an std::bad_alloc exception, which is printed by the default handler, at which point the program terminates. Occasionally, rather than this happening, the program simply segfaults.
Why does my program occasionally segfault when out of memory, rather than reporting an unhandled std::bad_alloc and terminating?
A segfault will occur when a program attempts to operate on a memory location in a way that is not allowed (for example, attempting to write to a read-only location will result in a segfault). Segfaults can also occur when your program runs out of stack space.
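That last point is relevant to recursive search implementations: a depth-first search with no depth limit, for instance, consumes one stack frame per call until the limit set by ulimit -s is exceeded. A minimal sketch (the function name and frame size are illustrative):

    #include <cstddef>

    // Unbounded recursion, as in a depth-first search with no depth limit:
    // each call consumes stack space until the stack limit is exceeded
    // and the process receives SIGSEGV.
    std::size_t descend(std::size_t n) {
        char frame[4096];                  // padding so each frame is sizable
        frame[0] = static_cast<char>(n);
        return descend(n + 1) + frame[0];  // not a tail call, so not optimized away
    }

    int main() {
        return static_cast<int>(descend(0));
    }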
Segmentation faults are a common class of error in programs written in languages like C that provide low-level memory access and few to no safety checks. They arise primarily from errors in pointer use, particularly dereferencing invalid or illegal addresses.
Memory leaks by themselves would not cause a segmentation fault. However, memory leaks usually indicate sloppy code, and sloppy code is likely to contain other bugs that do cause segmentation faults.
std::bad_alloc is the exception thrown when the new operator fails to allocate the requested space. It is thrown by the standard definitions of operator new (single-object allocation) and operator new[] (array allocation) when they fail to allocate the requested storage.
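A minimal sketch of that behavior (the allocation size is arbitrary, chosen to be large enough to fail):

    #include <iostream>
    #include <new>

    int main() {
        try {
            // Deliberately huge request to force an allocation failure.
            char* p = new char[1ULL << 62];
            delete[] p;
        } catch (const std::bad_alloc& e) {
            std::cerr << "allocation failed: " << e.what() << '\n';
        }
    }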
One reason might be that Linux overcommits memory by default. Requesting memory from the kernel appears to succeed, but later on, when you actually start using the memory, the kernel notices "Oh crap, I'm running out of memory", and invokes the out-of-memory (OOM) killer, which selects some victim process and kills it.
For a description of this behavior, see http://lwn.net/Articles/104185/
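If you want to observe the difference on your own system, a sketch along these lines can show it. The 64 GiB size is an assumption; pick something larger than your machine's physical RAM, and note that if ulimit -v is in effect you will instead see the up-front failure path:

    #include <cstring>
    #include <iostream>
    #include <new>

    int main() {
        // Assumed size: choose something larger than physical RAM.
        const std::size_t size = 64ULL * 1024 * 1024 * 1024; // 64 GiB
        try {
            char* buf = new char[size];  // with overcommit, this may "succeed"
            std::memset(buf, 1, size);   // touching the pages forces real allocation,
                                         // which can summon the OOM killer
            delete[] buf;
        } catch (const std::bad_alloc&) {
            // Seen instead when overcommit is off or ulimit -v is in effect.
            std::cerr << "allocation failed up front\n";
        }
    }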
It could be some code using nothrow new and not checking the return value before dereferencing the result.
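A minimal sketch of that failure mode (the function name and size parameter are illustrative):

    #include <cstddef>
    #include <new>

    void grow(std::size_t n) {
        int* p = new (std::nothrow) int[n]; // returns nullptr on failure instead of throwing
        if (p == nullptr) {
            return; // without this check, the write below would segfault
        }
        p[0] = 42;
        delete[] p;
    }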
Or some code could be catching the exception but neither handling it nor rethrowing it, then carrying on with a pointer that was never assigned.
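A sketch of how a swallowed std::bad_alloc turns into a segfault (the allocation size is arbitrary):

    #include <iostream>
    #include <new>

    int main() {
        char* buf = nullptr;
        try {
            buf = new char[1ULL << 62]; // deliberately huge; the assignment never happens
        } catch (const std::bad_alloc&) {
            std::cerr << "allocation failed\n"; // swallowed: buf is still nullptr
        }
        buf[0] = 'x'; // dereferencing the null pointer segfaults here
        return 0;
    }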