I have been reading about out-of-memory conditions on Linux, and the following paragraph from the malloc(3) man page got me thinking:
By default, Linux follows an optimistic memory allocation strategy. This means that when malloc() returns non-NULL there is no guarantee that the memory really is available. This is a really bad bug. In case it turns out that the system is out of memory, one or more processes will be killed by the infamous OOM killer. [...]
Considering that the operator new implementation will end up calling malloc at some point, are there any guarantees that new will actually throw on Linux? If there aren't, how does one handle this apparently undetectable error situation?
It depends: you can configure the kernel's overcommit behavior through the vm.overcommit_memory sysctl. In the default heuristic mode (0) and in always-overcommit mode (1), the kernel hands out address space it may not be able to back, so new can "succeed" and the process may be killed later when the memory is actually touched. In mode 2 (no overcommit), requests that exceed the commit limit fail at allocation time, so malloc returns NULL and new throws std::bad_alloc, as the standard requires.
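To check which mode a machine is running, here is a minimal C++ sketch that reads the current value from /proc/sys/vm/overcommit_memory (documented in proc(5)); switching modes, e.g. with sysctl -w vm.overcommit_memory=2, requires root and appears here only as a comment:

    #include <fstream>
    #include <iostream>

    int main() {
        // The kernel exposes the current overcommit policy here; see proc(5).
        // To change it (as root): sysctl -w vm.overcommit_memory=2
        std::ifstream f("/proc/sys/vm/overcommit_memory");
        int mode = -1;
        if (!(f >> mode)) {
            std::cerr << "could not read overcommit mode\n";
            return 1;
        }
        switch (mode) {
            case 0:  std::cout << "0: heuristic overcommit (the default)\n";              break;
            case 1:  std::cout << "1: always overcommit\n";                               break;
            case 2:  std::cout << "2: no overcommit; oversized requests fail up front\n"; break;
            default: std::cout << "unrecognized mode " << mode << "\n";                   break;
        }
    }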
Herb Sutter argued some years ago that this behavior is nonconforming to the C++ standard:
"On some operating systems, including specifically Linux, memory allocation always succeeds. Full stop. How can allocation always succeed, even when the requested memory really isn't available? The reason is that the allocation itself merely records a request for the memory; under the covers, the (physical or virtual) memory is not actually committed to the requesting process, with real backing store, until the memory is actually used.
"Note that, if new uses the operating system's facilities directly, then new will always succeed but any later innocent code like buf[100] = 'c'; can throw or fail or halt. From a Standard C++ point of view, both effects are nonconforming, because the C++ standard requires that if new can't commit enough memory it must fail (this doesn't), and that code like buf[100] = 'c' shouldn't throw an exception or otherwise fail (this might)."