I am running a C++ program that dies with std::bad_alloc at arbitrary points, depending on the input specified. One observation about the program: the only dynamic memory allocation is done via std::vector and std::string (it is the allocation inside these library classes which fails), so memory leaks are extremely unlikely.

Is there anything else I should try? Any particular tools that could help? Any other suggestions?
UPDATE: It finally turned out that the virtual memory had been limited earlier through ulimit -v. I had forgotten about this, and hence ran into memory exhaustion. Setting it back to unlimited fixed the problem.
std::bad_alloc means that you have requested more memory than is available.
You can have situations where a program has no leak, but still genuinely runs out of memory:
#include <vector>

int main()
{
    std::vector<long> v;
    long n = 0;
    for (;;)
    {
        v.push_back(n++);  // every element is still reachable, yet memory keeps growing
    }
}
This will eventually exhaust all available memory on whatever machine you have - but it's not leaking: all the memory is accounted for in the vector. Obviously, ANY container can be made to do the exact same thing - vector, list, map, it doesn't really matter.
Valgrind only finds instances where you "abandon" allocations, not where you are filling the system with currently reachable memory.
What is LIKELY happening is a slower form of the above: you are storing more and more in some container. It may be something you are caching, or something you are not removing when you thought you had removed it.
Watch the amount of memory the application is actually using in some monitoring program ("top" in Linux/Unix, "Task Manager" in Windows) and see if it actually grows. If it does, then you need to figure out what is growing - for a large program, that can be tricky (and some things perhaps SHOULD grow, while others shouldn't...).
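You can also have the program report its own memory use at suspect points. A minimal sketch, assuming Linux (it reads the VmRSS line from /proc/self/status; print_rss is a hypothetical helper name):

#include <cstdio>
#include <cstring>

// Print this process's resident set size, as reported by /proc/self/status.
void print_rss()
{
    FILE* f = std::fopen("/proc/self/status", "r");
    if (!f) return;
    char line[256];
    while (std::fgets(line, sizeof line, f))
    {
        if (std::strncmp(line, "VmRSS:", 6) == 0)
            std::printf("%s", line);  // e.g. "VmRSS:    123456 kB"
    }
    std::fclose(f);
}

Call it before and after the phases you suspect; a number that climbs steadily between calls points at the container that is growing.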
It is of course also possible that you suddenly get some bad calculation, e.g. asking for a negative number of elements in T* p = new T[elements]; - this causes bad_alloc, since elements is converted to an unsigned type, and a negative number converted to unsigned becomes HUGE.
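A small sketch of that conversion, assuming a 64-bit size_t:

#include <cstddef>
#include <cstdio>

int main()
{
    int elements = -5;         // the result of some bad calculation
    std::size_t n = elements;  // the same implicit conversion new T[elements] performs
    std::printf("%zu\n", n);   // prints 18446744073709551611 on a typical 64-bit system
}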
If you can catch the bad_alloc in a debugger, that sort of thing is usually pretty easy to spot, because the large amount requested by new will be quite obvious.
Catching the exception in the debugger should in general help, although it is of course possible that you are just allocating memory for a small string when it goes wrong. If you do have something that leaks, it's not unusual that this is what is allocating when it goes wrong.
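In gdb, for example, catch throw breaks at the point where an exception is thrown. If a debugger is not an option, you can also catch the exception yourself and log what you can; a minimal sketch, assuming a C++11 compiler (std::bad_array_new_length derives from std::bad_alloc, so one handler covers both):

#include <cstdio>
#include <new>

int main()
{
    try
    {
        long n = -1;            // bogus element count from some miscalculation
        long* p = new long[n];  // the conversion to unsigned makes the request enormous
        delete[] p;
    }
    catch (const std::bad_alloc& e)
    {
        std::printf("allocation failed: %s\n", e.what());  // break here, or log and rethrow
    }
}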
If you are using a flavour of Unix, you could also speed up the error-finding by restricting the amount of memory the application is allowed to use, with ulimit -m size (in kilobytes) or ulimit -v size (also in kilobytes).
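The same restriction can be applied from inside the program with the POSIX setrlimit call, which is what ulimit -v manipulates (note that RLIMIT_AS is specified in bytes, not kilobytes). A minimal sketch, assuming Linux:

#include <sys/resource.h>

int main()
{
    // Cap the address space at 256 MiB so exhaustion is reached quickly.
    rlimit lim;
    lim.rlim_cur = 256L * 1024 * 1024;
    lim.rlim_max = 256L * 1024 * 1024;
    setrlimit(RLIMIT_AS, &lim);

    // ... run the suspect code here; allocations beyond the cap throw std::bad_alloc ...
}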
std::bad_alloc may possibly also mean that you are requesting a negative amount of data, even when there is enough memory in the machine.
This happens easily on my 64-bit Linux machine when I use a regular signed int (which is still 32-bit) instead of a long int (64 bits) to specify an array count, and multiply two numbers that are too large to get the final count. The result of the multiplication quietly overflows above 2.147 billion and can thus turn negative.
Say for example you want to allocate 100 million points in a 21-dimensional space. No problem. Count is 2,100,000,000. Now increase the dimension size to 22, and it falls off a cliff. This is easy to verify with printf's:
int N = 100000000;
int D = 22;
int count = N * D;  // 100,000,000 * 22 = 2,200,000,000, which does not fit in a 32-bit int
printf("count = %'d\n", count);  // %' (digit grouping) is a glibc extension
gives
count = -2,094,967,296
and std::bad_alloc kicks in because the requested memory count is negative.
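One fix, as noted above, is to do the multiplication in a type wide enough for the result before using it as an allocation count; a minimal sketch, assuming a 64-bit long as on LP64 Linux:

#include <cstdio>

int main()
{
    int N = 100000000;
    int D = 22;
    long count = static_cast<long>(N) * D;  // widen one operand, so the multiply happens in 64 bits
    std::printf("count = %ld\n", count);    // prints 2200000000, as intended
}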
Ed.: I note in comments that this seems to be an irreproducible result, as new[count] is now giving std::bad_array_new_length after rebooting the machine. That is, the code is still incorrect and breaks, but the error message given is different than before. Don't do this in either case.