
Program terminating with std::bad_alloc

I am running a C++ program which dies with std::bad_alloc at arbitrary points that depend on the input specified. Here are some observations about the program:

  • for shorter runs (the running time depends on the input), the program completes normally. The problem comes only for larger runs.
  • the program does not have any detectable memory leaks. This was checked with Valgrind/Memcheck for smaller runs. Moreover, my entire code does not have any pointers (all dynamic allocations are done by the libraries, e.g., in std::vector and std::string; it is the allocation inside these library classes which fails), so memory leaks are extremely unlikely.
  • several objects are allocated in loops and then moved to containers. Several of these objects are intended to be alive until almost the end of the program.
  • I suspected heap fragmentation could be an issue (see "C++ program dies with std::bad_alloc, BUT valgrind reports no memory leaks"), but I am on a 64-bit system with a 64-bit compiler (specifically Linux with g++), and "Heap fragmentation in 64 bit land" leads me to believe heap fragmentation cannot be an issue on 64-bit systems.

Is there anything else I should try? Any particular tools that could help? Any other suggestions?

UPDATE: It finally turned out that the virtual memory had been limited earlier with ulimit -v. I had forgotten about this, hence the memory exhaustion. Setting it back to unlimited fixed the problem.

asked Aug 18 '13 by r.v

2 Answers

std::bad_alloc means that you have requested more memory than there is available.

You can have situations where a program has no leak, but still genuinely runs out of memory:

vector<long> v;
long n = 0;
for(;;)
{
   v.push_back(n++);   // no leak: every allocation is still reachable, yet memory is eventually exhausted
}

will eventually exhaust all available memory in whatever machine you have - but it's not leaking - all memory is accounted for in the vector. Obviously, ANY container can be made to do the exact same thing, vector, list, map, doesn't really matter.

Valgrind only finds instances where you "abandon" allocations, not where you are filling the system with currently reachable memory.

What LIKELY is happening is a slower form of the above - you are storing more and more in some container. It may be something you are caching, or something you are not removing when you thought you had removed it.

Watch the amount of memory the application is actually using in some monitoring program ("top" in Linux/Unix, "Task Manager" in Windows) and see if it actually grows. If that is the case, then you need to figure out what is growing - for a large program, that can be tricky (and some things perhaps SHOULD grow, others shouldn't...)
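If you want to watch this from inside the process instead, here is a minimal sketch, assuming Linux's /proc interface (the VmSize and VmRSS fields are the ones the kernel reports); call it periodically from the suspect loop and see whether the numbers keep climbing:

#include <fstream>
#include <iostream>
#include <string>

// Sketch: print this process's virtual size and resident set size
// by reading /proc/self/status (Linux-specific).
void print_memory_usage()
{
    std::ifstream status("/proc/self/status");
    std::string line;
    while (std::getline(status, line))
    {
        if (line.compare(0, 7, "VmSize:") == 0 || line.compare(0, 6, "VmRSS:") == 0)
            std::cout << line << '\n';
    }
}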

It is of course also possible that you suddenly get some bad calculation, e.g. asking for a negative number of elements in T* p = new T[elements]; - this would cause bad_alloc, since elements is converted to an unsigned type, and negative numbers converted to unsigned are HUGE.
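As a hedged illustration of why a negative count is so deadly (elements here is just a stand-in for a bad calculation, and the exact exception type depends on the compiler and language standard):

#include <cstdio>
#include <cstddef>

int main()
{
    int elements = -1;   // pretend a calculation went wrong
    std::size_t requested = static_cast<std::size_t>(elements);
    // On a 64-bit system the conversion yields 18446744073709551615 elements -
    // far more than any machine can supply, so new T[elements] fails
    // (with std::bad_alloc, or std::bad_array_new_length in C++11 and later).
    std::printf("new T[%d] effectively asks for %zu elements\n", elements, requested);
}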

If you can catch the bad_alloc in a debugger, that sort of thing is usually pretty easy to spot, because the large amount requested by new will be quite obvious.

Catching the exception in the debugger should in general help, although it is of course possible that you are just allocating memory for a small string when it goes wrong; if you do have something that leaks, it's not unusual that such a small allocation is the one that finally fails.
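If attaching a debugger is inconvenient, a rough alternative is to wrap the suspect allocation and log the requested size before re-throwing. This is only a sketch; make_buffer and the element type are placeholders:

#include <cstdio>
#include <new>
#include <vector>

// Sketch: report how much was being requested when the allocation failed,
// then re-throw so the failure still propagates.
std::vector<double> make_buffer(std::size_t elements)
{
    try
    {
        return std::vector<double>(elements);
    }
    catch (const std::bad_alloc&)
    {
        std::fprintf(stderr, "bad_alloc requesting %zu elements (%zu bytes)\n",
                     elements, elements * sizeof(double));
        throw;
    }
}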

If you are using a flavour of Unix, you could also speed up the error-finding by setting the amount of memory the application is allowed to use to a smaller size, using ulimit -m size (in kilobytes) or ulimit -v size.
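The same limit can also be imposed from inside the program with setrlimit (POSIX). This is just a sketch, with the 512 MiB figure chosen arbitrarily:

#include <sys/resource.h>
#include <cstdio>

int main()
{
    // Cap the virtual address space at roughly 512 MiB - comparable to
    // running the program under "ulimit -v 524288".
    rlimit lim;
    lim.rlim_cur = 512UL * 1024 * 1024;
    lim.rlim_max = 512UL * 1024 * 1024;
    if (setrlimit(RLIMIT_AS, &lim) != 0)
    {
        std::perror("setrlimit");
        return 1;
    }
    // ... run the suspect code here; it will hit std::bad_alloc much sooner.
    return 0;
}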

answered Oct 01 '22 by Mats Petersson

std::bad_alloc may also mean that you are requesting a negative amount of data, even when there is enough memory in the machine.

This case happens easily on my 64-bit Linux machine when I use a regular signed int (which is still 32 bits) instead of a long int (64 bits) to specify an array count, and I multiply two numbers that are too large to get the final count. The result of the multiplication quietly overflows once it exceeds 2,147,483,647 and thus can turn negative.

Say for example you want to allocate 100 million points in a 21-dimensional space. No problem. Count is 2,100,000,000. Now increase the dimension size to 22, and it falls off a cliff. This is easy to verify with printf's:

int N = 100000000;
int D = 22;
int count = N * D;                 // 32-bit signed multiplication overflows (undefined behaviour; wraps negative in practice)
printf("count = %'d\n", count);    // the ' flag needs a locale with digit grouping, e.g. setlocale(LC_ALL, "")

gives

count = -2,094,967,296

and the std::bad_alloc kicks in because the requested memory count is negative.

Ed.: I note in the comments that this seems not to be a reproducible result: after rebooting the machine, new[count] now gives std::bad_array_new_length instead. That is, the code is still incorrect and breaks, but the exception reported is different from before. Don't do this in either case.
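For completeness, here is a sketch of the fix: doing the arithmetic in a 64-bit type keeps the count honest (the variable names simply mirror the example above):

#include <cstdio>
#include <cstdint>

int main()
{
    std::int64_t N = 100000000;   // 100 million points
    std::int64_t D = 22;          // 22 dimensions
    std::int64_t count = N * D;   // 2,200,000,000 - fits easily in 64 bits
    std::printf("count = %lld\n", static_cast<long long>(count));
    // new double[count] would now ask for the true amount (~17.6 GB of doubles)
    // and only throw std::bad_alloc if the machine genuinely lacks the memory.
    return 0;
}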

answered Oct 01 '22 by DragonLord