Why does new[-1] generate a segfault, while new[-2] throws bad_alloc?

I tried to test the bad_alloc exception by passing negative arguments to new[]. When I pass small negative numbers I get what I hoped for: a bad_alloc. However, when I pass -1, I can see that my object is constructed thousands of times (I print a static counter in the constructor) and then the application terminates with a segfault.

new[] converts the signed integer to size_t, so -1 becomes the maximum value of size_t, -2 becomes the maximum minus 1, and so on.

So why does new[] throw an exception when it receives some other huge number, but attempt the allocation when it receives the maximum of size_t? What is the difference between 1111...1 and 1111...0 for new[]? :)

Thanks in advance!
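For reference, a minimal sketch of the kind of test I ran (the Probe type and its counter are illustrative stand-ins for my class; note that modern C++11-and-later implementations typically throw std::bad_array_new_length, which derives from bad_alloc, for a negative count, so reproducing the segfault may require an older toolchain):

    #include <cstdio>
    #include <new>

    // Illustrative stand-in for the class under test: prints a running
    // count from its constructor.
    struct Probe {
        static int count;
        Probe() { std::printf("constructed #%d\n", ++count); }
    };
    int Probe::count = 0;

    int main() {
        int n = -1;  // runtime value, so the conversion to size_t happens at run time
        try {
            Probe *p = new Probe[n];
            delete[] p;
        } catch (const std::bad_alloc &e) {  // also catches std::bad_array_new_length
            std::printf("caught: %s\n", e.what());
        }
        return 0;
    }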

asked Mar 13 '12 by flyjohny

1 Answer

Here's my wild guess:

In many implementations, the allocator places some metadata next to the allocated region (for example, the size of the allocation). So you are, in effect, allocating more than you asked for.

Let's assume size_t is 32 bits, i.e. the program is compiled for a 32-bit target.


When you do:

    int *array = new int[-1];

The -1 converts to 4294967295; multiplied by sizeof(int) = 4 bytes, it wraps around to 4294967292. If the allocator implementation then puts 4 bytes of metadata next to the allocated region, the actual size becomes:

4294967292 + 4 bytes = 0 bytes (after overflow)

So 0 bytes are actually allocated.

When the program then constructs the elements, it writes past that 0-byte allocation. This can appear to work for a while (hence the thousands of constructor calls you see) until it runs off the mapped pages and segfaults.


Now let's say you do:

    int *array = new int[-2];

The -2 converts to 4294967294; multiplied by 4 bytes, it wraps to 4294967288. Append 4 bytes of metadata and you get 4294967288 + 4 = 4294967292.

When the allocator requests 4294967292 bytes from the OS, the request is denied, so it throws bad_alloc.


So basically, it's possible that -1 and -2 make the difference between whether or not the requested size overflows after the allocator appends its metadata.
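Under those assumptions (32-bit size_t, sizeof(int) == 4, and a hypothetical 4 bytes of per-allocation metadata; real allocators differ), the wraparound can be checked with plain unsigned arithmetic:

    #include <cstdint>
    #include <cstdio>

    int main() {
        const uint32_t META = 4;  // assumed metadata size; implementation-specific

        for (int32_t n : {-1, -2}) {
            uint32_t count = (uint32_t)n;   // -1 -> 4294967295, -2 -> 4294967294
            uint32_t bytes = count * 4;     // * sizeof(int), wraps modulo 2^32
            uint32_t total = bytes + META;  // allocator adds metadata, may wrap again
            std::printf("n = %d: bytes = %u, bytes + meta = %u\n",
                        (int)n, bytes, total);
        }
        return 0;
    }

This prints bytes + meta = 0 for -1 (a zero-byte request that succeeds) and 4294967292 for -2 (a near-4-GiB request that fails).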

answered by Mysticial