First of all, using `delete` for anything allocated with `new[]` is undefined behaviour according to the C++ standard. In Visual C++ 7 such pairing can lead to one of two consequences.
If the type `new[]`'ed has a trivial constructor and destructor, VC++ simply uses `new` instead of `new[]`, and using `delete` for that block works fine: `new` just calls "allocate memory" and `delete` just calls "free memory".
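A minimal illustration of this case (still formally undefined behaviour; "works fine" only describes what such an implementation happens to do):

```cpp
int main() {
    // Undefined behaviour per the standard, but on an implementation that
    // treats new[] of a trivially-destructible type as a plain allocation
    // (as described above for VC++7), this tends to "work":
    int* p = new int[10];
    delete p;            // wrong form; here it maps to a plain "free memory"

    int* q = new int[10];
    delete[] q;          // the correct pairing, always
    return 0;
}
```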
If the type `new[]`'ed has a non-trivial constructor or destructor, the above trick can't be done: VC++7 has to invoke exactly the right number of destructors. So it prepends the array with a `size_t` storing the number of elements. Now the address returned by `new[]` points at the first element, not at the beginning of the block. So if `delete` is used, it only calls the destructor for the first element and then calls "free memory" with an address different from the one returned by "allocate memory"; this leads to some error indication inside HeapFree(), which I suspect refers to heap corruption.
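A rough emulation of that layout (my own sketch using malloc, not VC++7's actual code):

```cpp
#include <cstdlib>
#include <cstddef>
#include <new>

// A type with a non-trivial destructor forces the "array cookie".
struct T { ~T() {} };

T* emulated_new_array(std::size_t n) {
    // Block layout: [ size_t count | element 0 | element 1 | ... ]
    void* block = std::malloc(sizeof(std::size_t) + n * sizeof(T));
    if (!block) throw std::bad_alloc();
    *static_cast<std::size_t*>(block) = n;  // store the element count
    T* first = reinterpret_cast<T*>(static_cast<char*>(block) + sizeof(std::size_t));
    for (std::size_t i = 0; i < n; ++i)
        new (first + i) T;                  // construct each element
    return first;  // points at element 0, NOT at the start of the block
}

void emulated_delete_array(T* first) {
    char* block = reinterpret_cast<char*>(first) - sizeof(std::size_t);
    std::size_t n = *reinterpret_cast<std::size_t*>(block);
    while (n--) (first + n)->~T();  // destroy every element
    std::free(block);               // free the address malloc() returned
}

// By contrast, a plain `delete first` would destroy one element and then
// pass `first` itself -- an address the allocator never handed out -- to
// the heap, which is what trips HeapFree().
```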
Yet here and there one can read false statements that using `delete` after `new[]` leads to a memory leak. I suspect that anything on the scale of heap corruption is much more important than the fact that the destructor is called for the first element only, and that the destructors that were never called might not have freed heap-allocated sub-objects.
How could using `delete` after `new[]` possibly lead only to a memory leak on some C++ implementation?
The `delete[]` operator deallocates memory and calls destructors for an array of objects created with `new[]`. Using `delete` on a pointer returned by `new[]`, or `delete[]` on a pointer returned by `new`, results in undefined behavior.
Any time you allocate an array of objects via `new` (usually with the `[n]` in the new expression), you must use `[]` in the delete statement. This syntax is necessary because there is no syntactic difference between a pointer to a thing and a pointer to an array of things (something we inherited from C), as the snippet below illustrates.
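```cpp
struct T { int x; };

void example() {
    T* one  = new T;      // pointer to a single T
    T* many = new T[10];  // pointer to the first of ten Ts -- same static type!
    delete one;           // correct form for the single object
    delete[] many;        // correct form for the array; the types give the
                          // compiler no way to tell the two pointers apart
}
```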
The code being described calls `operator new[]` to allocate memory for 10 `string` objects, then calls the default `string` constructor for each array element. In the same way, when the `delete[]` operator is used on an array, it calls a destructor for each array element and then calls `operator delete[]` to deallocate the memory.
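That code is not shown above; a minimal reconstruction of the usual example would be:

```cpp
#include <string>

int main() {
    // operator new[] allocates one block for 10 strings, then the
    // default std::string constructor runs for each element.
    std::string* ps = new std::string[10];

    // delete[] runs ~string() on each element, then releases the
    // block via operator delete[].
    delete[] ps;
    return 0;
}
```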
Memory leaks occur when new memory is allocated dynamically and never deallocated. In C programs, new memory is allocated by the malloc or calloc functions, and deallocated by the free function.
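For example (C-style allocation, equally valid in C++):

```cpp
#include <cstdlib>

void leaky() {
    void* p = std::malloc(64);  // memory allocated dynamically...
    (void)p;                    // ...and never passed to std::free():
                                // those 64 bytes leak until the process exits
}
```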
Suppose I'm a C++ compiler, and I implement my memory management like this: I prepend every block of reserved memory with the size of the memory, in bytes. Something like this:
```
| size | data ... |
         ^
         pointer returned by new and new[]
```
Note that, in terms of memory allocation, there is no difference between `new` and `new[]`: both just allocate a block of memory of a certain size.
Now how will `delete[]` know the size of the array, in order to call the right number of destructors? Simply divide the `size` of the memory block by `sizeof(T)`, where `T` is the type of the elements of the array.
Now suppose I implement `delete` as simply one call to the destructor, followed by the freeing of those `size` bytes. Then the destructors of the subsequent elements will never be called, which leaks the resources allocated by those elements. Yet, because I do free `size` bytes (not `sizeof(T)` bytes), no heap corruption occurs.
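Putting that hypothetical implementation into code (a sketch under the stated assumptions; every name here is invented for illustration):

```cpp
#include <cstdlib>
#include <cstddef>
#include <new>

// Hypothetical scheme from above: every block is prefixed with its size
// in bytes. Not any real compiler's implementation.
void* hypo_allocate(std::size_t bytes) {
    void* block = std::malloc(sizeof(std::size_t) + bytes);
    if (!block) throw std::bad_alloc();
    *static_cast<std::size_t*>(block) = bytes;  // prepend the size
    return static_cast<char*>(block) + sizeof(std::size_t);
}

std::size_t hypo_size(void* p) {  // read back the stored size
    return reinterpret_cast<std::size_t*>(p)[-1];
}

void hypo_free(void* p) {  // free the whole block, size prefix included
    std::free(static_cast<char*>(p) - sizeof(std::size_t));
}

struct T { ~T() {} };  // some type with a non-trivial destructor

T* hypo_new_array(std::size_t n) {  // plays the role of `new T[n]`
    T* first = static_cast<T*>(hypo_allocate(n * sizeof(T)));
    for (std::size_t i = 0; i < n; ++i) new (first + i) T;
    return first;
}

void hypo_delete_array(T* first) {  // plays the role of `delete[] p`
    std::size_t n = hypo_size(first) / sizeof(T);  // recover the count
    while (n--) (first + n)->~T();
    hypo_free(first);
}

void hypo_delete(T* p) {  // plays the role of `delete p`, misapplied to an array:
    p->~T();              // only element 0 is destroyed -- whatever the other
    hypo_free(p);         // elements own leaks -- yet the full block is freed
}                         // correctly, so the heap itself stays intact
```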
The fairy tale about mixing `new[]` and `delete` allegedly causing a memory leak is just that: a fairy tale. It has absolutely no footing in reality. I don't know where it came from, but by now it has acquired a life of its own and survives like a virus, propagating by word of mouth from one beginner to another.
The most likely rationale behind this "memory leak" nonsense is that, from the innocently naive point of view, the difference between `delete` and `delete[]` is that `delete` is used to destroy just one object, while `delete[]` destroys an array of objects ("many" objects). The naive conclusion usually derived from this is that the first element of the array will be destroyed by `delete`, while the rest will persist, thus creating the alleged "memory leak". Of course, any programmer with at least a basic understanding of typical heap implementations would immediately see that the most likely consequence is heap corruption, not a "memory leak".
Another popular explanation for the naive "memory leak" theory is that, since the wrong number of destructors gets called, the secondary memory owned by the objects in the array does not get deallocated. This might be true, but it is obviously a very forced explanation, which bears little relevance in the face of the much more serious problem of heap corruption.
In short, mixing different allocation functions is one of those errors that lead to solid, unpredictable and very practical undefined behavior. Any attempt to impose concrete limits on the manifestations of this undefined behavior is just a waste of time and a sure sign of a lack of basic understanding.
Needless to add, `new`/`delete` and `new[]`/`delete[]` are in fact two independent memory management mechanisms, which are independently customizable. Once they get customized (by replacing the raw memory management functions), there's absolutely no way to even begin to predict what might happen if they get mixed.
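For instance, the two mechanisms can legally be given completely unrelated replacements (a contrived sketch of my own), after which the effect of mixing them is anyone's guess:

```cpp
#include <cstdlib>
#include <cstddef>
#include <new>

// Scalar forms: plain malloc/free.
void* operator new(std::size_t n) {
    void* p = std::malloc(n);
    if (!p) throw std::bad_alloc();
    return p;
}
void operator delete(void* p) noexcept { std::free(p); }

// Array forms: an entirely separate bump allocator over a static arena
// that never returns memory to the system.
alignas(std::max_align_t) static char arena[1 << 16];
static std::size_t used = 0;

void* operator new[](std::size_t n) {
    if (used + n > sizeof arena) throw std::bad_alloc();
    void* p = arena + used;
    used += n;
    return p;
}
void operator delete[](void*) noexcept { /* arena memory is never reclaimed */ }

// Now `delete p` on a new[]'ed pointer hands an arena address to free(),
// and `delete[] p` on a new'ed pointer silently abandons a malloc'ed
// block. Neither outcome can be reasoned about in general.
```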