The current draft standard explicitly states that placement new[] can have a space overhead:
This overhead may be applied in all array new-expressions, including those referencing the library function operator new[](std::size_t, void*) and other placement allocation functions. The amount of overhead may vary from one invocation of new to another.
So presumably they have something in mind: why does a compiler need this overhead? What is it? Can a compiler use this overhead for anything useful?
In my understanding, to destruct such an array, the only option is to call the destructors in a loop (am I right about this?), since there is no placement delete[] (by the way, shouldn't we have a placement delete[] to properly destroy the array, not just its elements?). So the compiler doesn't have to know the array length.
I thought that since this overhead cannot be used for anything useful, compilers don't apply it (so this is not an issue in practice). I checked compilers with this simple code:
#include <stdio.h>
#include <new>

struct Foo {
    ~Foo() { }
};

int main() {
    char buffer1[1024];
    char buffer2[1024];
    float *fl = new(buffer1) float[3];
    Foo *foo = new(buffer2) Foo[3];
    printf("overhead for float[]: %d\n", (int)(reinterpret_cast<char*>(fl) - buffer1));
    printf("overhead for Foo[] : %d\n", (int)(reinterpret_cast<char*>(foo) - buffer2));
}
GCC and clang don't use any overhead at all, but MSVC uses 8 bytes in the Foo case. For what purpose could MSVC use this overhead?
Here's some background on why I'm asking this question.
There were previous questions about this subject:
As far as I can see, the moral of those questions is to avoid placement new[] and use placement new in a loop instead. But that solution doesn't create an array; it creates elements that merely sit next to each other, which is not an array, so using operator[] on them is undefined behavior. Those questions are more about how to avoid placement new[], while this question is more about the "why?".
Current draft standard explicitly states ...
To clarify, this rule has (probably) existed since the first version of the standard (the earliest version I have access to is C++03, which does contain the rule, and I found no defect report about needing to add it).
So presumably they have something in mind, why a compiler need this overhead
My suspicion is that the standard committee didn't have any particular use case in mind, but added the rule in order to keep the existing compiler(s?) with this behaviour compliant.
"For what purpose could MSVC use this overhead?" "why?"
These questions could confidently be answered only by the MS compiler team, but I can propose a few conjectures:
The space could be used by a debugger, which would allow it to show all of the elements of the array. It could be used by an address sanitiser to verify that the array isn't overflowed. That said, I believe both of these tools could store the data in an external structure.
Considering the overhead is only reserved in the case of non-trivial destructor, it might be that it is used to store the number of elements constructed so far, so that compiler can know which elements to destroy in the event of an exception in one of the constructors. Again, as far as I know, this could just as well be stored in a separate temporary object on the stack.
For what it's worth, the Itanium C++ ABI agrees that the overhead isn't needed:
No cookie is required if the new operator being used is ::operator new[](size_t, void*).
Where cookie refers to the array length overhead.
Dynamic array allocation is implementation-specific, but one of the common practices when implementing it is to store the array's size right before its beginning (i.e. just before the first element). This overlaps perfectly with:
representing array allocation overhead; the result of the new-expression will be offset by this amount from the value returned by operator new[].
"Placement delete" would not make much sense. What delete
does is call destructor and free memory. delete
calls destructor on all of the array elements and frees it. Calling destructor explicitly is in some sense "placement delete".