I'm trying to implement my own std::vector for the sake of practice. Current source code: http://pastebin.com/bE4kjzcb
Here is an outline of my class:

Array() - allocate some memory using malloc()
push_back(const T &t) - add one element, call realloc() if necessary
~Array() - call free() to release the memory

The major issue with this model is that free() recycles the memory, but it doesn't call T's destructor (when T is a class rather than a built-in type). This could cause a severe resource leak when the elements inside the vector are objects. My solution to this problem is to call ~T() EXPLICITLY before I free() the memory.
The reason I'm using malloc() is that I'm trying to utilize realloc(). If I used new and delete, memory usage would peak during reallocation (the moment when both the new buffer and the old buffer are present).

Q: Is this a bad design? How does std::vector resolve this problem? Are there other flaws in my vector class?

PS: Let's not talk about the multithreaded performance of malloc() for now.
Calling ~T() is exactly how std::vector handles the problem.
You do however have a couple of problems:

Firstly, push_back needs to use placement new to copy-construct the value into the vector. You can't just use assignment.

Secondly, you can't call realloc - if the objects have internal pointers, they will end up pointing outside themselves. You must call malloc again, then use placement new to copy-construct the values into the new buffer, then explicitly destroy all the old values, then call free to release the old buffer. (A sketch of both fixes follows below.)
(Actually, std::vector doesn't call ~T() itself. Instead it calls the allocator, which is responsible for ... allocating and deallocating memory. Internally, though, that is how general-purpose allocators do it.)
push_back(const T &t) add one element, call realloc() if necessary.
It is OK as long as T is trivially copyable. For example, try to push back doubly linked lists and, after a reallocation, take one and iterate backwards - the app will probably crash. The solution is to overload the function twice: one overload for types which are trivially copyable, and one for objects that are not.
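A sketch of that dispatch using the classic tag-dispatch idiom (the free-standing grow helper and its signature are my invention for illustration; error handling is omitted):

#include <cstdlib>
#include <cstddef>
#include <new>
#include <type_traits>

// Trivially copyable T: let realloc move the bytes for us.
template <typename T>
void grow(T *&data, std::size_t /*size*/, std::size_t new_cap, std::true_type) {
    data = static_cast<T*>(std::realloc(data, new_cap * sizeof(T)));
}

// Non-trivially-copyable T: malloc, copy-construct, destroy, free.
template <typename T>
void grow(T *&data, std::size_t size, std::size_t new_cap, std::false_type) {
    T *fresh = static_cast<T*>(std::malloc(new_cap * sizeof(T)));
    for (std::size_t i = 0; i < size; ++i) {
        new (fresh + i) T(data[i]);
        data[i].~T();
    }
    std::free(data);
    data = fresh;
}

template <typename T>
void grow(T *&data, std::size_t size, std::size_t new_cap) {
    grow(data, size, new_cap, std::is_trivially_copyable<T>{});
}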
As opposed to others, I'm quite sorry that the standard containers do not use realloc for objects which are trivially copyable. At least on Windows, realloc first checks whether the current block can hold the new size and, if so, just updates the heap entry, which is a huge performance boost (no copying).
call ~T() EXPLICITLY before I free() the memory.
Yes, this is how the standard allocator does it. Assuming size is the object count, you iterate over each one and destruct it manually:
for (auto i = 0U; i < size; i++) {
    data[i].~T();
}
Interestingly, C++17 adds std::destroy and std::destroy_n, which do exactly that.
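For example, the loop above collapses to a one-liner (a sketch, reusing the same data and size):

// C++17, from <memory>
std::destroy_n(data, size);               // runs data[i].~T() for each of the size elements
// or equivalently: std::destroy(data, data + size);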
Bonus:

Using new[] and delete[] will not help here. Dynamic arrays usually reserve more space than they currently need in order to provide capacity, and that extra space is not filled with live objects, only raw memory. new[] would construct objects in the entire allocation, so capacity cannot be implemented this way: the array would have to move/copy all of its objects whenever someone pushes back a new element, so after 1000 push_back's there would be 1000 reallocations. What we want is only O(log n) reallocations for n push_back's (geometric capacity growth), which makes push_back amortized O(1).
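For example, with a doubling policy the number of reallocations needed to reach n elements is only about log2(n) (a small illustrative helper, not library code):

#include <cstddef>

std::size_t reallocations_for(std::size_t n) {
    std::size_t cap = 1, count = 0;
    while (cap < n) { cap *= 2; ++count; }   // each doubling is one reallocation
    return count;                            // ~log2(n), e.g. 10 for n = 1000
}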
Even the standard allocator calls operator new(size_t) or malloc, not new[].
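A sketch of that raw-memory approach - operator new(size_t) hands back uninitialized bytes, so capacity can exist without any T objects in it (illustrative helpers, not the standard allocator's actual code):

#include <cstddef>
#include <new>

template <typename T>
T* allocate_raw(std::size_t cap) {
    // Raw, uninitialized storage for cap objects; nothing is constructed yet.
    return static_cast<T*>(::operator new(cap * sizeof(T)));
}

template <typename T>
void deallocate_raw(T *p) {
    // The elements' destructors must already have been run explicitly.
    ::operator delete(p);
}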