Given a container `v` with `v.size() == 3` and `v.capacity() == 5`, my understanding is that a call to `v.shrink_to_fit()` can be fulfilled and, if it is, it causes `v.capacity()` to become 3.

However, this comes at the cost of a reallocation. Why? Isn't it possible to free the unused memory without reallocating a chunk of memory for what remains?

This question probably has its roots in how more primitive facilities like `new`/`delete` and `malloc`/`free` work.
The underlying memory management system defines what is possible, and typically it does not allow returning part of an allocated block: if you got `n` bytes, you either return all `n` bytes or nothing.

Returning the last `m` bytes (with `m < n`), or worse, returning `m` bytes in the middle of the `n` bytes, would of course be possible to offer, but consider the extra complexity needed to handle this correctly.

Of course, there could be some systems out there that do offer it, but your C++ compiler and the language definition don't necessarily know which one runs beneath them in the OS, so they have to accept the possibility that a reallocation is needed. Note that they don't guarantee that one will be needed; they just expect it.
The container does not allocate/deallocate the memory itself; its allocator does. For the (vector's) allocator to be able to deallocate the memory, it must be given exactly the same pointer as the one it returned when it allocated the memory for the vector's data. And that is the beginning of the vector's data, not the beginning of the no-longer-used part.
Basically, we are talking about this allocator's allocation/deallocation methods:
pointer allocate( size_type n, const void * hint = 0 );
void deallocate( T* p, std::size_t n );
The argument `T* p` of `deallocate` will be the same as the pointer returned from `allocate` (== the beginning of the vector's data). This is what the vector's implementation will pass to `deallocate`.
It is certainly imaginable to have a custom vector implementation that could pass any pointer in the range `[data, data + size]` to the allocator's `deallocate` method, and one could construct an allocator able to deal with that. But then all other allocators would need to conform to this API as well, including the standard allocator.
Then something like this would need to be able to "work":
int* p = new int[100];
delete [] (p + 50); // imagine making this work
This would add extra complexity and performance costs, among other issues.
In fact, the standard simply allows the allocator implementation to reallocate when shrinking if it wants to. I do not know whether this is common in C++ library implementations, but I can remember some database allocators that used different pools of disk segments, some for smaller blocks and some for larger ones. The rationale was that there were many small blocks and few large ones, so that segregation helped reduce fragmentation in the larger-block pools. If this pattern occurs in a program, it can make sense to define a custom allocator that implements that rule.
The fact that the standard says that `shrink_to_fit` may reallocate simply allows the use of such a custom allocator.