I have code where I routinely fill a vector with between 0 and 5000 elements; the count never exceeds 5000. Instead of reallocating the vector every time, I would like to allocate just once:
std::vector<MyStruct> myvector; myvector.reserve(5000);
However, to fill the vector again I first have to empty it without altering its capacity, so I usually call myvector.clear();
This is an O(n) operation. Is there something simple I can do to improve the performance here, or is this about as good as it gets?
If your struct has a non-trivial destructor, it must be called for every element of the vector regardless of how the vector is emptied. If your struct's destructor is trivial, the compiler or the standard library implementation is allowed to optimize away the destruction entirely and give you an O(1) operation.
The cost of clear() depends greatly on what the stored objects are, and in particular on whether they have a trivial destructor. If the type's destructor is non-trivial, then the call must destroy every stored object, and it is in fact an O(n) operation, but you cannot really do any better. If the stored elements have trivial destructors, the implementation can optimize the cost away, and clear() becomes a cheap O(1) operation: it just resets the size (the end pointer).

Remember that to understand asymptotic complexity you need to know what quantity it measures. In the case of clear() it is the number of destructors called; if each of those calls costs nothing, the operation is effectively a no-op.