In the following code, which one is more efficient: calling resize or erase?
vector<int> a(5000);
//....
vector<int>::iterator it = remove(a.begin(), a.end(), 8);
a.resize( std::distance(a.begin(),it));
//or
a.erase(it,a.end());
I think it depends on the number of duplicate elements, right?
The number of duplicates being equal, they'd have equivalent complexity. When shrinking the vector, resize is defined in terms of erase. n3337, 23.3.6.3 says:

void resize(size_type sz);

9 Effects: If sz <= size(), equivalent to erase(begin() + sz, end()); [...]
What does the profiler say? This can clearly vary from one implementation to another (although only by a constant factor—the complexity is required to be the same).
For that matter: has the profiler shown you are losing too much time here? The idiomatic way of writing this is:
a.erase( std::remove( a.begin(), a.end(), 8 ), a.end() );
Unless the profiler clearly says that this is a bottleneck, you should write it in the idiomatic way, so that C++ programmers recognize immediately what is happening, and don't waste time recognizing that you're doing the same thing, and wondering why you didn't do it in the idiomatic way.
"I think it depends on number of duplicate elements right ?"
Nope. There's no destructor for int
so 1 duplicate or 1000 makes no difference. All either function needs to do is set an internal record of the new end of the in-use elements. Consequently, the performance of remove()
is the costly thing here, not the resize
/erase
. (And even if there was a destructor, they'd loop over the same number of elements calling it, taking almost exactly the same time).
You can almost always trust any seasoned Standard Library implementation not to do something stupid and take far longer than necessary, so given the insight that the behaviours are equivalent - per jrok's answer - there's no reason to investigate further unless your profiler's telling you to.
For example, a typical implementation supports a loop condition like iter != v.end() where iterators are implemented as plain pointers, without slower begin+size arithmetic to compute end(), nor equally ugly special casing so that incrementing an end-1 iterator produces some sentinel state.