I understand why this causes a segfault:
#include <algorithm>
#include <vector>
using namespace std;

int main()
{
    vector<int> v;
    int iArr[5] = {1, 2, 3, 4, 5};
    int *p = iArr;

    copy(p, p+5, v.begin());
    return 0;
}
But why does this not cause a segfault?
#include <algorithm>
#include <vector>
using namespace std;

int main()
{
    vector<int> v;
    int iArr[5] = {1, 2, 3, 4, 5};
    int *p = iArr;

    v.reserve(1);
    copy(p, p+5, v.begin());
    return 0;
}
The conclusion is that arrays of integers are faster than vectors of integers (about five times faster in my example). However, arrays and vectors are around the same speed for more complex / non-aligned data.
In a vector, each element requires only the space for itself. In a list, each element additionally requires space for the node that holds it, including pointers to the next and previous elements in the list.
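To make that size difference concrete, here is a rough sketch of what a doubly linked list node looks like internally. The real standard-library node layout is implementation-defined; the struct below is only an illustration of where the extra space goes.

#include <iostream>
using namespace std;

// Rough sketch of a doubly linked list node (not the actual std::list layout).
struct ListNode
{
    ListNode *prev; // pointer to the previous node
    ListNode *next; // pointer to the next node
    int value;      // the element itself
};

int main()
{
    // On a typical 64-bit platform this prints 24 versus 4: two 8-byte
    // pointers plus the int (with padding), compared to just the int
    // stored contiguously in a vector.
    cout << "list node: " << sizeof(ListNode) << " bytes, "
         << "vector element: " << sizeof(int) << " bytes\n";
    return 0;
}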
The unique() function in C++ removes all consecutive duplicate elements from an array or vector. It does not resize the vector after removing the duplicates, so we need to shrink the vector ourselves once the duplicates are removed. The function is declared in the <algorithm> header.
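A minimal sketch of the usual erase-unique idiom, which removes the leftover tail that unique() leaves behind:

#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

int main()
{
    vector<int> v = {1, 1, 2, 2, 2, 3, 1};

    // unique() shifts the non-duplicate elements to the front and returns an
    // iterator to the new logical end; the tail holds unspecified values, so
    // erase it explicitly to actually shrink the vector.
    v.erase(unique(v.begin(), v.end()), v.end());

    for (int x : v)
        cout << x << ' ';   // prints: 1 2 3 1
    cout << '\n';
    return 0;
}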
Both are wrong, as you are copying into an empty vector, and copy requires that the destination already has room for the elements; it does not resize the container by itself. What you probably need here is a back_insert_iterator, obtained via back_inserter:
copy(p, p+5, back_inserter(v));
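For reference, here is a complete, fixed version of the first snippet using back_inserter (a minimal sketch; back_inserter is declared in <iterator>):

#include <algorithm>
#include <iterator>
#include <vector>
using namespace std;

int main()
{
    vector<int> v;
    int iArr[5] = {1, 2, 3, 4, 5};
    int *p = iArr;

    // back_inserter yields an output iterator that calls v.push_back() for
    // each element written through it, so the vector grows as the copy
    // proceeds and no pre-sizing is needed.
    copy(p, p + 5, back_inserter(v));
    return 0;
}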
As for why the second snippet appears to work, it is still undefined behavior: reserve() allocates a buffer with room for at least one element, but no elements are constructed and the vector's size remains zero. So either the buffer happens to be large enough that writing past the first slot technically goes unnoticed, or it is not and you simply fail to observe a problem this time. The bottom line is: don't do it. Only access elements that are legally stored in the vector instance.
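If you really do want to copy into existing elements rather than append, size the vector first with resize() (not reserve()) so the destination range actually exists. A minimal sketch:

#include <algorithm>
#include <vector>
using namespace std;

int main()
{
    vector<int> v;
    int iArr[5] = {1, 2, 3, 4, 5};
    int *p = iArr;

    // resize() creates five value-initialized elements, so the range
    // [v.begin(), v.begin() + 5) is a valid copy destination.
    v.resize(5);
    copy(p, p + 5, v.begin());
    return 0;
}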