Generally, I'm interested in knowing whether the Standard Template Library incurs performance/speed overhead in code for numerical/scientific computing.
For example, is declaring an array as

double matrix2d[10][10];

going to give me more performance than

std::vector<std::vector<double>> matrix2d(10, std::vector<double>(10, 0.0));

?
I would also appreciate some general ideas as to whether C has better performance than C++ for scientific computing. I have written my code in a very object-oriented style using the STL, and I use C++11 a lot. I am beginning to wonder whether I should look into pure C, if it is going to run faster.
Any thoughts on this are welcome.
A std::vector can never be faster than an array, as it has (a pointer to the first element of) an array as one of its data members. But the difference in run-time speed is slim, and absent in any non-trivial program. One reason this myth persists is examples that compare raw arrays with mis-used std::vectors; such a mis-used vector can easily be about 3-4 times slower than the raw array.
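A minimal sketch of the kind of misuse behind those comparisons (hypothetical code, not the original benchmark; the element count n is made up): the vector is constructed with n value-initialized elements and then n more are appended, so it does roughly twice the work of the raw array.

    #include <cstddef>
    #include <vector>

    int main() {
        const std::size_t n = 1000000;

        // Raw array baseline: n writes, nothing else.
        double* a = new double[n];
        for (std::size_t i = 0; i < n; ++i)
            a[i] = static_cast<double>(i);
        delete[] a;

        // Mis-used vector: the constructor already value-initializes
        // n elements, so push_back appends n MORE, reallocating and
        // copying along the way -- 2n elements in total.
        std::vector<double> v(n);
        for (std::size_t i = 0; i < n; ++i)
            v.push_back(static_cast<double>(i));

        // Fair use: size the vector once up front, then write --
        // essentially as fast as the raw array.
        std::vector<double> w(n);
        for (std::size_t i = 0; i < n; ++i)
            w[i] = static_cast<double>(i);
    }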
std::vector is faster for insertion and deletion of elements at the end of the container. A node-based container such as std::set is asymptotically faster for insertion and deletion in the middle, since no elements have to be shifted; for small containers, though, the vector's contiguous layout often wins in practice.
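To make the difference concrete, a short sketch (illustrative values only):

    #include <set>
    #include <vector>

    int main() {
        std::vector<int> v = {1, 2, 4, 5};
        // Inserting into the middle of a vector shifts every later
        // element one slot to the right: O(n).
        v.insert(v.begin() + 2, 3);

        std::set<int> s = {1, 2, 4, 5};
        // A set stores its elements in a balanced tree, so inserting
        // "into the middle" is an O(log n) tree insertion; nothing
        // is shifted.
        s.insert(3);
    }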
So there is no surprise regarding std::vector: it uses 4 bytes to store each 4-byte element, with no per-element overhead. The overhead of std::vector is per container, not per element: one heap allocation for the storage, plus (typically) three pointers in the vector object itself, marking the beginning, the end, and the end of the allocated capacity.
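You can check this on your own implementation; the exact figures are implementation-defined, so treat 24 bytes (three 64-bit pointers) as an assumption to verify:

    #include <iostream>
    #include <vector>

    int main() {
        // Per-container overhead: commonly three pointers, i.e.
        // 24 bytes on a 64-bit platform (implementation-defined).
        std::cout << sizeof(std::vector<float>) << '\n';

        // Per-element storage is exact: the elements occupy
        // size() * sizeof(float) bytes on the heap, nothing more.
        std::vector<float> v(1000);
        std::cout << v.size() * sizeof(float) << '\n';
    }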
A stack-allocated array might be faster in some cases (for small amounts of data). For this you can use std::array<T, Length>.
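For the 10x10 matrix from the question, that would look something like this sketch:

    #include <array>

    int main() {
        // Lives entirely on the stack with the same memory layout as
        // double matrix2d[10][10], but with a container interface
        // (size(), iterators, bounds-checked at()).
        std::array<std::array<double, 10>, 10> matrix2d{};
        matrix2d[3][7] = 1.0;
    }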
If you need a 2-dimensional grid I would allocate the data in a single vector: std::vector<T>(width * height). Then you can write a number of helper functions to obtain elements by x and y coordinates. (Or you can write a wrapper class, as sketched below.)
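A minimal wrapper along those lines (a sketch assuming row-major storage; the Grid name and its members are illustrative, not part of any library):

    #include <cstddef>
    #include <vector>

    template <typename T>
    class Grid {
    public:
        Grid(std::size_t width, std::size_t height)
            : width_(width), data_(width * height) {}

        // Row-major indexing: element (x, y) lives at y * width + x.
        T& operator()(std::size_t x, std::size_t y) {
            return data_[y * width_ + x];
        }
        const T& operator()(std::size_t x, std::size_t y) const {
            return data_[y * width_ + x];
        }

    private:
        std::size_t width_;
        std::vector<T> data_;
    };

    int main() {
        Grid<double> m(10, 10); // one contiguous allocation
        m(3, 7) = 1.0;
    }

Because all elements sit in one contiguous block, this avoids the per-row allocations and pointer-chasing of std::vector<std::vector<double>>.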