I googled for quite a while trying to find a comparison that shows the complexity of insert/push, erase/pop, etc. for all STL containers, but I did not find one, nor is it in any of my STL books. Any hints?
I know some rules of thumb, of course. But where is this actually defined?
Try with
http://www.sgi.com/tech/stl/complexity.html
http://www.sgi.com/tech/stl/table_of_contents.html
From complexity.html:
Fundamentally, it is difficult to define the notion of asymptotic algorithm complexity precisely for real computer hardware instead of an abstract machine model. Thus we settle for the following guidelines:
- For an algorithm A to have running time O(f(n)), there must be a corresponding algorithm A' that is correct on machines with arbitrarily long pointer and size_t types, such that A and A' perform essentially the same sequence of operations on the actual hardware. (In simple cases A and A' will be the same. In other cases A may have been simplified with the knowledge that addresses are bounded.) For inputs of sufficiently large size n, A' must take at most time Cf(n), where C is a constant, independent of both n and the address size. (Pointer, size_t, and ptrdiff_t operations are presumed to take constant time independent of their size.)
- All container or iterator complexity specifications refer to amortized complexity. An individual operation may take longer than specified. But any sufficiently long sequence of operations on the same container or iterator will take at most as long as the corresponding sum of the specified operation costs.
- Algorithms specify either worst-case or average case performance, and identify which. Unless otherwise stated, averages assume that container elements are chosen from a finite type with more possible values than the size of the container, and that container elements are independently uniformly distributed.
- A complexity specification for an operation f assumes that operations invoked by f require at most the specified runtime. But algorithms generally remain appropriate if the invoked operations are no more than a logarithmic factor slower than specified in the expected case.
If operations are more expensive than assumed by a function F in the current STL, then F will slow down at most in proportion to the added cost. Any future operations that fail to satisfy this property will make that explicit.
To make this precise, assume F is specified to use time f(m) for input of size m. F uses operations Gk, with specified running times gk(n) on input size n. If F is used in a context in which each Gk is slower than expected by at most a factor h(n), then F slows down by at most a factor h(m). This holds because none of the current algorithms ever apply the operations Gk to inputs significantly larger than m.
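To make the amortized-complexity guideline above concrete, here is a minimal sketch of my own (not from the SGI page) that watches std::vector::push_back: a single call may trigger a reallocation that copies every element, yet any long sequence of calls averages out to constant time per element. The exact growth pattern printed is an implementation detail, not something the standard prescribes.

    // Each push_back is amortized O(1); individual calls that hit a
    // reallocation copy all existing elements, which the capacity jumps show.
    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> v;
        std::size_t last_capacity = v.capacity();

        for (int i = 0; i < 1000; ++i) {
            v.push_back(i);                      // amortized constant time
            if (v.capacity() != last_capacity) { // a reallocation happened here
                std::cout << "size " << v.size()
                          << " -> capacity " << v.capacity() << '\n';
                last_capacity = v.capacity();
            }
        }
    }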
The ISO C++ Standard defines the complexity of algorithms and container access methods where required. Anywhere else (if not explicitly stated) all bets are off, and a library implementer is free to choose their own implementation.
For example, you can assume that maps and sets are implemented using red-black trees, but there is no requirement to do so. Many algorithms are overloaded or specialized for particular iterator categories (like Random Access Iterator) and sometimes may even be optimized for special hardware (e.g., using XMM registers to execute some operations in parallel).
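As a small illustration of such iterator-category dispatch (my own sketch, not a quote from the standard): std::advance is constant time for random access iterators and linear for anything weaker.

    // std::advance dispatches on the iterator category: for std::vector
    // (random access) it is pointer arithmetic, O(1); for std::list
    // (bidirectional) it follows links one by one, O(n).
    #include <iterator>
    #include <list>
    #include <vector>

    int main() {
        std::vector<int> vec(1000, 0);
        std::list<int>   lst(1000, 0);

        auto vit = vec.begin();
        std::advance(vit, 500);   // O(1)

        auto lit = lst.begin();
        std::advance(lit, 500);   // O(n): walks 500 nodes
    }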
Sometimes you have to infer how much an operation may cost from other requirements: for example, because splice in std::list must run in O(1), size() was allowed to take O(n) (that was the C++03 trade-off; C++11 instead requires size() to be O(1)).
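A rough sketch of that splice trade-off (my own example, assuming a typical node-based implementation): splicing only relinks nodes, so no elements are copied or allocated.

    // Moving all of b's nodes into a relinks pointers only; nothing is
    // copied. This is why the whole-list splice overload can be O(1).
    #include <iostream>
    #include <list>

    int main() {
        std::list<int> a{1, 2, 3};
        std::list<int> b{4, 5, 6};

        a.splice(a.end(), b);     // O(1): b's nodes now belong to a

        for (int x : a) std::cout << x << ' ';          // 1 2 3 4 5 6
        std::cout << "\nb.size() == " << b.size() << '\n'; // 0
    }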
I have Nicolai Josuttis's book The C++ Standard Library: A Tutorial and Reference, and all these aspects are well covered there.
Best Regards,
Ovanes