I'd like to use std::forward_list
Because:
Forward list is a container which supports fast insertion and removal of elements from anywhere in the container.
But there's no *std::forward_list::push_back* implementation.
Is there a high-performance way to add support for it, or is there a reason it's not provided?
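(For context: an O(1) append can be emulated by keeping your own iterator to the last element and calling *insert_after* through it. Below is a minimal sketch of that idea; the wrapper name is hypothetical, not a standard facility.)

```cpp
#include <forward_list>
#include <iostream>

// Minimal sketch of a hypothetical wrapper: cache an iterator to the last
// element so each append is O(1). Copying/moving the wrapper is
// intentionally not handled here.
template <typename T>
class AppendableForwardList {
    std::forward_list<T> list_;
    typename std::forward_list<T>::iterator tail_ = list_.before_begin();
public:
    void push_back(const T& value) {
        tail_ = list_.insert_after(tail_, value);  // O(1) append at the tail
    }
    const std::forward_list<T>& get() const { return list_; }
};

int main() {
    AppendableForwardList<int> fl;
    for (int i = 1; i <= 3; ++i) fl.push_back(i);
    for (int v : fl.get()) std::cout << v << ' ';  // prints: 1 2 3
}
```

The trade-off is that every mutation has to go through the wrapper so the cached tail iterator stays correct.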
I recommend against *std::forward_list*, just like I recommend against *std::list*, in almost all situations. Personally, I've never found a situation in my code where a linked list was the best data structure.
In C++, your default go-to data collection should be *std::vector*. It gives you efficient *push_back*, if that is what you really need. Technically, it does not give you efficient deletion and insertion in the middle if you only look at the abstract big-O complexity of that one operation. In the real world, however, *std::vector* still wins even for inserting and deleting in the middle.
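For concreteness, middle deletion and insertion on a *std::vector* look like this (a minimal sketch; the values are arbitrary):

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v{10, 20, 30, 40, 50};

    // Erase from the middle: later elements shift down one slot,
    // a linear pass over contiguous memory.
    v.erase(std::find(v.begin(), v.end(), 30));  // 30 is known to be present

    // Insert into the middle at a chosen position.
    v.insert(v.begin() + 2, 35);

    for (int x : v) std::cout << x << ' ';  // prints: 10 20 35 40 50
}
```

The shifting is linear work, but it is exactly the sequential access pattern that caches and prefetchers handle well.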
As an example, Bjarne Stroustrup created a test of a 100,000-element *std::list* vs. *std::vector*. He would search each for an element and delete it. Then he would find an insertion point and insert into the middle. He could have used a binary search on the *std::vector*, but chose not to, to make the comparison 'more fair'.
The results show a strong win for *std::vector*, even in this situation where *std::list* is supposed to be strong. Simply traversing the *std::list* takes so much longer because of how far apart in memory all of the objects are. *std::list* is not cache-friendly, which is possibly the most important thing for modern processors.
- The complete talk by Bjarne Stroustrup
- Thorough explanation of the effects, with benchmarks at multiple sizes
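A sketch of that style of experiment is below. This is my own reconstruction of its shape, not Stroustrup's actual code: both containers get the same linear search-and-erase workload.

```cpp
#include <chrono>
#include <cstdlib>
#include <iostream>
#include <list>
#include <vector>

// Linear-search for each target value and erase the first match.
// The container is taken by value so each run starts from a fresh copy.
template <typename Container>
long long search_and_erase(Container c, const std::vector<int>& targets) {
    auto start = std::chrono::steady_clock::now();
    for (int t : targets) {
        for (auto it = c.begin(); it != c.end(); ++it) {
            if (*it == t) { c.erase(it); break; }
        }
    }
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count();
}

int main() {
    const int n = 100000;
    std::vector<int> data(n);
    for (int i = 0; i < n; ++i) data[i] = i;

    std::vector<int> targets;
    for (int i = 0; i < 1000; ++i) targets.push_back(std::rand() % n);

    std::vector<int> vec(data.begin(), data.end());
    std::list<int>   lst(data.begin(), data.end());

    std::cout << "vector: " << search_and_erase(vec, targets) << " ms\n"
              << "list:   " << search_and_erase(lst, targets) << " ms\n";
}
```

If the talk's results generalize to your machine, the *std::vector* run should finish well ahead despite its 'expensive' middle erases.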
Note that the second link above gives some situations where you may want to use a *std::list*, such as when the size of the elements is large. However, I've been in a situation where I had many elements in a particular order and needed to delete some.

These elements were larger than any built-in type, but not huge (perhaps 20-30 bytes each on a 32-bit computer). The number of elements was large enough that my entire data structure was a few hundred MiB. The data collection was a set of values that could theoretically be valid based on currently known information. The algorithm iterated over all elements and removed those that could no longer be valid based on new information, with each pass probably deleting somewhere around 80% of the remaining elements.
My first implementation was a straightforward *std::vector* approach where I deleted invalid elements as I traversed. This worked for small test data sets, but when I tried to do the real thing, it was too slow to be useful. I switched to a *std::list* as the container, kept the same algorithm, and saw significant performance improvements; however, it was still too slow to be useful.

The winning change was to switch back to a *std::vector*, but instead of deleting the bad elements in place, I created a new *std::vector*, put every element I found to be good into it, and at the end of the function simply discarded the old *std::vector* and used the new one. This gave me about as much of a speedup over the *std::list* as the *std::list* had given me over my original *std::vector* implementation, and it was just fast enough to be useful.
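That "filter into a fresh vector" pattern is essentially *std::copy_if*; a minimal sketch follows. The element type and validity test are hypothetical stand-ins for the ones described above.

```cpp
#include <algorithm>
#include <iostream>
#include <iterator>
#include <vector>

// Hypothetical record type and validity flag standing in for the real ones.
struct Element {
    int  id;
    bool still_valid;
};

// Keep only the good elements by building a fresh vector, then swap it in.
// Each survivor is copied exactly once; nothing is shifted in place.
void prune(std::vector<Element>& elements) {
    std::vector<Element> survivors;
    survivors.reserve(elements.size());  // avoid reallocation during the pass
    std::copy_if(elements.begin(), elements.end(),
                 std::back_inserter(survivors),
                 [](const Element& e) { return e.still_valid; });
    elements = std::move(survivors);  // discard the old vector
}

int main() {
    std::vector<Element> v{{1, true}, {2, false}, {3, true}, {4, false}};
    prune(v);
    for (const auto& e : v) std::cout << e.id << ' ';  // prints: 1 3
}
```

The in-place erase-remove idiom (*std::remove_if* plus *erase*, or *std::erase_if* since C++20) gets similar single-pass behavior without the second allocation, and would likely have worked here as well.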