I am comparing the performance of the STL (g++) priority_queue and found that push and pop are not as fast as I would expect. See the following code:
#include <cstdlib>  // srand, rand
#include <queue>
#include <set>

using namespace std;

typedef multiset<int> IntSet;

void testMap()
{
    srand( 0 );
    IntSet iSet;
    for ( size_t i = 0; i < 1000; ++i )
    {
        iSet.insert( rand() );
    }
    for ( size_t i = 0; i < 100000; ++i )
    {
        int v = *( iSet.begin() );
        iSet.erase( iSet.begin() );
        v = rand();
        iSet.insert( v );
    }
}
typedef priority_queue<int> IntQueue;

void testPriorityQueue()
{
    srand( 0 );
    IntQueue q;
    for ( size_t i = 0; i < 1000; ++i )
    {
        q.push( rand() );
    }
    for ( size_t i = 0; i < 100000; ++i )
    {
        int v = q.top();
        q.pop();
        v = rand();
        q.push( v );
    }
}
int main( int, char** )
{
    testMap();
    testPriorityQueue();
}
I compiled this with -O3 and then ran valgrind --tool=callgrind. In KCachegrind, testMap takes 54% of total CPU and testPriorityQueue takes 44%. (Without -O3, testMap is a lot faster than testPriorityQueue.) The function that seems to take most of the time in testPriorityQueue is
void std::__adjust_heap<__gnu_cxx::__normal_iterator<int*, std::vector<int, std::allocator<int> > >, long, int, std::less<int> >
and it seems to be called from the pop() call.
What does this function do exactly? Is there a way to avoid it by using a different container or allocator?
The priority queue is implemented as a binary heap: this has to be "rebalanced" every time you remove the head element. In the linked description, delete-min is an O(log n) operation, because the min (or head) element is the root of the flattened binary tree: removing it leaves a hole at the root, which is refilled by sifting an element down along a single root-to-leaf path.
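That sift-down is essentially what __adjust_heap spends its time on. A minimal sketch of the idea (simplified names, not the actual libstdc++ source; written for a max-heap, which is what priority_queue<int> with the default std::less gives you):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Sift the element at index `hole` down until the max-heap property holds
// again. Each iteration descends one level, so this is O(log n) overall --
// the work __adjust_heap performs on every pop().
void sift_down(std::vector<int>& heap, std::size_t hole)
{
    const std::size_t n = heap.size();
    while (true)
    {
        std::size_t child = 2 * hole + 1;            // left child
        if (child >= n) break;                       // reached a leaf
        if (child + 1 < n && heap[child] < heap[child + 1])
            ++child;                                 // pick the larger child
        if (heap[hole] >= heap[child]) break;        // heap property restored
        std::swap(heap[hole], heap[child]);
        hole = child;
    }
}

// Pop the maximum from a vector-backed max-heap: move the last element into
// the root's hole, shrink, then sift it down.
int heap_pop(std::vector<int>& heap)
{
    int top = heap.front();
    heap.front() = heap.back();
    heap.pop_back();
    if (!heap.empty()) sift_down(heap, 0);
    return top;
}
```

Note that this cost is inherent to the heap structure, not to any particular container or allocator, so switching either will not make it go away.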
The set is usually implemented as a red-black tree, and the min element will be the leftmost node (so either a leaf, or a node with at most a right child). It therefore has at most one child to be moved, and rebalancing can be amortized over multiple pop calls, depending on the allowable degree of imbalance.
Note that if the heap has any advantage, it's likely to be in locality-of-reference (since it is contiguous rather than node-based). This is exactly the sort of advantage that may be harder for callgrind to measure accurately, so I'd suggest running some elapsed-real-time benchmark as well before accepting this result.
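One way to do such a wall-clock measurement (a sketch using std::chrono::steady_clock, which is monotonic and therefore suitable for timing intervals):

```cpp
#include <chrono>

// Run a callable once and return the elapsed real (wall-clock) time in
// milliseconds. Unlike callgrind's instruction counts, this includes cache
// and memory-latency effects.
template <typename F>
double time_ms(F f)
{
    auto start = std::chrono::steady_clock::now();
    f();
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(stop - start).count();
}
```

Wrapping the two test functions, e.g. time_ms(testMap) and time_ms(testPriorityQueue), and printing the results gives elapsed times that reflect locality-of-reference, which the callgrind percentages above do not.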