I have the following working code (g++ 8.2, C++17):
// Level-order (BFS) traversal; a nullptr in the queue marks the end of
// a level, so sum ends up holding the sum of the deepest level's values.
queue<TreeNode*> q{};
q.push(root);
q.push(nullptr);                           // sentinel: end of the first level
int sum = 0;
while (!q.empty()) {
    TreeNode *n = q.front();
    q.pop();
    if (n != nullptr) {
        sum += n->val;
        if (n->left != nullptr) q.push(n->left);
        if (n->right != nullptr) q.push(n->right);
    } else {
        if (q.empty()) break;              // no more levels to process
        q.push(nullptr);                   // sentinel for the next level
        sum = 0;                           // restart the sum for that level
    }
}
return sum;
Then I replaced queue<TreeNode*> with deque<TreeNode*>, and the speed improved by at least 20%. Why is deque<TreeNode*> so much faster than queue<TreeNode*>?
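(For reference, the deque version keeps the exact same loop and only maps the adaptor calls onto deque's own members. A self-contained sketch, assuming a minimal TreeNode with val/left/right and a made-up function name:)

#include <deque>

struct TreeNode {                      // assumed node shape, for illustration
    int val;
    TreeNode *left;
    TreeNode *right;
};

int deepestLevelSum(TreeNode *root) {  // hypothetical name wrapping the snippet above
    std::deque<TreeNode*> q;
    q.push_back(root);                 // queue::push -> deque::push_back
    q.push_back(nullptr);
    int sum = 0;
    while (!q.empty()) {
        TreeNode *n = q.front();       // front() exists on deque as well
        q.pop_front();                 // queue::pop  -> deque::pop_front
        if (n != nullptr) {
            sum += n->val;
            if (n->left != nullptr) q.push_back(n->left);
            if (n->right != nullptr) q.push_back(n->right);
        } else {
            if (q.empty()) break;
            q.push_back(nullptr);
            sum = 0;
        }
    }
    return sum;
}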
From cppreference.com, std::queue is declared as:

template<
    class T,
    class Container = std::deque<T>
> class queue;

Container - The type of the underlying container to use to store the elements.
So std::queue, by default, uses std::deque as its internal container. Because of that, it can at best be as fast as std::deque (or the underlying container in general), and, being a wrapper, it can be slower, depending on how well the compiler optimizes the forwarding calls away. So if you correctly measured a 20% slowdown for std::queue, then what you measured is how good the compiler is at optimizing the wrapper code away at the given optimization level.
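To make the wrapper relationship concrete, here is a minimal sketch of what the std::queue adaptor boils down to (the class name is made up and the member set is trimmed, but push/pop really are specified as one-line forwards to the underlying container):

#include <deque>

template <class T, class Container = std::deque<T>>
class queue_sketch {                       // illustrative stand-in for std::queue
protected:
    Container c;                           // the underlying container
public:
    bool empty() const        { return c.empty(); }
    T&   front()              { return c.front(); }
    void push(const T& value) { c.push_back(value); }  // forwards to the container
    void pop()                { c.pop_front(); }       // forwards to the container
};

With optimizations on, the compiler should inline these forwards, so the adaptor and the raw deque should generate essentially identical code.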
As std::queue exists to guarantee FIFO usage by exposing only those functions, I doubt (but I can't test it right now) that the slowdown is that drastic with optimizations turned on. If it is, I would consider that a bug/defect in the capabilities of the compiler or library implementation.
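If you want to check it yourself, a minimal sketch of such a measurement could look like the following (the element count and the workload are made up for illustration; build with optimizations, e.g. g++ -O2, and ideally repeat and interleave the runs before drawing conclusions):

#include <chrono>
#include <deque>
#include <iostream>
#include <queue>

int main() {
    constexpr int N = 10'000'000;
    int dummy = 0;

    // Push and pop N pointers through the std::queue adaptor.
    auto t0 = std::chrono::steady_clock::now();
    std::queue<int*> q;
    for (int i = 0; i < N; ++i) q.push(&dummy);
    while (!q.empty()) q.pop();
    auto t1 = std::chrono::steady_clock::now();

    // Same workload on the raw std::deque.
    std::deque<int*> d;
    for (int i = 0; i < N; ++i) d.push_back(&dummy);
    while (!d.empty()) d.pop_front();
    auto t2 = std::chrono::steady_clock::now();

    using ms = std::chrono::duration<double, std::milli>;
    std::cout << "queue: " << ms(t1 - t0).count() << " ms\n"
              << "deque: " << ms(t2 - t1).count() << " ms\n";
}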