Why doesn't std::queue shrink its memory after popping elements?

Tags:

c++

queue

I wrote a small program that uses std::queue:

#include <cstdio>   // printf
#include <cstdlib>  // rand
#include <queue>

using namespace std;

int main()
{
    queue<double> the_queue;

    for(int i = 0; i < 100; i++)
    {
        for(int j = 0; j < 5000; j++)
        {
            double d = 1.0 * (rand() % 1000);
            the_queue.push(d);
        }
    }

    printf("Done pushing\n");

    while(!the_queue.empty())
    {
        the_queue.pop();
    }

    printf("Done popping\n");

    return 0;
}

I put two breakpoints at printf("Done pushing\n"); and printf("Done popping\n"); and checked the program's memory usage (as shown in Task Manager) when each breakpoint was hit. At "Done pushing" the memory usage is ~34 MB, but at "Done popping" it is still ~34 MB. That surprises me!

Why is this? Is there any way to overcome this?

asked Jul 05 '16 by duong_dajgja


People also ask

Does queue pop delete the object?

"pop() removes the object from the underlying container, hence calls its destructor (if any)".

Does pop() of a queue return a value in C++?

Since it is impossible for pop() to return a value in such a way as to be both efficient and correct, it is more sensible for it to return no value at all and to require clients to use front() to inspect the value at the front of the queue.
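
To illustrate, here is a minimal sketch (the names q and value are just illustrative) of the usual pattern: inspect the value with front(), then remove it with pop(), which returns nothing:

#include <cstdio>
#include <queue>

int main()
{
    std::queue<int> q;
    q.push(1);
    q.push(2);

    while (!q.empty())
    {
        int value = q.front(); // read the front element...
        q.pop();               // ...then remove it (pop() returns void)
        std::printf("%d\n", value);
    }
}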

What does queue pop do in C++?

The C++ function std::queue::pop() removes the front element of the queue and reduces the size of the queue by one. This member function effectively calls the pop_front member function of the underlying container.

What is the time complexity when calling push_back on a std::queue?

Both push() (which internally calls push_back() on the underlying container) and pop() (which internally calls pop_front() on the underlying container) provided by std::queue in the C++ STL have constant O(1) time complexity, since they only insert at the end of the queue or pop from the front of it.


3 Answers

Basically, std::queue is a container adapter - it is not a container of its own, but a thin wrapper around another container.

For example, let's take a look at the queue signature:

template <class T, class Container = deque<T> > class queue;

As you can see, T is the type of the elements stored in the queue, and Container is the underlying container.

And this is the answer to your question: different containers handle memory differently. The underlying deque may or may not shrink; it is up to the deque inside to decide.

You can use std::list as your underlying container as well; in that case, each pop deletes the memory of the corresponding list node.
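
For instance, a minimal sketch of a queue backed by std::list instead of the default std::deque:

#include <list>
#include <queue>

int main()
{
    // the second template argument selects the underlying container
    std::queue<double, std::list<double>> the_queue;

    the_queue.push(1.0);
    the_queue.push(2.0);
    the_queue.pop(); // frees the node that held 1.0
}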

You can also write your own container, or modify an existing one, to match your own memory-management patterns. Your container needs to support a few member functions (such as push_back and pop_front), which you can read about in the relevant online documentation.

Here is an example of a deque adapter which shrinks its capacity every 1024 pop calls:

#include <cstddef> // std::size_t
#include <deque>
#include <queue>
#include <utility> // std::move

template<class T>
class DequeAdapter{

public:
    // typedefs that std::queue expects its underlying container to provide
    using value_type      = typename std::deque<T>::value_type;
    using reference       = typename std::deque<T>::reference;
    using const_reference = typename std::deque<T>::const_reference;
    using size_type       = typename std::deque<T>::size_type;

private:
    std::deque<T> m_Deque;
    std::size_t m_PopCount;

public:
    DequeAdapter():
        m_PopCount(0){}

    bool empty() const noexcept{
        return m_Deque.empty();
    }

    size_type size() const noexcept{
        return m_Deque.size();
    }

    T& front() noexcept{
        return m_Deque.front();
    }

    const T& front() const noexcept{
        return m_Deque.front();
    }

    T& back() noexcept{
        return m_Deque.back();
    }

    const T& back() const noexcept{
        return m_Deque.back();
    }

    void push_back(const T& t){
        m_Deque.push_back(t);
    }

    void push_back(T&& t){
        m_Deque.push_back(std::move(t));
    }

    void pop_front(){
        m_Deque.pop_front();
        m_PopCount++;
        // every 1024 pops, ask the deque to release unused capacity
        if (m_PopCount % 1024U == 0U){
            m_Deque.shrink_to_fit();
        }
    }
};


template <class T>
using LeanQueue = std::queue<T,DequeAdapter<T>>;

Do note, however, that shrinking the capacity means moving or copying the queue elements into a new, smaller block of memory: the memory consumption will be lower, but performance may degrade.
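
For completeness, a small usage sketch, assuming the DequeAdapter and LeanQueue definitions above are in scope (the numbers are arbitrary):

#include <cstdio>

int main()
{
    LeanQueue<double> the_queue; // std::queue backed by DequeAdapter<double>

    for(int i = 0; i < 500000; i++)
        the_queue.push(1.0 * i);

    while(!the_queue.empty())
        the_queue.pop(); // shrink_to_fit is requested every 1024 pops

    std::printf("done\n");
}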

answered Nov 15 '22 by David Haim


Any memory the queue manages will be released when the queue goes out of scope.

However, even then the memory may not be released back to the OS, because the standard library assumes that if you used the memory before, you may well need it again.
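
To illustrate the first point, a minimal sketch (the function name is just illustrative): the queue's memory is handed back to the allocator when the queue goes out of scope, even though the process-level usage reported by the OS may not drop:

#include <queue>

void fill_and_drop()
{
    std::queue<double> q;
    for(int i = 0; i < 500000; i++)
        q.push(1.0 * i);
}   // q is destroyed here: its memory returns to the allocator,
    // but the C runtime may keep it cached rather than give it back to the OS

int main()
{
    fill_and_drop();
}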

The specifics of this are taken care of by malloc/free in the particular C runtime library your program is linked against.

Is this an embedded system where memory is tight? (In which case, perhaps consider fixed-size containers.) Or is it running on a server/desktop/iPad? (In which case, tell your boss to stop worrying about things he doesn't understand.)

answered Nov 15 '22 by Richard Hodges


Allocating and deallocating memory can become quite an overhead for a container that is used over and over. For this reason containers are allowed to keep whatever memory they have grown to. If you want to reclaim the memory you have to do it explicitly. Often that involves letting the container go out of scope, because the Standard (for some reason) does not provide a function that guarantees the memory is released. Some containers have a function that may (or may not) release unused memory, shrink_to_fit(), but it is no guarantee.
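
As an aside, here is a minimal sketch of that non-binding request on a container that exposes it directly (std::deque in this case; std::queue itself does not expose shrink_to_fit):

#include <cstdio>
#include <deque>

int main()
{
    std::deque<double> d(500000, 1.0);

    while(!d.empty())
        d.pop_front();

    d.shrink_to_fit(); // a request to release unused capacity;
                       // the implementation is free to ignore it
    std::printf("size: %zu\n", d.size());
}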

There is, thankfully, an idiom that reduces a container to its initial amount of memory, called swap and reset.

You basically create a new, empty container and swap() its (empty) contents with your working container.

// create a temporary empty container and swap its
// contents with the working container
std::queue<double>().swap(the_queue);
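
The same idiom can be wrapped in a small helper; a sketch under the assumption that the container is default-constructible and has a member swap (the name clear_and_shrink is just illustrative):

#include <queue>

// Replace the container's contents (and capacity) with those of a
// freshly default-constructed one. Works for std::queue, std::vector, etc.
template<class Container>
void clear_and_shrink(Container& c)
{
    Container().swap(c);
}

int main()
{
    std::queue<double> the_queue;
    for(int i = 0; i < 500000; i++)
        the_queue.push(1.0 * i);

    clear_and_shrink(the_queue); // the_queue is now empty and back to
                                 // its default-constructed footprint
}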
answered Nov 15 '22 by Galik