 

Why does this std::queue/list of pointers to structs not free memory until List.Size() == 0?

Tags:

c++

opencv

I have a program that feeds pointers to structs containing cv::Mat clones into a std::queue/std::list (tried both) that other threads consume.

Reading/creation is fast, but consuming is slow, so the queue grows in size over time.
The queue gets huge, easily taking up >50% of system memory.

When the reading/creation stops, the queue shrinks, but the memory doesn't!
When the queue size finally hits 0, the memory disappears almost instantly. That queue.size() == 0 is the trigger can be confirmed by making sure the queue never pops the last pointer: in that case the memory is never released. (Note: the queue itself is static, so it has not gone out of scope.)

So I have two questions:

  1. Why does the memory disappear when the queue hits size zero? Or in other words, why doesn't the memory disappear as the pointers are consumed/deleted?

  2. How can the memory be released explicitly?


The code is something like this:

struct MyStruct {
    cv::Mat matA;
    ~MyStruct(){
       cout << "Yeah, destructor is called!" << endl;
       //matA.release(); //Not needed, right? Adding it does not change anything.
    }
};

static queue<shared_ptr<MyStruct>> awaitingProcessing;
static mutex queueGuard;

Thread 1 (queue filler):

BYTE* buffer = (BYTE*)malloc(dataWidth*dataHeight);
while(signal){
    LoadData(buffer);
    cv::Mat data = cv::Mat(dataHeight, dataWidth, CV_8U, buffer);

    auto addable = make_shared<MyStruct>();
    addable->matA = data.clone();
    lock_guard<mutex> guard(queueGuard);
    awaitingProcessing.push(addable);
}

Thread 2 (consumer):

    shared_ptr<MyStruct> pullFromQueue(){
        lock_guard<mutex> guard(queueGuard);
        if (awaitingProcessing.size() > 0){
            auto returnable = awaitingProcessing.front();
            awaitingProcessing.pop();
            return returnable;
        }
        return nullptr;
    }

    void threadLogic(){
        while (!interruptFlag){
            auto ptr = pullFromQueue();
            if (ptr == nullptr){
                usleep(5);
            }
            else{
                doSomething(ptr);
            }
            // ptr destructor called here, as refcount properly hits zero
        } 

    }
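(As an aside, the usleep() polling above could instead block on a std::condition_variable until the producer pushes an item. A minimal single-file sketch with made-up names, not my actual code:)

```cpp
#include <cassert>
#include <condition_variable>
#include <memory>
#include <mutex>
#include <queue>

// Illustrative alternative to the usleep() polling loop: the consumer
// blocks on a condition_variable until an item is available.
struct Item { int payload; };

class blocking_queue {
    std::queue<std::shared_ptr<Item>> q_;
    std::mutex m_;
    std::condition_variable cv_;
public:
    void push(std::shared_ptr<Item> p) {
        {   // keep the lock scope as small as possible
            std::lock_guard<std::mutex> lk(m_);
            q_.push(std::move(p));
        }
        cv_.notify_one();
    }
    std::shared_ptr<Item> pop() {  // blocks until an item is available
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        auto p = q_.front();
        q_.pop();
        return p;
    }
};
```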


If I recall correctly, the standard containers often don't release their memory, holding it in reserve in case the size grows again. However, this container (queue and/or list) holds only pointers, so even when the queue gets large, its own memory footprint should be small.
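(For illustration, this capacity-retention behavior is easy to see with a std::vector; a small sketch, where the helper name is mine:)

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Illustrative: clear() sets size to 0 but keeps the capacity;
// shrink_to_fit() is the explicit (non-binding) request to hand
// the unused memory back to the allocator.
std::pair<std::size_t, std::size_t> capacity_after_clear_and_shrink() {
    std::vector<int> v(1000000, 42);
    v.clear();                          // size == 0, capacity unchanged
    std::size_t after_clear = v.capacity();
    v.shrink_to_fit();                  // mainstream implementations free the buffer here
    return {after_clear, v.capacity()};
}
```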

I'm not familiar with OpenCV's memory management, but it seems to be doing something similar. Pausing the queue filling allows the queue size to shrink, but the memory doesn't shrink. Then, resuming filling increases the queue size, without increasing memory size.


To summarize a few key points:

  • Memory does get released without any scope change (i.e., it is not a memory leak)
  • Memory is ONLY released when the queue size hits zero; it will not be released if the queue size stays at 1 forever
  • The structs get destructed
  • The structs contain cloned cv::mats (I think this is the key point)
  • The list/queue only contains pointers and should thus be small
Mars asked Jul 01 '20 06:07

1 Answer

std::queue uses a std::deque as its internal container by default. When memory is actually deallocated is largely implementation defined (which could explain why it is only released when the size hits zero), but std::deque and std::vector do have a function for freeing excess memory, namely shrink_to_fit (a C++11 feature). It is not exposed through the std::queue interface, but can be reached via inheritance (the underlying container member c in std::queue is protected).

Pseudocode:

template<class T>
struct shrinkable_queue : public std::queue<T> {
  // c is the protected underlying container; inside a template it is a
  // dependent name, so it must be accessed through this->
  void shrink_to_fit() { this->c.shrink_to_fit(); }
};

You could also use a std::queue<T, std::list<T>>. Since you said you tried std::list as well, I checked the MSVC implementation, and at least in my version it deallocates memory even when removing single nodes (as expected).
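Filled out into a self-contained, compilable sketch of both options (the type alias name is just for illustration):

```cpp
#include <deque>
#include <list>
#include <queue>

// Option 1: expose shrink_to_fit() of the protected underlying deque `c`.
// Inside a class template, `c` is a dependent name, hence this->c.
template <class T>
struct shrinkable_queue : public std::queue<T> {
    void shrink_to_fit() { this->c.shrink_to_fit(); }
};

// Option 2: a node-based backing container; each pop() frees its node
// immediately, at the cost of one heap allocation per element and worse
// cache locality.
using node_queue = std::queue<int, std::list<int>>;
```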

darune answered Sep 17 '22 16:09