Why would a VC++ program that is storing 5MB of data consume 64MB of system memory?

I have been trying to figure out why my program consumes so much system RAM. I'm loading a file from disk into a vector of structs, each holding several dynamically allocated arrays. A 16MB file ends up consuming 280MB of system RAM according to Task Manager. The types in the file are mostly chars, with some shorts and a few longs. There are 331,000 records in the file, each containing about 5 fields on average. I converted the vector to a struct, which reduced the memory to about 255MB, but that still seems very high. With the vector taking up so much memory the program is running out of memory, so I need to find a way to make the usage more reasonable.

To isolate the problem I wrote a simple program that just stuffs a vector (or a plain array) with 1,000,000 pointers, each pointing at a single dynamically allocated value. I would expect that to need about 4+1 bytes per element, around 5MB of storage (closer to 8MB with the 4-byte size_type the test actually allocates), but in fact it uses 64MB (array version) or 67MB (vector version). The program consumes only about 400KB when it first starts, so why is an additional 59MB (array) or 62MB (vector) being allocated? The extra memory seems to be per container: if I create a second size_check and copy everything into it, the program uses 135MB for 10MB worth of pointers and data.
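
Here is the back-of-envelope math I'm working from, using the 4-byte size_type from the code below; the per-block heap header and rounding figures are guesses about the allocator, not measured values:

#include <cstdio>

int main()
{
    const unsigned int count   = 1000000;   // one million elements
    const unsigned int pointer = 4;         // 32-bit pointer stored in the container
    const unsigned int payload = 4;         // one size_type (unsigned long) per element
    const unsigned int header  = 8;         // guessed per-block heap header
    const unsigned int rounded = 8;         // payload rounded up to a guessed 8-byte granularity

    printf( "naive estimate : %u MB\n", count * ( pointer + payload ) / ( 1024 * 1024 ) );
    printf( "with overhead  : %u MB\n", count * ( pointer + rounded + header ) / ( 1024 * 1024 ) );
    return 0;
}

Even with those generous guesses I only get to about 19MB, nowhere near the 64MB I'm seeing.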

Thanks in advance,

size_check.h

#pragma once

#include <vector>

class size_check
{
public:
    size_check(void);
    ~size_check(void);

    typedef unsigned long   size_type;  // 4 bytes on 32-bit VC++

    void stuff_me( unsigned int howMany );

private:
    size_type**         package;            // array version: one pointer per element
//  std::vector<size_type*> package;        // vector version, for comparison
    size_type*          me;                 // never used; stuff_me() shadows it with a local
};

size_check.cpp

#include "size_check.h"

size_check::size_check(void)
{
}

size_check::~size_check(void)
{
    // Nothing is freed here, so every allocation stays live (and is counted
    // by Task Manager) for the lifetime of the process.
}

void size_check::stuff_me( unsigned int howMany )
{
    package = new size_type*[howMany];  // one block of howMany pointers
    for( unsigned int i = 0; i < howMany; ++i )
    {
        // Every 4-byte value gets its own heap allocation, so each element
        // pays the allocator's per-block bookkeeping on top of its payload.
        // Note: this local `me` shadows the member of the same name.
        size_type *me = new size_type;
        *me = 33;
        package[i] = me;
//      package.push_back( me );
    }
}

main.cpp

#include "size_check.h"
#include <cstdio>   // for printf

int main( int argc, char * argv[ ] )
{
    const unsigned int buckets = 20;    // 20 containers...
    const unsigned int size = 50000;    // ...of 50,000 elements each: 1,000,000 allocations in total

    size_check* me[buckets];

    for( unsigned int i = 0; i < buckets; ++i )
    {
        me[i] = new size_check();
        me[i]->stuff_me( size );
    }
    printf( "done.\n" );
}
asked Jun 11 '11 by Mark


1 Answer

In my test using VS2010, a debug build had a working set size of 52,500KB. But a release build had a working set size of 20,944KB.
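
If you want figures more stable than Task Manager, you can also query the working set from inside the process. A sketch using the Win32 GetProcessMemoryInfo API (the helper name is mine; link against psapi.lib):

#include <windows.h>
#include <psapi.h>      // GetProcessMemoryInfo, PROCESS_MEMORY_COUNTERS
#include <cstdio>

#pragma comment( lib, "psapi.lib" )

// Hypothetical helper: print the current working set in KB.
void report_working_set( const char* label )
{
    PROCESS_MEMORY_COUNTERS pmc;
    if( GetProcessMemoryInfo( GetCurrentProcess(), &pmc, sizeof( pmc ) ) )
    {
        printf( "%s: working set = %lu KB\n",
                label, (unsigned long)( pmc.WorkingSetSize / 1024 ) );
    }
}

Calling it before and after stuff_me() shows exactly how much each round of allocations adds.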

Debug builds usually use more memory than optimized builds because the debug heap manager adds bookkeeping to every allocation: a per-block header for leak tracking, "no man's land" guard bytes (memory fences) around each block, and recognizable fill patterns in newly allocated and freed memory.

In release builds, I suspect that the heap manager reserves more memory than you are actually using as a performance optimization.
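
If the goal is to get the usage down, the most effective change is to stop making a million four-byte allocations in the first place. A sketch of an alternative layout, reusing the question's names (my suggestion, not something the original code does): store the values directly in one contiguous vector, so the heap bookkeeping is paid once per container instead of once per element.

#include <vector>

class size_check
{
public:
    typedef unsigned long size_type;

    void stuff_me( unsigned int howMany )
    {
        package.reserve( howMany );     // a single allocation up front
        for( unsigned int i = 0; i < howMany; ++i )
            package.push_back( 33 );    // value stored in-place; no per-element new
    }

private:
    std::vector<size_type> package;     // ~4 bytes per element plus one block's overhead
};

With this layout, 1,000,000 values cost about 4MB in any build configuration. Your release figure of roughly 21MB (about 21 bytes per element) suggests each pointer-plus-value pair was carrying around a dozen bytes of heap bookkeeping on top of its 8 bytes of data.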

answered by sean e