Allocating RAM shows double the RAM usage in Task Manager

Doing some profiling (memory & speed), I've been stumped by the fact that Win7 seems to be allocating exactly double the RAM I ask for... Note this is the first time I've done such active profiling on Win7, so I don't really know what to expect.

I'm allocating precise amounts of RAM in a loop, using an express edition of MSVC under Win7 (64-bit). The application is compiled and runs as 32-bit.

I allocate 24 MB of RAM, and Task Manager shows my app as using 48 MB (under all memory columns, including committed, since I'm actually memset'ing within the new regions). When I allocate 24 MB more (the total should now be 48 MB), my app jumps to 96 MB, etc.

These are allocated as 1,000,000 24-byte structs.

I've searched the net but have found nothing that matches my observations exactly.

Anyone have a clue?

If this is just OS trickery (or incompetence?), is there any tool which can give me the real memory consumption of a process? (It's hard to find leaks when the app gushes to begin with. ;-)

[----------- edited, additional info -----------]

Note (by the path in the console title bar) that I am building in Release mode (using all default "empty" project settings of MSVC 2010), so there is no additional "debug" memory being allocated (which can be quite extensive in some projects).

Here is a short, complete C app which illustrates the behavior:

#include <stdio.h>
#include <assert.h>
#include <conio.h>
#include <stdlib.h>
#include <string.h>   /* for memset */

typedef unsigned int u32;
typedef struct myStruct MYS;
struct myStruct {
    u32 type;
    union {
        u32 value;
        char * str;
        void * data;
        MYS ** block;
        MYS * plug;
    };
    u32 state, msg, count, index;
};

int main(int argc, char *argv[]){
    int i, j;
    MYS *ref;
    printf("size of myStruct: %u\n\n", (unsigned)sizeof(MYS));
    for (i = 0; i < 10; i++){
        printf("allocating started...\n");
        for (j = 0; j < 1000000; j++){
            /* each iteration deliberately leaks one struct */
            ref = (MYS *) malloc(sizeof(MYS));
            assert(ref);
            memset(ref, 0, sizeof(MYS));   /* touch it so it gets committed */
        }
        printf("   Done... Press 'enter' for Next Batch\n");
        _getch();
    }
    _getch();
    return 0;
}

And here is an image showing the memory on my machine after one loop. Every other run, it adds ~48 MB instead of ~24 MB!

process info after 1 loop (should be ~24MB)

asked Jul 11 '11 by moliad


1 Answer

This is probably due to a combination of padding, internal housekeeping structures, and memory alignment restrictions.

When you invoke malloc(size), you don't actually get a buffer of size bytes: you get a buffer of at least size bytes. This is because, for efficiency reasons, your OS prefers to hand out memory buffers in just a few different sizes and will not tailor buffers to save space. For instance, if you ask for 24 bytes on Mac OS, you'll get a buffer of 32 bytes (a waste of 25%).
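As a quick, admittedly non-portable sanity check, you can allocate two small blocks back to back and look at the pointer spacing: on many allocators, consecutive small allocations land adjacently, so the gap hints at the real per-block footprint. This is only a heuristic sketch of my own, not guaranteed by any standard:

#include <stdio.h>
#include <stdlib.h>

/* Heuristic: consecutive small allocations often end up adjacent in
   memory, so the distance between them approximates the true
   per-block footprint (payload + heap header + alignment padding).
   Nothing guarantees adjacency, so treat the output as a hint only. */
int main(void){
    char *a = (char *)malloc(24);
    char *b = (char *)malloc(24);
    if (a && b)
        printf("spacing between two 24-byte blocks: %ld bytes\n",
               (long)((size_t)b - (size_t)a));
    free(a);
    free(b);
    return 0;
}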

Add to that rounding overhead the housekeeping structures your OS uses to manage malloc'ed buffers (probably accounting for a few extra bytes per allocation), and the fact that padding may increase the size of your object (to a multiple of your compiler's preferred alignment), and you'll see that allocating millions of small objects into individual buffers is very expensive.
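To put a number on that overhead, and to answer the "real memory consumption" question, you can query your process's commit charge programmatically instead of trusting a Task Manager column. Here is a rough sketch of my own using the documented Win32 GetProcessMemoryInfo API (link with psapi.lib); the exact growth you observe will depend on your heap:

#include <stdio.h>
#include <stdlib.h>
#include <windows.h>
#include <psapi.h>
#pragma comment(lib, "psapi.lib")

/* Committed (private) bytes of the current process, as reported
   by the OS itself. */
static SIZE_T committed_bytes(void)
{
    PROCESS_MEMORY_COUNTERS pmc;
    if (!GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc)))
        return 0;
    return pmc.PagefileUsage;   /* the process's commit charge */
}

int main(void)
{
    SIZE_T before = committed_bytes();
    int j;
    for (j = 0; j < 1000000; j++)
        malloc(24);             /* deliberately leaked, as in the question */
    printf("1,000,000 x malloc(24) grew the commit charge by %.1f MB "
           "(the payload alone would be 22.9 MB)\n",
           (committed_bytes() - before) / (1024.0 * 1024.0));
    return 0;
}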

Long story short: allocate just one big buffer of sizeof(YourType) * 1000000 and you shouldn't see any noticeable overhead. Allocate a million individual sizeof(YourType) buffers, and you'll end up wasting a lot of space.
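A minimal sketch of that fix, reusing the MYS layout and the million-object count from the question (the zeroing mirrors the original loop):

#include <assert.h>
#include <stdlib.h>
#include <string.h>

typedef unsigned int u32;
typedef struct myStruct MYS;    /* same 24-byte struct as in the question */
struct myStruct {
    u32 type;
    union { u32 value; char *str; void *data; MYS **block; MYS *plug; };
    u32 state, msg, count, index;
};

int main(void)
{
    /* One 24 MB block instead of 1,000,000 separate 24-byte blocks:
       no per-allocation header, no per-block rounding, and a single
       free() releases the whole batch. */
    MYS *pool = (MYS *)malloc(1000000 * sizeof(MYS));
    assert(pool);
    memset(pool, 0, 1000000 * sizeof(MYS));

    /* the j-th object is simply &pool[j] */
    pool[42].type = 7;

    free(pool);
    return 0;
}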

answered Sep 23 '22 by zneak