How does memory on the heap get exhausted?

I have been testing some of my own code to see how much allocated memory it takes to exhaust the heap (free store). However, unless my testing code is wrong, I am getting completely different results for how much memory can be placed on the heap.

I am testing two different programs. The first program creates vector objects on the heap. The second program creates integer objects on the heap.

Here is my code:

#include <vector>
#include <stdio.h>

int main()
{
    long long unsigned bytes = 0;
    unsigned megabytes = 0;

    for (long long unsigned i = 0; ; i++) {

        std::vector<int>* pt1 = new std::vector<int>(100000, 10); // intentionally leaked

        bytes += sizeof(*pt1);                     // vector header
        bytes += pt1->size() * sizeof(pt1->at(0)); // element buffer
        megabytes = bytes / 1000000;

        if (i >= 1000 && i % 1000 == 0) {
            printf("There are %u megabytes on the heap\n", megabytes);
        }

    }
}

The final output of this code before getting a bad_alloc error is: "There are 2000 megabytes on the heap"

In the second program:

#include <stdio.h>

int main()
{
    long long unsigned bytes = 0;
    unsigned megabytes = 0;

    for (long long unsigned i = 0; ; i++) {

        int* pt1 = new int(10); // intentionally leaked

        bytes += sizeof(*pt1);
        megabytes = bytes / 1000000;

        if (i >= 100000 && i % 100000 == 0) {
            printf("There are %u megabytes on the heap\n", megabytes);
        }

    }
}

The final output of this code before getting a bad_alloc error is: "There are 511 megabytes on the heap"

The final output in both programs is vastly different. Am I misunderstanding something about the free store? I thought that both results would be about the same.

Asked by Kyle C, Oct 13 '19.

1 Answer

It is very likely that pointers returned by new on your platform are 16-byte aligned.

If int is 4 bytes, this means that every new int(10) uses 4 bytes and leaves the other 12 unusable.

This alone would explain the roughly four-fold gap: the same physical heap yields only about 500 MB of counted bytes when consumed 16 bytes at a time, versus about 2000 MB when consumed by large buffers.

On top of that, there is the overhead of keeping track of allocated blocks (at a minimum, their size and whether they are free or in use). The details are specific to your system's memory allocator, but this bookkeeping too adds a per-allocation cost. See "What is a Chunk" in https://sourceware.org/glibc/wiki/MallocInternals for an explanation of glibc's allocator.

Answered by NPE, Oct 13 '22.