I wrote a tiny program to get a rough estimate of the heap size (I know I could probably have just googled it, but this seemed more fun and I thought it would be a quick simple thing).
This is the whole program:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    unsigned long long alloc_size = 1024;
    unsigned long long total = 0;
    unsigned int i = 0;
    void *p = NULL;
    while (alloc_size >= 16) {
        p = malloc(alloc_size);
        if (p) {
            total += alloc_size;
            i++;
            printf("%u)\tAllocated %llu bytes\tTotal so far %llu\n", i, alloc_size, total);
            alloc_size *= 2;
        }
        else {
            alloc_size /= 2;
        }
    }
    printf("Total allocations: %llu bytes in %u allocations\n", total, i);
    return 0;
}
I ran this, and was surprised by 2 things:
42) Allocated 65536 bytes Total so far 28337044480
43) Allocated 16384 bytes Total so far 28337060864
44) Allocated 16384 bytes Total so far 28337077248
45) Allocated 16384 bytes Total so far 28337093632
46) Allocated 16384 bytes Total so far 28337110016
47) Allocated 32768 bytes Total so far 28337142784
48) Allocated 8192 bytes Total so far 28337150976
49) Allocated 8192 bytes Total so far 28337159168
50) Allocated 16384 bytes Total so far 28337175552
51) Allocated 32768 bytes Total so far 28337208320
52) Allocated 65536 bytes Total so far 28337273856
53) Allocated 131072 bytes Total so far 28337404928
54) Allocated 262144 bytes Total so far 28337667072
55) Allocated 16384 bytes Total so far 28337683456
56) Allocated 8192 bytes Total so far 28337691648
57) Allocated 4096 bytes Total so far 28337695744
As you can see, this is after it already reached the peak allocation size and is on its way down. Between allocations #42 and #43, it would have tried to allocate 32768 bytes and failed. The same goes for each of allocations #43 - #45. So how did it suddenly manage it at #47? You can see the same thing at #50 - #54. And this is only a sample. This same behavior happened multiple times in the total of 272 allocations in this particular run.
I hope this was not too long, but I am really confused and would be glad for any light shed on this.
EDIT
I should add that this is on a Windows 7 64-bit machine, compiled with the 64-bit mingw-w64 GCC compiler, no compiler flags.
On Windows, a memory allocation (commit) will succeed if there is enough paging file space to back it. The amount of space available in the paging file can change due to actions by other processes reserving and releasing memory.
On Windows, malloc is a thin wrapper around HeapAlloc, which is built on VirtualAlloc.
VirtualAlloc will allow you to reserve and/or commit memory.
Reserving means setting aside a range of the process's virtual address space. A reservation succeeds if the range of virtual addresses is available. It doesn't mean that memory is available, just that those addresses won't be used for anything else.
Committing means setting aside space in the paging file. A commit succeeds if the paging file has enough free space to back the commit.
Accessing reserved but not-yet-committed memory is an error and will probably crash.
Accessing committed memory may trigger page faults, which the system handles by mapping those pages to actual RAM. If necessary, some RAM will be copied out to the paging file (which is guaranteed to have enough space because the commit succeeded), and then that RAM is re-purposed for the access that originally triggered the page fault. Thus accessing committed memory always succeeds.
Windows Heaps use VirtualAlloc (and related functions) under the hood. Internally, a heap starts with an initial reservation, only part of which is committed. A HeapAlloc can fail if: (1) it exhausts its initial reservation or (2) it can't commit more pages because there isn't enough free space in the paging file. When it succeeds, HeapAlloc returns pointers to committed memory, so accessing memory from a successful HeapAlloc always succeeds.
Since malloc is just a thin wrapper on HeapAlloc, the preceding paragraph applies to malloc as well.
There's an extra wrinkle in that HeapAlloc handles large allocations separately. I'm not sure if these come from its initial reservation or if they're distinct blocks.
The behavior of baruch's program suggests that it's running into the limit of available memory in the paging file. The available space in the paging file normally varies as other processes go about their business, so it's not surprising to see some allocations fail and others succeed while you're near this limit.
Also, the system might adjust the size of the paging file based on perceived demand, policy, and available disk space. So if you hit the limit once, but try again later, you might find that the limit is higher.
When you call malloc, what actually happens is that you're allocating virtual memory. Once you try to access that memory, the OS (kernel) allocates the actual, physical memory and maps that physical address onto the virtual address (your pointer).
Most, if not all, modern systems will gladly return a pointer even though the memory is, at the time you call malloc, not available. Or at least: not available in its required form (i.e. as a contiguous block of memory).
malloc has to return a pointer to a contiguous block of memory (if it didn't, pointer arithmetic wouldn't be possible). So say you have 2 GiB of memory available: it's very likely that it won't be available as one contiguous block, whereas allocating those 2 GiB in smaller chunks will work perfectly fine. This can, in part, explain why you see a large allocation fail and then see the same amount of memory allocated as two smaller chunks (because the program halves the size after each failure).
Now, at no particular point are you actually writing to that memory. What you need to understand is that, unless you access this allocated virtual memory, it needn't be mapped to physical memory at all. Some systems, like Linux, will not allocate physical memory until you actually access the malloc'ed block. When that happens, it triggers the kernel to go in and allocate physical memory; that physical address is then mapped onto the virtual address. This is called optimistic allocation.
Because most (if not all) modern systems use optimistic allocation, they can allow processes to allocate more memory than is actually available at the time. What matters is that the memory is available by the time the process actually starts to use it. Which your program never does.
When you call free on malloc'ed memory that you've never used, all you actually release is virtual memory; the physical memory remains untouched. If you have written to heap memory and then call free, that block of memory is not necessarily returned to the system directly (that's not really the job of free). The next time you call malloc, it's very possible that the newly allocated memory will be taken from the block you just free'd. But these are implementation details that would take us too far.
In cases like this, I like to use silly analogies using real-world situations:
malloc isn't like planting a flag in a chunk of memory so that nothing else, in any way, shape or form, is allowed to access that memory. It's more like booking a reservation at a restaurant. Restaurants often overbook, because they expect people not to show up, or to cancel. 99.99% of the time, when you arrive at the restaurant, there will be a table available for you (the restaurant allocated the table optimistically).
If you don't show up (i.e. you never access the memory), the table is available for other people to use. If you did arrive, sat down and ate your dinner (wrote to the memory), you free the table by paying the bill and leaving. What the restaurant owner does with the table afterwards is not your concern. They might have an outstanding reservation for someone else who arrives 10 minutes after you left. If so, the restaurateur allocated the same resource twice, but that didn't cause problems, simply because the resource was free'd by the time it was needed elsewhere.
This elaborate analogy is, essentially, how and why optimistic allocation works.
Since the operating system is responsible for managing the available memory on the user's behalf, you cannot expect the amount of memory available to behave linearly between two calls to malloc. In between, the OS can free up memory, for example by swapping pages out to the hard disk or to another area of RAM, even if you don't call free explicitly. That's why an allocation you expect to fail can succeed.
Short answer: the OS can free up memory even when you haven't released any yourself.