I'm building a photo book layout application. The application frequently decompresses JPEG images into in-memory bitmap buffers. Image size is constrained to 100 megapixels, although images usually do not exceed 15 megapixels.
Sometimes memory allocations for these buffers fail: [[NSMutableData alloc] initWithLength:] returns nil. This seems to happen when the system's free physical memory approaches zero.
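For reference, the failing call looks roughly like this (a minimal sketch; the dimensions and the 4-bytes-per-pixel assumption are illustrative, matching the 400 MB worst case mentioned below):

```objc
#import <Foundation/Foundation.h>

// Sketch of the failing allocation path. A 100-megapixel image at an
// assumed 4 bytes per pixel needs a ~400 MB bitmap buffer.
NSUInteger width  = 10000;
NSUInteger height = 10000;
NSUInteger length = width * height * 4;

NSMutableData *bitmap = [[NSMutableData alloc] initWithLength:length];
if (bitmap == nil) {
    // Observed when the system's free physical memory approaches zero.
    NSLog(@"bitmap allocation of %lu bytes failed", (unsigned long)length);
}
```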
My understanding of the virtual memory system in Mac OS X was that an allocation in a 64-bit process virtually (sic) can't fail. There are 16 exabytes of address space, of which I'm trying to allocate at most 400 megabytes at a time (100 megapixels at 4 bytes per pixel). Theoretically I could allocate 40 billion of these buffers before hitting the hard limit of the available address space. Of course practical limits would prevent this scenario, as swap space is constrained by the boot volume's size. In reality I'm only making very few of these allocations (fewer than ten).
What I do not understand is why an allocation fails at all, however low physical memory may be at that point. I thought that, as long as there is swap space left, memory allocation would not fail (since the pages are not even mapped at that point).
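That intuition matches how plain malloc behaves: a large allocation normally just reserves address space, and physical pages are only wired in when first touched. A quick illustration (not the app's code, just a sketch of the lazy-mapping behavior):

```objc
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

// Reserving 4 GB of address space typically succeeds regardless of free
// physical memory, because pages are mapped lazily on first write.
int main(void) {
    size_t size = 4UL * 1024 * 1024 * 1024;
    char *buf = malloc(size);
    if (buf == NULL) {
        perror("malloc");
        return 1;
    }
    printf("reserved %zu bytes without touching physical memory\n", size);
    memset(buf, 0, 4096); // touching a page is what actually consumes RAM/swap
    free(buf);
    return 0;
}
```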
The application is garbage collected.
Edit:
I had time to dig into this problem a little further and here are my findings:
- When NSMutableData fails, a plain malloc still succeeds in allocating the same amount of memory.
- I assume NSData uses NSAllocateCollectable to perform the allocation instead of malloc when running under garbage collection.
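A small diagnostic along these lines reproduces the asymmetry (a hedged sketch; whether NSMutableData really routes through NSAllocateCollectable under GC is my assumption above, not something this code verifies):

```objc
#import <Foundation/Foundation.h>

// Compare the two allocation paths for the same buffer size in a
// garbage-collected process (assumes the binary is compiled with -fobjc-gc).
static void compareAllocators(NSUInteger length) {
    // Collected heap: this is the path that fails when physical memory is low.
    NSMutableData *data = [[NSMutableData alloc] initWithLength:length];
    NSLog(@"NSMutableData(%lu): %@", (unsigned long)length,
          data ? @"ok" : @"nil");

    // Plain malloc heap: still succeeds for the same size.
    void *raw = malloc(length);
    NSLog(@"malloc(%lu): %@", (unsigned long)length,
          raw ? @"ok" : @"NULL");
    free(raw);
}
```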
My conclusion from all this is that the collector is unable to allocate big chunks of memory when physical memory is low, which, again, I don't understand.
The short answer is: you do nothing. The system decides what memory gets swapped to disk and allocates swap space as needed. The system allocates virtual memory addresses up to ~18 exabytes so that it can then swap as needed.
Another guess: it may be that your colleague's machine is configured with a stricter maximum-memory-per-user-process setting. To check, type
ulimit -a
into a console. For me, I get:
```
~ iainmcgin$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 256
pipe size            (512 bytes, -p) 1
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 266
virtual memory          (kbytes, -v) unlimited
```
From my settings above, it seems there is no per-process limit on memory usage on my machine. This may not be the case for your colleague.
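The same limits can also be read from inside the process with getrlimit(2), which is what ulimit reports (a sketch; RLIM_INFINITY corresponds to "unlimited" in the output above):

```objc
#include <stdio.h>
#include <sys/resource.h>

// Print the per-process memory limits that "ulimit -v" and "ulimit -d" show.
int main(void) {
    struct rlimit as, data;
    getrlimit(RLIMIT_AS, &as);     // virtual memory (ulimit -v)
    getrlimit(RLIMIT_DATA, &data); // data segment size (ulimit -d)
    printf("virtual memory: soft=%llu hard=%llu\n",
           (unsigned long long)as.rlim_cur, (unsigned long long)as.rlim_max);
    printf("data seg size:  soft=%llu hard=%llu\n",
           (unsigned long long)data.rlim_cur, (unsigned long long)data.rlim_max);
    return 0;
}
```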
I'm using Snow Leopard:
```
~ iainmcgin$ uname -rs
Darwin 10.6.0
```
The answer lies in the implementation of libauto.
As of OS X 10.6, an arena of 8 GB is allocated for garbage-collected memory on 64-bit platforms. This arena is split in half: one half for large allocations (>= 128 KB), the other for small (< 2048 bytes) and medium (< 128 KB) allocations.
So in effect, on 10.6 you have 4 GB of memory available for large allocations of garbage-collected memory. On 10.5 the arena had a size of 32 GB, but Apple lowered it to 8 GB in 10.6.
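If that is right, the cap is easy to probe: repeatedly allocating 400 MB collectable blocks should fail after roughly ten iterations, long before address space or swap runs out (a hypothetical probe; it assumes the binary is built with -fobjc-gc so that NSMutableData draws from the collected heap):

```objc
#import <Foundation/Foundation.h>

// Allocate 400 MB blocks from the collected heap until the allocator
// gives up. With a ~4 GB large-allocation arena, this should stop
// after about ten blocks.
int main(void) {
    const NSUInteger blockSize = 400UL * 1024 * 1024;
    NSMutableArray *keepAlive = [NSMutableArray array];
    for (int i = 0; i < 64; i++) {
        NSMutableData *block = [[NSMutableData alloc] initWithLength:blockSize];
        if (block == nil) {
            NSLog(@"allocation %d failed after ~%lu MB", i,
                  (unsigned long)i * 400);
            break;
        }
        [keepAlive addObject:block]; // keep blocks reachable so the collector cannot reclaim them
    }
    return 0;
}
```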