 

The Memory Usage and Fragmentation of .NET Image classes: Bitmap vs. Metafile

Tags:

.net

memory

image

Prompted by seemingly premature Out of Memory exceptions, we have been closely examining the memory usage of various .NET constructs, particularly large objects that tend to fragment the Large Object Heap and thereby trigger those exceptions. One area that has been a bit surprising is the .NET Image classes: Bitmap and Metafile.

Here's what we think we have learned, but have been unable to find MS documentation to verify, so we would appreciate any confirmation others can give:

(1) When you create a Bitmap object from a compressed raster file (JPG, PNG, GIF, etc), it consumes memory for a fully uncompressed pixel array, at the full resolution of that file. So, for example, a 5MB JPG that is 9000x3000 pixels would be expanded into 9000x3000x3 bytes (assuming 24bit color, no alpha), or 81MB of memory consumed. Correct?
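The arithmetic behind (1) can be sketched as follows. Note the 24-bit/no-alpha figure is the question's own assumption; GDI+ also pads each pixel row (the "stride") to a 4-byte boundary, so the real footprint can be slightly larger than the naive width × height × bytes-per-pixel:

```csharp
using System;

// Rough uncompressed size of a 9000x3000 image at 24 bits per pixel.
int width = 9000, height = 3000, bytesPerPixel = 3;

long naive = (long)width * height * bytesPerPixel;    // 81,000,000 bytes (~81 MB)

// GDI+ pads each row to a 4-byte boundary; here 9000*3 = 27,000 is
// already a multiple of 4, so padding adds nothing in this example.
long stride = ((width * bytesPerPixel + 3) / 4) * 4;
long padded = stride * height;

Console.WriteLine($"naive: {naive}, with stride padding: {padded}");
```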

(1a) There's some evidence (see 2b below) that it ALSO stores the original compressed format... so, actually 86MB in this case. But that's unclear... does anyone know?

(2) When you create a Metafile object and then draw a raster file (JPG, PNG, GIF, etc) into it, it only consumes memory for the compressed file. So, if you draw a 5MB JPG that is 9000x3000 pixels into a Metafile, it will only consume roughly 5MB of memory. Correct?

(2a) To draw a raster file into a Metafile object, the only way seems to be to load a Bitmap with the file and then draw the Bitmap into the Metafile. Is there a better way that doesn't involve temporarily loading that huge Bitmap data (and causing the associated memory fragmentation)?
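For what it's worth, the usual pattern for (2a) does go through a temporary Bitmap. A minimal sketch, assuming a Windows/GDI+ environment and a hypothetical file name "large.jpg" (a reference device context is required to construct the Metafile):

```csharp
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;

// Sketch: draw a raster file into a new Metafile via a temporary Bitmap.
using (var stream = new MemoryStream())
using (var refGraphics = Graphics.FromHwnd(IntPtr.Zero))
{
    IntPtr hdc = refGraphics.GetHdc();
    try
    {
        using (var metafile = new Metafile(stream, hdc, EmfType.EmfPlusOnly))
        using (var g = Graphics.FromImage(metafile))
        using (var bitmap = new Bitmap("large.jpg"))  // temporarily expands to full pixels
        {
            g.DrawImage(bitmap, 0, 0);
        }   // bitmap disposed here, releasing the large native block promptly
    }
    finally
    {
        refGraphics.ReleaseHdc(hdc);
    }
}
```

Disposing the Bitmap as soon as the draw completes shortens how long the expanded pixel data is held, but it does not avoid the temporary allocation itself, which is exactly the concern raised in (2a).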

(2b) When you draw a Bitmap into a Metafile, it uses a compressed format of size similar to the original compressed file. Does it do that by storing the original compressed file in the Bitmap? Or does it do it by re-compressing the expanded Bitmap using the original compression settings?

(3) We originally assumed that large (>85KB) Image objects would be placed in the Large Object Heap. In fact, that seems to NOT be the case. Rather, each Bitmap and each Metafile is a 24-byte object in the Small Object Heap that refers to a block of Native Memory that contains the real data. Correct?

(3a) We assume such Native Memory is like Large Object Heap in that it cannot be compacted... once the big object is laid into Native Memory, it will never be moved, and thus fragmentation of Native Memory can cause as many problems as fragmentation of Large Object Heap. True? Or is there special handling of the underlying Bitmap / Metafile data that is more efficient?

(3b) So, there seems to be four independent blocks of memory that are managed separately, and running out of each can result in the same Out of Memory exceptions: Small Object Heap (managed objects < 85KB, compacted by the GC), Large Object Heap (managed objects > 85KB that are collected by GC, but not compacted), Native Memory (unmanaged objects, presumably not compacted), and Desktop Heap (where windows handles and such limited resources are managed). Have I documented those four properly? Are there others we should be aware of?

Any clarity that anybody can provide on the above would be greatly appreciated. If there is a good book or article that fully explains the above, please let me know. (I am happy to do the required reading; but the vast majority of books don't get that deep, and thus don't tell me anything I don't already know.)

Thanks!

asked Mar 20 '13 by Brian Kennedy


2 Answers

There are two ways to store image data: as pixels, or as vectors. Bitmap is about pixels, Metafile is about pixels and vectors. Vector data is a lot more efficient to store.

To allow manipulation of bitmaps, their data must be stored uncompressed in memory. Otherwise GetPixel and SetPixel would have to decompress, modify, and recompress the bitmap on every change (if that were even possible to begin with).

Metafiles were created by Microsoft to work with GDI, so they may incorporate more memory-efficient compression schemes that work directly with the graphics card. Also, there are no GetPixel/SetPixel methods for metafiles, so a metafile does not have to be held uncompressed in memory to allow manipulation.


You should not have to care about the memory pools that the runtime uses. There are many more than four, and the runtime decides where it puts objects. Also, you should not worry about out-of-memory exceptions that may arise from using (large) objects. The runtime will do all it can (placing objects in gaps between other objects, compacting heaps, expanding available virtual memory) to ensure you don't get an out-of-memory exception. If you do get such an exception, there is probably another problem in your code that should be fixed (such as a memory leak).

An overview of the memory heaps, maps and tables: (source)

[image: Heaps, maps and tables used in .NET]


Also, your assumption that objects over 85 KiB are placed on the Large Object Heap is not entirely correct. It is correct for most objects in the current version of the CLR, but, for example, an array of 1000 doubles (only about 8 KB, well under the threshold) is also allocated on the Large Object Heap. Just let the runtime concern itself with this.
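The doubles case can be observed directly. On the desktop CLR, arrays of 1000 or more doubles go to the LOH, and GC.GetGeneration reports LOH objects as generation 2 even when freshly allocated; this is an implementation detail and could differ on other runtimes:

```csharp
using System;

// Freshly allocated objects normally start in generation 0, but LOH
// allocations are reported as generation 2 immediately.
var small = new double[999];    // ~7,992 bytes: Small Object Heap
var large = new double[1000];   // ~8,000 bytes: Large Object Heap anyway

Console.WriteLine(GC.GetGeneration(small));  // typically 0
Console.WriteLine(GC.GetGeneration(large));  // typically 2
```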

answered Oct 15 '22 by Daniel A.A. Pelsmaeker


I know the answers to some of these:

(1) Yes, that is the definition of a Bitmap image.

(3) Yes, that is why Bitmap implements the IDisposable interface.

(3a) That seems surprising. Are you running the Dispose() method on your Bitmap objects when you are done with them?

(3b) At least those four, yes.
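On the (3a) point about Dispose(): because each Bitmap is only a tiny managed wrapper around a large native block, the garbage collector feels little pressure to finalize them, and native memory can pile up. The standard remedy is deterministic disposal; a minimal sketch, with "photo.jpg" as a hypothetical file name:

```csharp
using System.Drawing;

// Dispose releases the native GDI+ pixel data immediately instead of
// waiting for the finalizer to run at some later GC.
using (var bitmap = new Bitmap("photo.jpg"))
{
    // ... use the bitmap ...
}   // native pixel data freed here, not at an unpredictable later time
```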

answered Oct 15 '22 by Pieter Geerkens