I have always been convinced that it is not good practice to allocate large blocks of contiguous memory. It seems clear that you are likely to run into trouble once memory fragmentation comes into play, and fragmentation can rarely be ruled out entirely (especially in large projects designed as long-running services or the like).
Recently I came across the ITK image processing library and realized that it (virtually) always allocates image data (even 3D volumes, which can be huge) as one contiguous block. I was told that this should not be a problem, at least for 64-bit processes. However, I don't see a systematic difference between 64-bit and 32-bit processes beyond the fact that memory problems might show up later because of the larger virtual address space.
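To make the "one contiguous block" approach concrete, here is a minimal sketch (not ITK code; the `Volume3D` type, its sizes, and the indexing scheme are my own illustration) of a 3D volume backed by a single flat allocation:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical illustration of the "one contiguous block" approach:
// the whole 3D volume lives in a single flat buffer, addressed by
// index arithmetic rather than nested pointers.
struct Volume3D {
    std::size_t nx, ny, nz;
    std::vector<float> data;                       // one contiguous allocation

    Volume3D(std::size_t x, std::size_t y, std::size_t z)
        : nx(x), ny(y), nz(z), data(x * y * z) {}  // may throw std::bad_alloc

    float& at(std::size_t x, std::size_t y, std::size_t z) {
        return data[(z * ny + y) * nx + x];        // row-major flattening
    }
};

int main() {
    Volume3D vol(512, 512, 512);   // 512^3 floats = 512 MiB in one block
    vol.at(10, 20, 30) = 1.0f;
}
```

On a 32-bit process a single 512 MiB request like this can already fail if the virtual address space is fragmented, even when enough free memory exists in total; on 64-bit the address space is so much larger that such a failure is far less likely in practice.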
To come to the point: what is good practice when dealing with large amounts of data? Simply allocate it as one big block, or split it up into smaller pieces for allocation?
Since the question is of course rather system-specific, I would like to restrict it to native (unmanaged, no CLR) C++, especially under Windows. However, I would also be interested in more general comments, if possible.
The question almost seems nonsensical... let me rephrase it to illustrate:
If you need a large block of memory and are worried about fragmentation, should you just fragment it yourself?
You don't gain anything by fragmenting it yourself rather than letting the system memory manager fragment it for you. The system is extremely good at this, and you are not likely to do it better.
That being said, if, all other things being equal, you can do the same task with the data broken into sensible pieces, it may be worth profiling to see whether you gain anything. But in general you won't gain anything meaningful -- you won't be able to outperform the OS.
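If you do want to profile the split-up alternative, a sketch like the following (my own illustration; `ChunkedBuffer` and the 4 MiB chunk size are arbitrary assumptions, not a recommended design) shows what "sensible pieces" might look like. It holds the same payload as the flat buffer above, but no single allocation exceeds the chunk size, at the cost of an extra indirection on every access:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical "split into pieces" variant for comparison against the
// single flat buffer: the payload is held in fixed-size chunks, so no
// individual allocation exceeds kChunkBytes.
class ChunkedBuffer {
public:
    explicit ChunkedBuffer(std::size_t totalElements) {
        const std::size_t full = totalElements / kChunkElems;
        const std::size_t rest = totalElements % kChunkElems;
        chunks_.reserve(full + (rest ? 1 : 0));
        for (std::size_t i = 0; i < full; ++i)
            chunks_.emplace_back(kChunkElems);
        if (rest)
            chunks_.emplace_back(rest);
    }

    float& operator[](std::size_t i) {
        // Two indirections instead of one: this is the cost to profile.
        return chunks_[i / kChunkElems][i % kChunkElems];
    }

private:
    static constexpr std::size_t kChunkBytes = 4u * 1024u * 1024u;    // 4 MiB per chunk (arbitrary)
    static constexpr std::size_t kChunkElems = kChunkBytes / sizeof(float);
    std::vector<std::vector<float>> chunks_;
};

int main() {
    ChunkedBuffer buf(512u * 512u * 512u);   // same 512 MiB payload, split into ~128 pieces
    buf[12345] = 1.0f;
}
```

Measure both variants with your real access patterns; the chunked version only pays off if the contiguous allocation actually fails or if the extra indirection is hidden by your workload.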