 

Is it better to allocate memory in powers of two?

When we use malloc() to allocate memory, should we request a size that is a power of two, or just the exact size we need?
Like this:

// char *ptr = malloc(200);
char *ptr = malloc(256);    // instead of 200 we use 256

If it is better to give size which is in the power of two, what is the reason for that? Why is it better?

Thanks

Edit

The reason for my confusion is the following quote from Joel's blog post Back to Basics:

Smart programmers minimize the potential distruption of malloc by always allocating blocks of memory that are powers of 2 in size. You know, 4 bytes, 8 bytes, 16 bytes, 18446744073709551616 bytes, etc. For reasons that should be intuitive to anyone who plays with Lego, this minimizes the amount of weird fragmentation that goes on in the free chain. Although it may seem like this wastes space, it is also easy to see how it never wastes more than 50% of the space. So your program uses no more than twice as much memory as it needs to, which is not that big a deal.

Sorry, I should have posted the above quote earlier. My apologies!

Most replies so far say that allocating memory in powers of two is a bad idea. In which scenario, then, is it better to follow Joel's point about malloc()? Why did he say that? Is the quoted suggestion obsolete now?

Kindly explain it.
Thanks

asked Jul 06 '10 by Andrew-Dufresne




2 Answers

Just give the exact size you need. The only reason that a power-of-two size might be "better" is to allow quicker allocation and/or to avoid memory fragmentation.

However, any non-trivial malloc implementation that concerns itself with being efficient will internally round allocations up in this way if and when it is appropriate to do so. You don't need to concern yourself with "helping" malloc; malloc can do just fine on its own.
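As a rough illustration of that rounding (this uses malloc_usable_size(), a glibc extension, not standard C), you can ask the allocator how much it actually handed back for a 200-byte request:

#include <stdio.h>
#include <stdlib.h>
#include <malloc.h>   /* malloc_usable_size() - glibc extension */

int main(void)
{
    char *ptr = malloc(200);          /* ask for exactly what we need */
    if (ptr == NULL)
        return 1;

    /* glibc typically reports a slightly larger usable size, because the
       allocator already rounds requests up to its internal chunk sizes. */
    printf("requested 200, usable %zu bytes\n", malloc_usable_size(ptr));

    free(ptr);
    return 0;
}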

Edit:

In response to your quote of the Joel on Software article, Joel's point in that section (which is hard to correctly discern without the context that follows the paragraph that you quoted) is that if you are expecting to frequently re-allocate a buffer, it's better to do so multiplicatively, rather than additively. This is, in fact, exactly what the std::string and std::vector classes in C++ (among others) do.

The reason that this is an improvement is not because you are helping out malloc by providing convenient numbers, but because memory allocation is an expensive operation, and you are trying to minimize the number of times you do it. Joel is presenting a concrete example of the idea of a time-space tradeoff. He's arguing that, in many cases where the amount of memory needed changes dynamically, it's better to waste some space (by allocating up to twice as much as you need at each expansion) in order to save the time that would be required to repeatedly tack on exactly n bytes of memory, every time you need n more bytes.

The multiplier doesn't have to be two: you could allocate up to three times as much space as you need and end up with allocations in powers of three, or allocate up to fifty-seven times as much space as you need and end up with allocations in powers of fifty-seven. The more over-allocation you do, the less frequently you will need to re-allocate, but the more memory you will waste. Allocating in powers of two, which uses at most twice as much memory as needed, just happens to be a good starting-point tradeoff until and unless you have a better idea of exactly what your needs are.
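A minimal sketch of that idea in C (my own illustration, not code from the answer; the struct and function names are made up): a growable buffer that doubles its capacity, so that appending n bytes one at a time triggers only about log2(n) calls to realloc:

#include <stdlib.h>
#include <string.h>

struct buffer {
    char  *data;   /* heap block, grows multiplicatively */
    size_t len;    /* bytes currently in use             */
    size_t cap;    /* bytes currently allocated          */
};

int buffer_append(struct buffer *b, const char *src, size_t n)
{
    if (b->len + n > b->cap) {
        size_t new_cap = b->cap ? b->cap : 16;
        while (new_cap < b->len + n)
            new_cap *= 2;                 /* multiplicative, not additive */
        char *p = realloc(b->data, new_cap);
        if (p == NULL)
            return -1;                    /* old buffer is still valid */
        b->data = p;
        b->cap  = new_cap;
    }
    memcpy(b->data + b->len, src, n);
    b->len += n;
    return 0;
}

Starting from a zero-initialized struct buffer, appending one byte at a time grows the capacity 16, 32, 64, ... instead of calling realloc on every append.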

He does mention in passing that this helps reduce "fragmentation in the free chain", but the reason for that has more to do with the number and uniformity of the allocations than with their exact sizes. For one thing, the more times you allocate and deallocate memory, the more likely you are to fragment the heap, no matter what sizes you allocate. Secondly, if you have multiple buffers that you are dynamically resizing using the same multiplicative resizing algorithm, it's likely that if one resizes from 32 to 64 and another resizes from 16 to 32, the second's reallocation can fit right where the first one used to be. This wouldn't be the case if one resized from 25 to 60 and the other from 16 to 26.

And again, none of what he's talking about applies if you're going to be doing the allocation step only once.

answered Oct 08 '22 by Tyler McHenry


Just to play devil's advocate, here's how Qt does it:

Let's assume that we append 15000 characters to the QString string. Then the following 18 reallocations (out of a possible 15000) occur when QString runs out of space: 4, 8, 12, 16, 20, 52, 116, 244, 500, 1012, 2036, 4084, 6132, 8180, 10228, 12276, 14324, 16372. At the end, the QString has 16372 Unicode characters allocated, 15000 of which are occupied.

The values above may seem a bit strange, but here are the guiding principles:

- QString allocates 4 characters at a time until it reaches size 20.
- From 20 to 4084, it advances by doubling the size each time. More precisely, it advances to the next power of two, minus 12. (Some memory allocators perform worst when requested exact powers of two, because they use a few bytes per block for book-keeping.)
- From 4084 on, it advances by blocks of 2048 characters (4096 bytes). This makes sense because modern operating systems don't copy the entire data when reallocating a buffer; the physical memory pages are simply reordered, and only the data on the first and last pages actually needs to be copied.
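Here is a small sketch of that growth rule as I read the quoted description (not Qt's actual source; next_capacity() is a made-up helper):

#include <stddef.h>

/* Reproduces the described policy: grow by 4 up to 20 characters, then to
   the next power of two minus 12 up to 4084, then in 2048-character steps. */
size_t next_capacity(size_t current)
{
    if (current < 20)
        return current + 4;              /* 4, 8, 12, 16, 20        */
    if (current < 4084) {
        size_t p = 64;
        while (p - 12 <= current)
            p *= 2;
        return p - 12;                   /* 52, 116, 244, ..., 4084 */
    }
    return current + 2048;               /* 6132, 8180, 10228, ...  */
}

Iterating next_capacity() starting from 4 reproduces the sequence quoted above: 4, 8, 12, 16, 20, 52, 116, ..., 16372.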

I like the way they anticipate operating system features in code that is meant to perform well from smartphones to server farms. Given that they're smarter people than me, I'd assume that said feature is available in all modern OSes.

answered Oct 08 '22 by György Andrasek