
Faster to malloc multiple small times or few large times?

When using malloc to allocate memory, is it generally quicker to do many mallocs of smaller chunks of data, or fewer mallocs of larger chunks?

For example, say you are working with an image file that has black pixels and white pixels. You iterate through the pixels and want to save the x and y position of each black pixel in a new structure that also has pointers to the previous and next black pixels' x and y values. Would it generally be faster to allocate a new structure for each black pixel as you go, or to iterate through once to get a count of the black pixels, allocate one large chunk of memory for an array of structures holding just the x and y values (no pointers), and then iterate through again, saving the positions into that array?

I'm assuming the answer may differ from platform to platform, but what does everyone think would generally be faster?
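For concreteness, here is a minimal sketch in C of the two approaches described above. It assumes the image is a plain width-by-height byte buffer where 0 means black; the struct and function names are illustrative, not from any particular codebase.

    #include <stdlib.h>

    /* Approach 1: one malloc per black pixel, linked together. */
    struct pixel_node {
        int x, y;
        struct pixel_node *prev, *next;
    };

    /* Approach 2: just the coordinates, stored in one contiguous array. */
    struct pixel_pos {
        int x, y;
    };

    /* Many small allocations: one node per black pixel. */
    struct pixel_node *collect_linked(const unsigned char *img, int w, int h)
    {
        struct pixel_node *head = NULL;
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                if (img[y * w + x] == 0) {              /* 0 == black */
                    struct pixel_node *n = malloc(sizeof *n);
                    if (!n) return head;                /* simplistic error handling */
                    n->x = x;
                    n->y = y;
                    n->prev = NULL;
                    n->next = head;
                    if (head) head->prev = n;
                    head = n;
                }
            }
        }
        return head;
    }

    /* Two passes, one large allocation: count, allocate, then fill. */
    struct pixel_pos *collect_array(const unsigned char *img, int w, int h,
                                    size_t *count)
    {
        size_t n = 0;
        for (int i = 0; i < w * h; i++)
            if (img[i] == 0) n++;

        struct pixel_pos *a = malloc(n * sizeof *a);
        if (!a) { *count = 0; return NULL; }

        size_t j = 0;
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                if (img[y * w + x] == 0) { a[j].x = x; a[j].y = y; j++; }

        *count = n;
        return a;
    }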

asked Jul 07 '09 by user21293

4 Answers

It depends:

  • Multiple small allocations mean more calls into the allocator, which is slower.
  • On the other hand, the allocator may have a special, fast path for small allocations.

If I cared, I'd measure it! If I really cared a lot, and couldn't guess, then I might implement both, and measure at run-time on the target machine, and adapt accordingly.

In general I'd assume that fewer is better: but there are allocation sizes and run-time library implementations such that a (sufficiently) large allocation will be delegated to the (relatively slow) O/S, whereas a (sufficiently) small allocation will be served from a (relatively quick) already-allocated heap.
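As a rough illustration of how one might measure it, here is a small timing sketch; the allocation size and count are arbitrary, and the numbers will depend heavily on the allocator and the platform.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N 1000000   /* arbitrary number of small allocations to compare */

    int main(void)
    {
        void **small = malloc(N * sizeof *small);
        if (!small) return 1;

        clock_t t0 = clock();
        for (size_t i = 0; i < N; i++)
            small[i] = malloc(16);              /* many small allocations */
        clock_t t1 = clock();

        void *big = malloc((size_t)N * 16);     /* one large allocation */
        clock_t t2 = clock();

        printf("many small: %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
        printf("one large:  %.3f s\n", (double)(t2 - t1) / CLOCKS_PER_SEC);

        for (size_t i = 0; i < N; i++)
            free(small[i]);
        free(small);
        free(big);
        return 0;
    }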

answered by ChrisW

Allocating large blocks is more efficient; additionally, since you are using larger contiguous blocks, you have greater locality of reference, and traversing your in-memory structure once you've generated it should also be more efficient! Further, allocating large blocks should help to reduce memory fragmentation.
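To illustrate the locality point, here is a sketch contrasting traversal of the two layouts (the struct definitions repeat the illustrative ones from the question); the contiguous array is walked sequentially, while the list walk chases pointers that may be scattered across the heap.

    #include <stddef.h>

    struct pixel_node { int x, y; struct pixel_node *prev, *next; };
    struct pixel_pos  { int x, y; };

    /* Pointer-chasing traversal: each hop may land on a different cache line. */
    long sum_linked(const struct pixel_node *n)
    {
        long s = 0;
        for (; n != NULL; n = n->next)
            s += n->x + n->y;
        return s;
    }

    /* Sequential traversal of one contiguous block: cache- and prefetch-friendly. */
    long sum_array(const struct pixel_pos *a, size_t count)
    {
        long s = 0;
        for (size_t i = 0; i < count; i++)
            s += a[i].x + a[i].y;
        return s;
    }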

answered by Nick Lewis

Generally speaking, allocating larger chunks of memory fewer times will be faster. There's overhead involved each time a call to malloc() is made.
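One common way to keep that per-call overhead out of a hot loop is to grab one large block up front and carve it up yourself. The toy bump-allocator sketch below is only an illustration of that idea, not a production allocator, and the names are made up for this example.

    #include <stdlib.h>

    struct arena {
        char  *base;
        size_t used, cap;
    };

    /* One large malloc up front. */
    int arena_init(struct arena *a, size_t cap)
    {
        a->base = malloc(cap);
        a->used = 0;
        a->cap  = cap;
        return a->base != NULL;
    }

    /* Hand out pieces of the block: no malloc() call, no per-piece header. */
    void *arena_alloc(struct arena *a, size_t n)
    {
        n = (n + 15) & ~(size_t)15;     /* round up for alignment */
        if (a->used + n > a->cap)
            return NULL;                /* out of space; a real arena would grow */
        void *p = a->base + a->used;
        a->used += n;
        return p;
    }

    /* One free() releases everything that was carved out of the block. */
    void arena_free_all(struct arena *a)
    {
        free(a->base);
        a->base = NULL;
        a->used = a->cap = 0;
    }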

answered by Jeff L

Besides the speed issues, there is also the memory fragmentation problem.

answered by Nick Dandoulakis