 

Thinking of memory fragmentation while you code: Premature Optimization or not?

I'm working on a large server application written in C++. This server needs to run for months, possibly without restarting. Fragmentation is already a suspected issue here, since our memory consumption increases over time. So far the measurement has been to compare private bytes with virtual bytes and analyze the difference between those two numbers.

My general approach to fragmentation is to leave it to analysis, the same way I think about general performance and memory optimizations: you have to back up the changes with analysis and proof.

I'm noticing a lot during code reviews and discussions that memory fragmentation is one of the first things that comes up. It's almost like there's a huge fear of it now, and there's a big initiative to "prevent fragmentation" ahead of time. Code changes are requested that are supposed to reduce or prevent memory fragmentation problems. I tend to disagree with these right off the bat since they seem like premature optimization to me. I would be sacrificing code cleanliness/readability/maintainability/etc. in order to satisfy these changes.

For example, take the following code:

std::stringstream s;
s << "This" << "Is" << "a" << "string";

The number of allocations the stringstream makes above is unspecified; it could be 4 allocations or just 1. So we can't optimize based on that alone, but the general consensus is to either use a fixed buffer or somehow modify the code to potentially use fewer allocations. I don't really see the stringstream expanding itself here as a huge contributor to memory problems, but maybe I'm wrong.

General improvement suggestions to code above are along the lines of:

std::stringstream s;
s << "This is a string"; // Combine it all to 1 line, supposedly less allocations?

There is also a huge push to use the stack over the heap wherever possible.
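
For illustration, the kind of fixed-buffer, stack-based change that typically gets requested looks roughly like this (a sketch only; the function name and the 64-byte size are made up for the example):

#include <cstdio>

void buildMessage()
{
    char buffer[64]; // fixed-size buffer on the stack, no heap allocation at all
    std::snprintf(buffer, sizeof(buffer), "%s%s%s%s", "This", "Is", "a", "string");
    // ... use buffer ...
}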

Is it possible to be preemptive about memory fragmentation in this way, or is this simply a false sense of security?

asked May 17 '12 by void.pointer


3 Answers

It's not premature optimization if you know in advance that you need to be low-fragmentation, you have measured in advance that fragmentation is an actual problem for you, and you know in advance which segments of your code are relevant. Performance is a requirement, but blind optimization is bad in any situation.

However, the superior approach is to use a custom allocator, like an object pool or memory arena, which guarantees no fragmentation. For example, in a physics engine, you can use a memory arena for all per-tick allocations and empty it at the end of the tick, which is not only ludicrously fast (even faster than _alloca on VS2010) but also extremely memory efficient and low on fragmentation.
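
A minimal sketch of what such a per-tick arena could look like (an illustration under simple assumptions, not anyone's production allocator; the names and capacity are made up):

#include <cstddef>

// Bump-pointer arena: each allocation just advances an offset, and reset()
// releases everything at once, so there is nothing left behind to fragment.
class Arena {
public:
    explicit Arena(std::size_t capacity)
        : buffer_(new char[capacity]), capacity_(capacity), used_(0) {}
    ~Arena() { delete[] buffer_; }

    void* allocate(std::size_t size, std::size_t align = alignof(std::max_align_t)) {
        std::size_t offset = (used_ + align - 1) & ~(align - 1); // round up to alignment
        if (offset + size > capacity_) return nullptr;           // out of space; caller decides what to do
        used_ = offset + size;
        return buffer_ + offset;
    }

    void reset() { used_ = 0; } // "empty it at the end" of the tick

private:
    char*       buffer_;
    std::size_t capacity_;
    std::size_t used_;
};

Note that reset() never runs destructors, so an arena like this only suits trivially destructible per-tick data; that restriction is part of why it can be so fast.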

answered by Puppy


It is absolutely reasonable to consider memory fragmentation at the algorithmic level. It is also reasonable to allocate small, fixed-sized objects on the stack to avoid the cost of an unnecessary heap allocation and free. However, I would definitely draw the line at anything that makes the code harder to debug, analyze, or maintain.
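
As a trivial sketch of the stack-over-heap point (made-up type and functions, just to show the shape of the change):

struct Vec3 { double x, y, z; }; // small, fixed-size value type

// Heap version: pays for an allocation and a free that buy nothing here.
double lengthSquaredHeap() {
    Vec3* v = new Vec3{1.0, 2.0, 3.0};
    double r = v->x * v->x + v->y * v->y + v->z * v->z;
    delete v;
    return r;
}

// Stack version: same logic, no allocator involved, nothing to fragment.
double lengthSquaredStack() {
    Vec3 v{1.0, 2.0, 3.0};
    return v.x * v.x + v.y * v.y + v.z * v.z;
}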

I would also be concerned that there are a lot of suggestions that are just plain wrong. Probably half of the things people typically say should be done "to avoid memory fragmentation" likely have no effect whatsoever, and a sizable fraction of the rest are likely harmful.

For most realistic, long-running server-type applications on typical modern computing hardware, fragmentation of user-space virtual memory just won't be an issue with simple, straightforward coding.

answered by David Schwartz


I think it is more of a best practice than a premature optimization. If you have a test suite, you can create a set of memory tests that run overnight, for example, and measure memory, performance, etc. You can read the reports and fix some errors if possible.
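
For example, a nightly memory test could record peak resident set size after running a long scenario (a rough sketch assuming Linux, where ru_maxrss is reported in KiB; the workload function is hypothetical):

#include <sys/resource.h> // getrusage
#include <cstdio>

long peakRssKiB() {
    rusage usage{};
    getrusage(RUSAGE_SELF, &usage);
    return usage.ru_maxrss; // peak resident set size, in KiB on Linux
}

int main() {
    // runLongScenario(); // hypothetical: the long-running workload under test
    std::printf("peak RSS: %ld KiB\n", peakRssKiB());
    return 0;
}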

The problem with small optimizations is that they change the code into something different while keeping the same business logic, like using a reverse for loop because it is supposedly faster than a regular one. Your unit tests can probably guide you to optimize some spots without side effects.

answered by Tiago Peczenyj