I'm having an issue with (specifically the MSFT VS 10.0 implementation of) std::unique_ptrs. When I create a std::list of them, I use twice as much memory as when I create a std::list of just the underlying object (note: this is a big object -- ~200 bytes, so it's not just an extra reference counter lying around).
In other words, if I run:
std::list<MyObj> X;
X.resize( 1000, MyObj());
my application will require half as much memory as when I run:
std::list<std::unique_ptr<MyObj>> X;
for ( int i=0; i<1000; i++ ) X.push_back(std::unique_ptr<MyObj>(new MyObj()));
I've checked out the MSFT implementation and I don't see anything obvious -- has anyone encountered this and have any ideas?
EDIT: Ok, to be a bit more clear/specific. This is clearly a Windows memory usage issue and I am obviously missing something. I have now tried the following:
std::list of 100000 MyObj
std::list of 100000 MyObj*
std::list of 100000 int*
std::list of 50000 int*
In each case, each additional member of the list, whether a pointer or otherwise, is bloating my application by 4400(!) bytes. This is in a release, 64-bit build, without any debugging information included (Linker > Debugging > Generate Debug Info set to No).
I obviously need to research this a bit more to narrow it down to a smaller test case.
For those interested, I am determining application size using Process Explorer.
Turns out it was entirely heap fragmentation. How ridiculous -- 4400 bytes per 8-byte object! I switched to pre-allocating and the problem went away entirely. I'm used to some inefficiency in relying on per-object allocation, but this was just absurd.
MyObj implementation below:
class MyObj
{
public:
MyObj() { memset(this, 0, sizeof(MyObj)); } // zero all members; safe only because MyObj is trivially copyable
double m_1;
double m_2;
double m_3;
double m_4;
double m_5;
double m_6;
double m_7;
double m_8;
double m_9;
double m_10;
double m_11;
double m_12;
double m_13;
double m_14;
double m_15;
double m_16;
double m_17;
double m_18;
double m_19;
double m_20;
double m_21;
double m_22;
double m_23;
CUnit* m_UnitPtr;
CUnitPos* m_UnitPosPtr;
};
The added memory is likely from heap inefficiency: you pay extra for each block you allocate due to internal fragmentation and allocator metadata. You're performing twice as many allocations, so you incur that overhead twice per element.
For instance, this:
for(int i = 0; i < 100; ++i) {
new int;
}
will use more memory than this:
new int[100];
Even though the amount allocated is the same.
Edit:
I'm seeing around 13% more memory used with unique_ptr, using GCC on Linux.