Hi community,
I have a small question about deleting pointers.
I am working with pointer-to-pointer matrices of dimension 1024x1024. Since I create them dynamically, I delete the allocated space at the end of the program. But doing this in the usual loop costs quite a lot of time: I measured about 2 seconds using the processor's clock. And 2 seconds is HUGE when the whole program runs for only 15 seconds - plus: the function using these allocated pointers is called more than once...
Here is the measured time-critical piece of code including the measurement:
time = clock();
for (i = 0; i < xSize; i++) {  // xSize is dynamic, but 1024 for the measurement
    delete [] inDaten[i];
    delete [] inDaten2[i];
    delete [] copy[i];
}
delete [] inDaten;   // the outer pointer tables were allocated with new[],
delete [] inDaten2;  // so they need delete [] too, not plain delete
delete [] copy;
time = clock() - time;
time /= CLOCKS_PER_SEC;
Is deleting pointers ALWAYS this slow? Or am I just doing things the wrong way?
I hope someone here can help me out with this. Since I am optimizing a fairly complex program to run faster, I can't afford those 2 seconds; they are just way TOO slow compared to all the other parts. But I still need the allocation to stay dynamic. Smart pointers could be helpful, but if I understand correctly, they also need time to delete themselves - just at a different point in time...
Thanks for your answers!
Baradrist
EDIT: I just found out that measuring these delete operations was so slow because I didn't compile in release mode. Since the debug runtime comes into play, I measured these (ultimately unreal) numbers that gave me a headache. The release build is optimized enough that there is nearly no time spent in the deletion any more.
Anyway: thanks for all the helpful answers! They gave me a lot of extra knowledge and things to think about!
delete []
will also call the destructor of each element of the array, which adds time unless the destructor is trivial.
Other than that - yes, dynamic memory allocation is relatively costly. If you can't tolerate it, try to allocate fewer, larger blocks, or avoid dynamic allocation in time-critical code altogether.
Smart pointers won't help much - they perform the same deallocation internally. They are for design convenience, not for speed.
Here is an interesting thread "Memory Allocation/Deallocation Bottleneck?"
Allocation and deallocation take a long time and are therefore among the most costly operations in a program. This is because the heap manager has to take care of a number of things. In debug mode there are usually additional checks on the memory blocks. I would be surprised if you saw the same time in a release configuration; usually there is a factor of at least 2 between them. With a private heap you can speed things up dramatically. If you always allocate objects of the same size, a memory pool could be the best alternative.