I have just received a comment like this:
"The problem is the manual memory management. delete has no place in user code, and as of C++14, neither has new."
Can someone please explain to me why?
In manual memory allocation, deallocation is also specified manually by the programmer, via functions such as free() in C or the delete operator in C++. This contrasts with the automatic destruction of objects held in automatic variables, notably (non-static) local variables of functions, which are destroyed at the end of their scope.
The C programming language provides several functions for memory allocation and management. These functions can be found in the <stdlib.h> header file.
When a variable is assigned a memory location in one program, that location cannot be used by another variable or another program. For this reason, the C language provides techniques for allocating memory to different variables and programs at run time.
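For concreteness, here is a minimal sketch of that C-style manual workflow, called from C++ via <cstdlib> (the size and values are arbitrary):

```cpp
#include <cstdlib>   // std::malloc and std::free, the C facilities from <stdlib.h>

int main() {
    // Manually request room for 10 ints; the programmer now owns this block.
    int* data = static_cast<int*>(std::malloc(10 * sizeof(int)));
    if (data == nullptr) return 1;   // allocation can fail

    data[0] = 42;

    std::free(data);   // the matching manual release; forgetting it leaks the block
}
```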
With essentially zero optimization effort, the managed versions started off many times faster than the manual one. Eventually the manual version beat the managed one, but only by optimizing to a level that most programmers would not want to go to. In all versions, the memory usage of the manual version was significantly better than that of the managed one.
Caveat: I stand by this answer since I think it presents a best practice which will improve ~95% of C++ code – probably even more. That said, please read the full comments for a discussion of some important caveats.
Since it was my comment, here’s my presentation explaining this.
In a nutshell:
[Raw] pointers must. not. own. resources.
It’s error-prone and unnecessary because we have better ways of managing resources, which result in fewer errors, shorter and more readable code, and higher confidence in the correctness of the code. In economic terms: they cost less.
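To illustrate the principle, a small sketch (Widget and inspect are invented names): the raw pointer below merely observes a resource whose ownership lives in a smart pointer, so no manual delete appears anywhere:

```cpp
#include <iostream>
#include <memory>

struct Widget { int id = 0; };

// Non-owning raw pointer: it only observes, it never deletes.
void inspect(const Widget* w) {
    if (w) std::cout << w->id << '\n';
}

int main() {
    auto owner = std::make_unique<Widget>();   // ownership lives here
    owner->id = 7;
    inspect(owner.get());                      // raw pointer used purely as a view
}                                              // Widget released automatically
```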
To be more specific with regard to the comment I made:
As of C++11 (out now for two years and implemented, in the relevant parts, by all modern compilers), manually deleting memory is completely unnecessary (unless you write very low-level memory handling code) because you can always use smart pointers instead, and usually don’t even need them (see the presentation). However, C++11 still requires you to use new when instantiating a std::unique_ptr. In C++14, the function std::make_unique makes this usage of new unnecessary. Consequently, new is not needed any more either.
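A short sketch of the progression described above (Widget is a placeholder type of my own):

```cpp
#include <memory>

struct Widget { int id = 0; };

int main() {
    // Manual management: every new needs a matching delete on every path.
    Widget* a = new Widget;
    delete a;

    // C++11: ownership in a smart pointer, but new still appears at the call site.
    std::unique_ptr<Widget> b(new Widget);

    // C++14: std::make_unique removes this last ordinary use of new as well.
    auto c = std::make_unique<Widget>();
}   // b and c release their Widgets automatically
```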
There is still arguably a place for placement-new in code, but this is (a) an entirely different case from normal new, even though the syntax is similar, and (b) can be replaced in most cases by using the allocator::construct function.
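Here is a rough sketch of both variants (Widget is again a placeholder type). Note that the allocator::construct/destroy members shown here are the C++11/14 interface; they were later removed in C++20, where std::allocator_traits is used instead:

```cpp
#include <memory>
#include <new>

struct Widget {
    int id;
    explicit Widget(int i) : id(i) {}
};

int main() {
    // Placement-new: construct an object in storage that is already allocated.
    alignas(Widget) unsigned char buffer[sizeof(Widget)];
    Widget* w = new (buffer) Widget(1);
    w->~Widget();                 // matching manual destruction; no delete

    // Much the same effect via an allocator, as suggested above.
    std::allocator<Widget> alloc;
    Widget* p = alloc.allocate(1);
    alloc.construct(p, 2);        // allocator::construct instead of placement-new
    alloc.destroy(p);             // likewise allocator::destroy instead of ~Widget()
    alloc.deallocate(p, 1);
}
```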
James has pointed out an exception to this rule which I had honestly forgotten about: when an object manages its own life-time. I’ll go out on a limb and say that this is not a common idiom in most scenarios, because object life-time can always be managed externally. However, in certain applications it may be beneficial to decouple the object from the rest of the code and let it manage itself. In that case, you need to dynamically allocate the object and deallocate it using delete this.
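A minimal sketch of such a self-managing object (the Connection class and its interface are invented for illustration):

```cpp
// Sketch: an object that owns its own life-time via delete this.
class Connection {
public:
    static Connection* open() { return new Connection; }  // heap allocation is mandatory
    void close() { delete this; }                         // the object ends its own life-time
private:
    Connection() = default;
    ~Connection() = default;   // private destructor: no stack instances, no external delete
};

int main() {
    Connection* c = Connection::open();
    c->close();   // c is now dangling; it must not be used again
}
```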
Smart pointers, and in turn std::make_shared and std::make_unique, should be used instead, because dealing with new/delete is more prone to errors when applications throw exceptions.
Smart pointers automatically release their resources (utilising RAII) even when exceptions are thrown, unlike new/delete, which can leak memory.
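A sketch of that difference under an exception (Resource and may_throw are placeholders of my own):

```cpp
#include <memory>
#include <stdexcept>

struct Resource { };

void may_throw() { throw std::runtime_error("boom"); }

// Leaks: if may_throw() throws, the delete below is never reached.
void leaky() {
    Resource* r = new Resource;
    may_throw();
    delete r;
}

// Safe: the unique_ptr's destructor runs during stack unwinding (RAII).
void safe() {
    auto r = std::make_unique<Resource>();
    may_throw();
}   // the Resource is freed here even though an exception propagates

int main() {
    try { safe(); } catch (const std::exception&) { /* Resource was still freed */ }
    // calling leaky() here would leak one Resource
}
```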
See this and this for more info