I've been thinking about possible uses of delete this in C++, and I've seen one.
Because you can say delete this only when an object is on the heap, I can make the destructor private and stop objects from being created on the stack altogether. In the end I can just delete the object on the heap by saying delete this in some public member function that acts as a destructor (rough sketch after my questions). My questions:
1) Why would I want to force the object to be made on the heap instead of on the stack?
2) Is there another use of delete this apart from this? (supposing that this is a legitimate use of it :) )
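Something like this is what I have in mind (just a rough sketch; the class and function names are made up):

#include <cstdio>

class HeapOnly {
public:
    HeapOnly() {}

    void do_something() { std::puts("working"); }

    // public member function that acts as the destructor
    void destroy() {
        delete this;  // only valid because instances can only live on the heap
    }

private:
    ~HeapOnly() {}  // private destructor: stack instances and plain 'delete' won't compile
};

int main() {
    // HeapOnly local;           // error: destructor is private
    HeapOnly* p = new HeapOnly;  // heap allocation is the only option
    p->do_something();
    p->destroy();                // the object deletes itself; p is now dangling
}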
Any scheme that uses delete this is somewhat dangerous, since whoever called the function that does that is left with a dangling pointer. (Of course, that's also the case when you delete an object normally, but in that case, it's clear that the object has been deleted). Nevertheless, there are somewhat legitimate cases for wanting an object to manage its own lifetime.
It could be used to implement a nasty, intrusive reference-counting scheme. You would have functions to "acquire" a reference to the object, preventing it from being deleted, and then "release" it once you've finished, deleting the object if no one else has acquired it, along the lines of:
#include <cstddef>  // std::size_t

class Nasty {
public:
    Nasty() : references(1) {}

    void acquire() {
        ++references;
    }

    void release() {
        if (--references == 0) {
            delete this;
        }
    }

private:
    ~Nasty() {}  // private: prevents stack instances and plain 'delete'

    std::size_t references;
};
// Usage
Nasty * nasty = new Nasty; // 1 reference
nasty->acquire(); // get a second reference
nasty->release(); // back to one
nasty->release(); // deleted
nasty->acquire(); // BOOM!
I would prefer to use std::shared_ptr for this purpose, since it's thread-safe, exception-safe, works for any type without needing any explicit support, and prevents access after deleting.
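For comparison, here is roughly what the non-intrusive version looks like (Plain is just a made-up class with an ordinary public destructor):

#include <memory>

class Plain {
public:
    void work() {}
    // ordinary public destructor; no acquire()/release() needed
};

int main() {
    std::shared_ptr<Plain> p = std::make_shared<Plain>();  // 1 reference
    {
        std::shared_ptr<Plain> q = p;  // 2 references
        q->work();
    }                                  // back to 1
    p.reset();                         // 0 references: deleted here
    // p is now empty; using it again is an obvious bug rather than a silent
    // access through a dangling raw pointer
}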
More usefully, it could be used in an event-driven system, where objects are created, and then manage themselves until they receive an event that tells them that they're no longer needed:
class Worker : EventReceiver {
public:
    Worker() {
        start_receiving_events(this);
    }

    virtual void on(WorkEvent) {
        do_work();
    }

    virtual void on(DeleteEvent) {
        stop_receiving_events(this);
        delete this;
    }

private:
    ~Worker() {}

    void do_work();
};
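Usage would look something like this (the surrounding event system is assumed, as above):

// Usage: the caller never deletes the worker; it deletes itself later.
new Worker;  // starts receiving events and manages its own lifetime from here on
// ... eventually the event system delivers a DeleteEvent and the Worker goes away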
1) Why would I want to force the object to be made on the heap instead of on the stack?
1) Because the object's lifetime is not logically tied to a scope (e.g., a function body). Either it must manage its own lifespan, or it is inherently a shared object (and thus its lifespan must be tied to those of its co-dependent objects). Others here have pointed out examples such as event handlers, task objects (in a scheduler), and objects in a complex object hierarchy in general.
2) Because you want to control the exact location where code is executed for the allocation / deallocation and construction / destruction. The typical use-case here is that of cross-module code (spread across executables and DLLs (or .so files)). Because of issues of binary compatibility and separate heaps between modules, it is often a requirement that you strictly control in what module these allocation-construction operations happen. And that implies the use of heap-based objects only.
2) Is there another use of delete this apart from this? (supposing that this is a legitimate use of it :) )
Well, your use-case is really just a "how-to", not a "why". Of course, if you are going to use a delete this; statement within a member function, then you must have controls in place to force all creations to occur with new (and in the same translation unit in which the delete this; statement occurs). Not doing so would be very poor style and dangerous. But that doesn't address the "why" of using it.
1) As others have pointed out, one legitimate use-case is an object that can determine when its job is over and consequently destroy itself. For example, an event handler that deletes itself when the event has been handled, a network communication object that deletes itself once the transaction it was assigned is complete, or a task object in a scheduler that deletes itself when the task is done. However, this leaves a big problem: signaling to the outside world that it no longer exists. That's why many have mentioned the "intrusive reference counting" scheme, which is one way to ensure that the object is only deleted when there are no more references to it. Another solution is to use a global (singleton-like) repository of "valid" objects: any access to the object must go through a check in the repository, and the object must add/remove itself from the repository at the same time as it makes the new and delete this; calls (either as part of an overloaded new/delete, or alongside every such call).
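A very rough sketch of that repository idea (all the names here are invented, and a real version would need thread-safety and more care):

#include <set>

class SelfDeleting;  // forward declaration

// Global (singleton-like) registry of objects that are still alive.
class ObjectRegistry {
public:
    static std::set<const SelfDeleting*>& live() {
        static std::set<const SelfDeleting*> objects;
        return objects;
    }
    static bool is_alive(const SelfDeleting* p) {
        return live().count(p) != 0;
    }
};

class SelfDeleting {
public:
    static SelfDeleting* Create() {
        SelfDeleting* p = new SelfDeleting;
        ObjectRegistry::live().insert(p);   // register on creation
        return p;
    }
    void finish() {
        ObjectRegistry::live().erase(this); // unregister just before dying
        delete this;
    }
private:
    SelfDeleting() {}
    ~SelfDeleting() {}
};

int main() {
    SelfDeleting* obj = SelfDeleting::Create();
    if (ObjectRegistry::is_alive(obj)) {
        // safe to use obj here
    }
    obj->finish();  // the object removes itself from the registry and deletes itself
    // any later is_alive(obj) check now returns false, so stale pointers can be detected
}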
However, there is a much simpler and less intrusive way to achieve the same behavior, albeit less economical. One can use a self-referencing shared_ptr scheme. Like so:
#include <memory>   // std::shared_ptr, std::weak_ptr
#include <utility>  // std::forward

class AutonomousObject {
private:
    std::shared_ptr<AutonomousObject> m_shared_this;

protected:
    AutonomousObject(/* some params */);

public:
    virtual ~AutonomousObject() {}

    template <typename... Args>
    static std::weak_ptr<AutonomousObject> Create(Args&&... args) {
        std::shared_ptr<AutonomousObject> result(new AutonomousObject(std::forward<Args>(args)...));
        result->m_shared_this = result;  // link the self-reference.
        return result;                   // return a weak-pointer.
    }

    // This is the function called when the lifetime should be terminated:
    void OnTerminate() {
        m_shared_this.reset();  // drop the self-reference; if no one else holds
                                // a shared_ptr, the object is destroyed here.
    }
};
With the above (or some variation on this crude example, depending on your needs), the object will stay alive for as long as it deems necessary, and it cannot actually be destroyed while someone else is still using it. The weak-pointer mechanism serves as the proxy through which potential outside users of the object can query whether it still exists. This scheme makes the object a bit heavier (it carries a shared-pointer), but it is easier and safer to implement. Of course, you have to make sure that the object eventually deletes itself, but that's a given in this kind of scenario.
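For illustration, an outside user would hold the weak-pointer and lock it before each use, something like this (a sketch assuming the constructor can be called with no arguments):

// Hypothetical usage of the weak-pointer proxy:
std::weak_ptr<AutonomousObject> handle = AutonomousObject::Create();

if (std::shared_ptr<AutonomousObject> obj = handle.lock()) {
    // the object is still alive; 'obj' keeps it alive until the end of this block
} else {
    // the object has already terminated itself; nothing dangles
}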
2) The second use-case I can think of ties in to the second motivation for restricting an object to be heap-only (see above), however, it applies also for when you don't restrict it as such. If you want to make sure that both the deallocation and the destruction are dispatched to the correct module (the module from which the object was allocated and constructed), you must use a dynamic dispatching method. And for that, the easiest is to just use a virtual function. However, a virtual destructor is not going to cut it because it only dispatches the destruction, not the deallocation. The solution is to use a virtual "destroy" function that calls delete this; on the object in question. Here is a simple scheme to achieve this:
// In the header file:
#include <memory>   // std::shared_ptr
#include <utility>  // std::forward

struct CrossModuleDeleter;  // forward-declare.

class CrossModuleObject {
private:
    virtual void Destroy() /* final */;

public:
    CrossModuleObject(/* some params */);  // constructor can be public.
    virtual ~CrossModuleObject() {}        // destructor can be public.

    //.... whatever...

    friend struct CrossModuleDeleter;

    template <typename... Args>
    static std::shared_ptr<CrossModuleObject> Create(Args&&... args);
};

struct CrossModuleDeleter {
    void operator()(CrossModuleObject* p) const {
        p->Destroy();  // do a virtual dispatch to reach the correct deallocator.
    }
};

// In the cpp file:

// Note: This function should not be inlined, so stash it into a cpp file.
void CrossModuleObject::Destroy() {
    delete this;
}

template <typename... Args>
std::shared_ptr<CrossModuleObject> CrossModuleObject::Create(Args&&... args) {
    return std::shared_ptr<CrossModuleObject>(
        new CrossModuleObject(std::forward<Args>(args)...), CrossModuleDeleter());
}
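Usage from any module would then look something like this (again a sketch, assuming a constructor callable with the shown arguments):

// Hypothetical usage:
std::shared_ptr<CrossModuleObject> obj = CrossModuleObject::Create(/* some params */);

// ... use obj from anywhere ...

obj.reset();  // last reference gone: CrossModuleDeleter calls Destroy(), which
              // virtually dispatches 'delete this' back into the module that
              // allocated the object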
The above kind of scheme works well in practice, and it has the nice advantage that the class can act as a base class with no additional intrusion by this virtual-destroy mechanism in the derived classes. And you can also modify it to allow only heap-based objects (as usual, by making the constructors/destructor private or protected). Without the heap-based restriction, the advantage is that you can still use the object as a local variable or data member (by value) if you want, but, of course, there will be loopholes left for whoever uses the class to avoid.
As far as I know, these are the only legitimate use-cases I have ever seen anywhere or heard of (and the first one is easily avoidable, as I have shown, and often should be).