
Default to making classes either `final` or giving them a virtual destructor?

Classes with non-virtual destructors are a source of bugs if they are used as base classes, i.e. if a pointer or reference to the base class is used to refer to an instance of a derived class and the object is eventually deleted through that pointer.

With the C++11 addition of final for classes, I am wondering whether it makes sense to set down the following rule:

Every class must fulfil one of these two properties:

  1. be marked final (if it is not (yet) intended to be inherited from)
  2. have a virtual destructor (if it is (or is intended to be) inherited from)

Probably there are cases where neither of these two options makes sense, but I guess they could be treated as exceptions that should be carefully documented.

asked Jun 06 '15 by Simon

1 Answer

Probably the most common actual issue attributed to the lack of a virtual destructor is deletion of an object through a pointer to a base class:

struct Base { ~Base() {} };               // non-virtual destructor
struct Derived : Base { ~Derived() {} };

Base* b = new Derived();
delete b; // undefined behaviour: deleting a Derived through a Base*

A virtual destructor also affects the selection of a deallocation function. The existence of a vtable also influences typeid and dynamic_cast.
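A minimal sketch of that difference, with illustrative class names of my own choosing: typeid applied to a dereferenced pointer and dynamic_cast both care about whether the class is polymorphic, and a virtual destructor is often what makes a base class polymorphic in the first place.

#include <iostream>
#include <typeinfo>

struct PolyBase { virtual ~PolyBase() {} };   // polymorphic: has a vtable
struct PolyDerived : PolyBase {};

int main() {
    PolyBase* p = new PolyDerived();
    std::cout << typeid(*p).name() << '\n';   // dynamic type (name() output is implementation-defined)
    if (dynamic_cast<PolyDerived*>(p)) {      // downcast is only well-formed for polymorphic types
        std::cout << "downcast succeeded\n";
    }
    delete p;                                 // fine: PolyBase has a virtual destructor
    return 0;
}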

If your class isn't used in those ways, there's no need for a virtual destructor. Note that this usage is not a property of a type, neither of type Base nor of type Derived. Inheritance makes such an error possible while using only an implicit conversion. (With explicit conversions such as reinterpret_cast, similar problems are possible without inheritance.)

By using smart pointers, you can prevent this particular problem in many cases: unique_ptr-like types can restrict conversions to a base class to those base classes that have a virtual destructor (*). shared_ptr-like types can store a deleter suitable for deleting the pointee, so a shared_ptr<A> that points to a B is destroyed correctly even without virtual destructors.
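A small sketch of the shared_ptr behaviour just described, reusing the placeholder names A and B from the text:

#include <memory>

struct A { ~A() {} };       // non-virtual destructor
struct B : A { ~B() {} };

int main() {
    std::shared_ptr<A> p = std::make_shared<B>(); // the control block remembers how to destroy the B
    // When p is destroyed, ~B() runs correctly even though ~A() is not virtual.
    return 0;
}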

(*) Although the current specification of std::unique_ptr doesn't contain such a check for the converting constructor template, it was constrained in an earlier draft, see LWG 854. Proposal N3974 introduces the checked_delete deleter, which also requires a virtual dtor for derived-to-base conversions. Basically, the idea is that you prevent conversions such as:

unique_checked_ptr<Base> p(new Derived); // error

unique_checked_ptr<Derived> d(new Derived); // fine
unique_checked_ptr<Base> b( std::move(d) ); // error

As N3974 suggests, this is a simple library extension; you can write your own version of checked_delete and combine it with std::unique_ptr.
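As a rough illustration of that idea (checked_delete and unique_checked_ptr below are my own sketch, not the actual N3974 interface), the deleter's converting constructor can be enabled only when the target base class has a virtual destructor, which blocks the converting-move case shown above:

#include <memory>
#include <type_traits>
#include <utility>

template <class T>
struct checked_delete {
    constexpr checked_delete() noexcept = default;

    // Converting from a deleter for U to a deleter for T is only allowed
    // when T is a base of U *and* T has a virtual destructor.
    template <class U,
              class = typename std::enable_if<
                  std::is_convertible<U*, T*>::value &&
                  std::has_virtual_destructor<T>::value>::type>
    checked_delete(const checked_delete<U>&) noexcept {}

    void operator()(T* p) const { delete p; }
};

template <class T>
using unique_checked_ptr = std::unique_ptr<T, checked_delete<T>>;

struct B2 { ~B2() {} };       // no virtual destructor
struct D2 : B2 {};

int main() {
    unique_checked_ptr<D2> d(new D2);          // fine
    // unique_checked_ptr<B2> b(std::move(d)); // error: checked_delete<D2> does not
    //                                         // convert to checked_delete<B2>
    return 0;
}

Catching the direct unique_checked_ptr<Base> p(new Derived) initialisation as well requires more machinery than this sketch shows.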


Both suggestions in the OP can have performance drawbacks:

  1. Mark a class as final

This prevents the Empty Base Optimization. If you have an empty class, its size must still be >= 1 byte. As a data member, it therefore occupies space. However, as a base class, it is allowed not to occupy a distinct region of memory of objects of the derived type. This is used e.g. to store allocators in StdLib containers. C++20 has mitigated this with the introduction of [[no_unique_address]].

  2. Have a virtual destructor

If the class doesn't already have a vtable, this introduces a vtable per class plus a vptr per object (if the compiler cannot eliminate it entirely). Destruction of objects can become more expensive, which can have an impact, e.g. because the type is no longer trivially destructible. Additionally, this restricts what can be done with the type: the lifetime of an object and its properties are linked to properties of the type such as being trivially destructible. (Both size effects are illustrated in the sketch below.)
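A small sketch illustrating both size effects; the exact numbers are implementation-defined, and the comments show values typical for a 64-bit ABI:

#include <cstdio>

struct Empty {};                              // may be used as a base class
struct FinalEmpty final {};                   // can only be used as a data member

struct AsBase : Empty { int i; };             // EBO: the Empty base typically adds no size
struct AsMember { FinalEmpty e; int i; };     // the member takes >= 1 byte plus padding

struct Plain   { int i; ~Plain() {} };            // no vtable
struct Virtual { int i; virtual ~Virtual() {} };  // adds a vptr per object

int main() {
    std::printf("%zu %zu\n", sizeof(AsBase), sizeof(AsMember)); // e.g. 4 vs 8
    std::printf("%zu %zu\n", sizeof(Plain), sizeof(Virtual));   // e.g. 4 vs 16
    return 0;
}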


final prevents extension of a class via inheritance. While inheritance is typically one of the worst ways to extend an existing type (compared to free functions and aggregation), there are cases where inheritance is the most adequate solution. final restricts what can be done with the type; there should be a very compelling and fundamental reason to do that. You typically cannot imagine all the ways others might want to use your type.

T.C. points out an example from the StdLib: deriving from std::true_type and, similarly, deriving from std::integral_constant (e.g. the placeholders). In metaprogramming, we're typically not concerned with polymorphism and dynamic storage duration. Public inheritance is often just the simplest way to implement metafunctions. I do not know of any case where objects of metafunction type are allocated dynamically. If those objects are created at all, it's typically for tag dispatching, where you'd use temporaries.
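A typical instance of that idiom, using a made-up trait purely for illustration:

#include <type_traits>

// A hypothetical metafunction implemented by deriving from integral_constant
// (via false_type/true_type), exactly like many standard type traits.
template <class T>
struct is_raw_pointer : std::false_type {};

template <class T>
struct is_raw_pointer<T*> : std::true_type {};

static_assert(is_raw_pointer<int*>::value,
              "value is inherited from std::true_type");
static_assert(!is_raw_pointer<int>::value,
              "value is inherited from std::false_type");

int main() { return 0; }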


As an alternative, I'd suggest using a static analyser tool. Whenever you derive publicly from a class without a virtual destructor, you could raise a warning of some sort. Note that there are various cases where you'd still want to derive publicly from some base class without a virtual destructor; e.g. DRY or simply separation of concerns. In those cases, the static analyser can typically be adjusted via comments or pragmas to ignore this occurrence of deriving from a class w/o virtual dtor. Of course, there need to be exceptions for external libraries such as the C++ Standard Library.

Even better, but more complicated, is analysing when an object of class A w/o virtual dtor is deleted, where class B inherits from class A (the actual source of UB). This check is probably not reliable, though: the deletion can happen in a translation unit different from the TU where B is defined (to derive from A). They can even be in separate libraries.

answered Sep 29 '22 by dyp