I have a base class Base from which I derive several polymorphic subclasses. Some of the base class's functions are pure virtual, while others are used directly by the subclasses.
(This is all in C++)
So for instance:
#include <iostream>

class Base
{
protected:
    float my_float;
public:
    virtual void Function() = 0;
    void SetFloat(float value){ my_float = value; }
};
class subclass : public Base
{
public:
    void Function() override { std::cout << "Hello, world!" << std::endl; }
};
class subclass2 : public Base
{
public:
    void Function() override { std::cout << "Hello, mars!" << std::endl; }
};
So as you can see, the subclasses would rely on the base class for the function that sets "my_float", but would be polymorphic with regard to the other function.
So I'm wondering if this is good practice. If you have an abstract base class, should you make it completely abstract or is it okay to do this sort of hybrid approach?
This is a common practice. In fact, some well-known design patterns rely on it, such as the Template Method Pattern. In a nutshell, it lets you fix some aspects of the behaviour described by your class hierarchy as invariant, while letting other aspects vary with the dynamic type of the instance you are referring to at a given point.
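To make that concrete, here is a minimal sketch of the Template Method Pattern; the names (Report, Print, PrintBody, SalesReport) are hypothetical, chosen only to illustrate the invariant/varying split:

#include <iostream>

class Report
{
protected:
    virtual void PrintBody() = 0; // the varying step, supplied by subclasses
public:
    virtual ~Report() = default;
    void Print()                  // the invariant algorithm, fixed in the base
    {
        std::cout << "=== header ===" << std::endl;
        PrintBody();
        std::cout << "=== footer ===" << std::endl;
    }
};

class SalesReport : public Report
{
protected:
    void PrintBody() override { std::cout << "sales figures" << std::endl; }
};

Calling Print() on any Report runs the same skeleton; only PrintBody() varies by subclass.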
Whether or not it is a good idea depends on your precise use case: does it make sense for you to share the implementation of your float member's storage among all your derived classes? This is a bit hard to answer from the example you posted, since the derived classes do not use my_float in any way, but there are plenty of cases where it makes sense and is a good way to split the responsibilities within your class hierarchy.
Even in cases where it does make sense to share implementation details across classes, you have several other options, such as using composition to share functionality. Sharing functionality through a base class is often less verbose than sharing it via composition, because it lets you share both the implementation and the interface. To illustrate, your solution has less duplicated code than this alternative that uses composition:
class DataStorage {
private:
float data_;
public:
DataStorage()
: data_(0.f) {
}
void setFloat(float data) {
data_ = data;
}
};
class NotASubclass1 {
private:
DataStorage data_;
public:
void SetFloat(float value){ data_.setFloat(value); }
...
};
class NotASubclass2 {
private:
DataStorage data_;
public:
void SetFloat(float value){ data_.setFloat(value); }
...
};
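Note what the composition version costs you: each Not-A-Subclass must re-declare the forwarding SetFloat wrapper, and the two classes no longer share a common interface through which they can be used polymorphically. That duplication is exactly what the base-class approach saves.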
Being able to make some functions non-virtual has certain benefits, many strongly related:

- you can modify them, knowing invocations via a Base* / Base& will use your modified code regardless of the actual derived type the Base* points to
  - for example, you can collect performance measurements for all Base* / Base& usage, regardless of derivation
- the Non-Virtual Interface (NVI) approach aims for the "best of both worlds": non-virtual functions call non-public virtual functions, giving you a single place in Base to intercept calls via a Base* / Base& as well as customisability (see the sketch after this list)
- calls to the non-virtual functions will likely be faster: if inlined, up to around an order of magnitude faster for trivial functions like get/set of few-byte fields
- you can ensure invariants for all objects derived from Base, selectively encapsulating some private data and the functions that affect it (the final keyword introduced in C++11 lets you do this further down the hierarchy)
- having data/functionality "finalised" in the Base class aids understanding and reasoning about class behaviour, and the factoring makes for more concise code overall, but necessarily at the cost of frustrating flexibility and unforeseen reuse - tune to taste