I have some questions from comparing multiple inheritance in C++ and Java.
Java also supports multiple and multi-level inheritance through interfaces - but why doesn't it need anything like a virtual base class, as C++ does? Is it because the members of a Java interface are guaranteed a single copy in memory (they are public static final), and its methods are only declared, not defined?
Apart from saving memory, is there any other use for virtual base classes in C++? Are there any caveats if I forget to use this feature in my multiple-inheritance programs?
This one is a bit philosophical - but why didn't the C++ designers make every base class virtual by default? What was the need for providing this flexibility?
Examples will be appreciated. Thanks!
1) Java interfaces don't have attributes (instance fields). One reason for virtual base classes in C++ is to prevent duplicate attributes, and all the difficulties associated with them (see the first sketch after this list).
2) There is at least a slight performance penalty for using virtual base classes in C++. Also, the constructors become so complicated that it is advised that virtual base classes have only no-argument constructors (see the second sketch after this list).
3) Exactly because of the C++ philosophy: one should not pay a penalty for a feature one may not need.
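To make point 1 concrete, here is a minimal diamond-inheritance sketch (the names Base, Left, Right, and D are just illustrative): marking the bases virtual means D contains a single Base subobject, so d.id is unambiguous.

```cpp
#include <iostream>

struct Base {
    int id = 0;
};

struct Left : virtual Base {};   // virtual: share one Base subobject
struct Right : virtual Base {};  // virtual: share the same Base

struct D : Left, Right {};

int main() {
    D d;
    d.id = 42;                   // unambiguous: exactly one Base::id exists
    std::cout << d.id << '\n';
    // With non-virtual inheritance, D would contain two copies of id,
    // and you would have to write d.Left::id or d.Right::id.
}
```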
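And for point 2, a sketch of why the constructors get awkward: the most-derived class must initialize the virtual base itself, and any Base initializers written in the intermediate classes are silently ignored - which is why no-argument constructors are recommended.

```cpp
#include <iostream>

struct Base {
    explicit Base(int v) : value(v) {}
    int value;
};

struct Left : virtual Base {
    Left() : Base(1) {}   // ignored when Left is not the most derived class
};

struct Right : virtual Base {
    Right() : Base(2) {}  // likewise ignored
};

struct D : Left, Right {
    D() : Base(3) {}      // D must initialize the virtual base itself
};

int main() {
    D d;
    std::cout << d.value << '\n';  // prints 3, not 1 or 2
}
```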
Sorry - not a Java programmer, so short on details. Still, virtual bases are a refinement of multiple inheritance, which Java's designers have always defended omitting on the grounds that it's overly complicated and arguably error-prone.
Virtual bases aren't just for saving memory - the data is shared by the different subobjects inheriting from them, so those derived types can use it to coordinate their behaviour in some way. They're not useful all that often, but as an example: object identifiers, where you want one id per most-derived object rather than one per base subobject. Another example: ensuring that a multiply-derived type can be unambiguously converted to a pointer-to-base, keeping it easy to use in functions operating on the base type, or to store in containers of Base*. A sketch of both follows.
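Here is a minimal sketch of both uses, with made-up names (Identified, Sensor, Logger, LoggingSensor):

```cpp
#include <iostream>

struct Identified {
    int id;
    Identified() : id(next_id++) {}
    static int next_id;
};
int Identified::next_id = 0;

struct Sensor : virtual Identified {};
struct Logger : virtual Identified {};

// Multiply-derived, yet still exactly one Identified subobject,
// so exactly one id is consumed per object.
struct LoggingSensor : Sensor, Logger {};

int main() {
    LoggingSensor ls;
    std::cout << ls.id << '\n';   // one id, no ambiguity
    Identified* p = &ls;          // converts to pointer-to-base unambiguously
    std::cout << p->id << '\n';
    // With non-virtual bases, both lines above would be ambiguous
    // and fail to compile.
}
```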
As C++ is currently standardised, a type deriving from two classes can typically expect them to operate independently, just as objects of those types do when created on their own on the stack or heap. If everything were virtual, that independence would suddenly depend on whatever other types happen to appear in the inheritance hierarchy - all sorts of interactions would become the default, and derivation itself would become less useful. So, to your question of why virtual isn't the default: because it's the less intuitive, more dangerous and error-prone of the two modes. A sketch of that independence follows.
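A sketch of the non-virtual default at work, again with illustrative names: each base keeps its own Counter subobject, which here is exactly what's wanted.

```cpp
#include <iostream>

struct Counter {
    int hits = 0;
    void bump() { ++hits; }
};

struct ReadStats : Counter {};
struct WriteStats : Counter {};

// Non-virtual bases: reads and writes are tracked separately,
// with no accidental sharing between the two Counter subobjects.
struct FileStats : ReadStats, WriteStats {};

int main() {
    FileStats f;
    f.ReadStats::bump();
    f.WriteStats::bump();
    f.WriteStats::bump();
    std::cout << f.ReadStats::hits << ' '     // 1
              << f.WriteStats::hits << '\n';  // 2
    // Had Counter been a virtual base by default, the two
    // counts would silently merge into one.
}
```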