While dynamic binding and templates are fundamentally different mechanisms at their core, they can be used to implement the same functionality. The example below implements the same library feature twice: once with dynamic binding (namespace DB, called A below) and once with templates (namespace GP, called B below).
#include <iostream>

namespace DB {
    // interface
    class CustomCode {
    public:
        virtual void operator()(char) const = 0;
    };

    class Lib {
    public:
        void feature(CustomCode const& c) {
            c('d');
        }
    };

    // user code
    class MyCode1 : public CustomCode {
    public:
        void operator()(char i) const override {
            std::cout << "1: " << i << std::endl;
        }
    };

    class MyCode2 : public CustomCode {
    public:
        void operator()(char i) const override {
            std::cout << "2: " << i << std::endl;
        }
    };

    void use() {
        Lib lib;
        lib.feature(MyCode1());
        lib.feature(MyCode2());
    }
}
namespace GP {
    // interface
    template <typename CustomCode>
    class Lib {
    public:
        void feature(CustomCode const& c) {
            c('g');
        }
    };

    // user code
    class MyCode1 {
    public:
        void operator()(char i) const {
            std::cout << "1: " << i << std::endl;
        }
    };

    class MyCode2 {
    public:
        void operator()(char i) const {
            std::cout << "2: " << i << std::endl;
        }
    };

    void use() {
        Lib<MyCode1> lib;
        lib.feature(MyCode1());
        //lib.feature(MyCode2()); <-- illegal
    }
}
While these paradigms are not identical and each has its advantages and disadvantages (A is a bit more powerful, since a single DB::Lib serves both MyCode1 and MyCode2, while B is more flexible for the user), they both allow the same functionality to be implemented, subject to the limitations hinted at above.
Anyway, in theory (TM), A is a bit slower at runtime because of the indirection of the virtual function call, while B offers some great optimisation opportunities, since the methods can be inlined (and of course you don't have the indirection).
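To make the inlining argument concrete, here is a minimal, self-contained sketch (the names VirtBase, call_dynamic, call_static, and Functor are invented for illustration and are not part of the example above): the dynamic call must go through the vtable, while the template call binds directly to the concrete operator() and can be inlined.

#include <iostream>

struct VirtBase {
    virtual ~VirtBase() = default;
    virtual void run(char) const = 0;   // resolved via vtable at runtime
};

struct VirtImpl : VirtBase {
    void run(char c) const override { std::cout << c << '\n'; }
};

void call_dynamic(VirtBase const& b) {
    b.run('d');   // indirect call: load vptr, load slot, jump; hard to inline
}

template <typename F>
void call_static(F const& f) {
    f('g');       // direct call to F::operator(); trivially inlinable
}

struct Functor {
    void operator()(char c) const { std::cout << c << '\n'; }
};

int main() {
    call_dynamic(VirtImpl{});
    call_static(Functor{});
}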
However, I often feel that A is a bit more self-documenting, because you have a clear interface you have to implement (which usually consists of more than one method), while B is a bit more anarchistic (which is where its flexibility comes from).
So my questions are: Are there any general results / comparative studies of these paradigms? Is the speed-up significant? What about compilation time? What are the design implications of either for interfaces in larger systems (I mainly used A for my inter-module interfaces and I haven't done really really big projects so far)?

Note: Saying "dynamic binding is better because it is more powerful" is not at all an answer, because the precondition is that you have a case where both approaches are applicable (otherwise there is no freedom to choose -- at least not reasonably).
Dynamic binding, or late binding, is the mechanism by which a program waits until runtime to bind the name of a called method to an actual subroutine. It is the alternative to early binding, or static binding, where this resolution is performed at compile time.
The major advantage of dynamic binding is flexibility: a single function can handle different types of objects at runtime. This can significantly reduce the size of the codebase and make the source code more readable.
Static binding uses type information to resolve a call, while dynamic binding uses the runtime object. Overloaded methods (deciding which method to call when several share the same name) are resolved using static binding at compile time, while overridden methods are resolved using dynamic binding, i.e. at run time.
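A small illustrative sketch of that distinction (the names Base, Derived, greet, and id are invented for this example): the greet overloads are chosen by the compiler from the static types of the arguments, while the overridden id is chosen at runtime from the dynamic type of the object.

#include <iostream>

struct Base {
    virtual ~Base() = default;
    void greet(int)  { std::cout << "Base::greet(int)\n"; }   // overloads:
    void greet(char) { std::cout << "Base::greet(char)\n"; }  // picked at compile time
    virtual void id() const { std::cout << "Base::id\n"; }    // overridable
};

struct Derived : Base {
    void id() const override { std::cout << "Derived::id\n"; } // picked at runtime
};

int main() {
    Derived d;
    d.greet(42);   // static binding: the int overload is chosen by the compiler
    d.greet('x');  // static binding: the char overload is chosen by the compiler

    Base const& b = d;
    b.id();        // dynamic binding: prints "Derived::id" despite static type Base
}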
Are there any general results / comparative studies of these paradigms?
from what i have seen, many examples and demonstrations can be found in articles and publications. your favorite c++ books should provide several; if you have no such resource, you may want to read Modern C++ Design: Generic Programming and Design Patterns Applied - A. Alexandrescu. there is, however, no specific resource that comes to mind that directly answers your question. as well, the result will vary by implementation and compiler - even compiler settings can greatly affect the outcome of such a test. (i am responding to each of your questions below, although this does not qualify as an answer to this specific one).
Is the speed-up significant?
short answer: it depends.
in your example, the compiler could in fact use static dispatch or even inline the virtual function calls (enough information is visible to the compiler). i am now going to move the responses away from a trivial example (specifically, the OP's) to larger, more complex programs.
expanding on 'it depends': yes, the speed up can range from unmeasurable to huge. you have to (and likely already) realize that the compiler can be provided an incredible amount of information at compilation via generics. it can then use this information to optimize your program much more accurately. one good example of this is the use of std::array vs std::vector. the vector adds flexibility at runtime, but the cost can be quite significant: the vector has to implement resizing, and its dynamic allocations can be costly. there are other differences: the backing allocation of the array will not change (++optimization), the element count is fixed (++optimization), and again - there's no need to call through new in many cases.
you may now be thinking this example has significantly deviated from the original question. in many ways, it's really not so different: the compiler knows more and more about your program as its complexity expands. this information can remove several portions of your program (dead code), and using std::array as an example, the information the type provides is enough that a compiler can easily say "oh, i see that this array's size is seven elements, i will unroll the loop accordingly", so you will have fewer instructions and will have eliminated mispredictions. there's a lot more to it, but in the array/vector case, i have seen the executable size of optimized programs reduce to 20% when converting from vector to an interface similar to array. as well, the code can perform several times faster. in fact, some expressions can be calculated entirely at compilation.
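a minimal sketch of the array/vector point above (the function names are invented here): the same accumulation written twice; with std::array the size is a compile-time constant the optimizer can exploit, while with std::vector it is only known at runtime.

#include <array>
#include <vector>
#include <numeric>

int sum_fixed(std::array<int, 7> const& a) {
    return std::accumulate(a.begin(), a.end(), 0);  // size 7 is visible: unrollable
}

int sum_dynamic(std::vector<int> const& v) {
    return std::accumulate(v.begin(), v.end(), 0);  // size known only at runtime
}

int main() {
    std::array<int, 7> a{};   // no dynamic allocation, fixed backing store
    std::vector<int>   v(7);  // heap allocation, resizable
    return sum_fixed(a) + sum_dynamic(v);
}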
dynamic dispatch still has its virtues, and using it can also improve your program's speed if used correctly - what you really need to learn comes down to deciding when to favor one over the other. similar to how a huge function with many variables cannot be optimized very effectively (the result of all that template expansion in a real program), a virtual function call can in fact be the faster, cleaner approach in many situations. as such, they are two separate features, and you will need some practice to determine which is right (many programmers don't take the time to learn this well enough).
in conclusion, they should be regarded as separate features, applicable to different scenarios. these should (imho) have much less practical overlap than they actually do in the real world.
What about compilation time?
with templates, the compilation and link times during development can be quite high. each time a header/template changes, you will require a recompilation of all dependents -- this can often be a significant point in favor of dynamic dispatch. you can of course reduce this if you plan ahead and structure your builds appropriately - understanding how is a much more difficult subject to master with templates. with templates, you not only increase the frequency of large builds, you often increase the time and complexity of large builds. (more notes follow)
What are the design implications of either for interfaces in larger systems (I mainly used A for my inter-module interfaces and I haven't done really really big projects so far)?
it really depends on your program's expectations. i write virtual less every year (and many others do as well). among other approaches, templates are becoming more and more common. honestly, i don't understand how B is 'anarchistic'. to me, A is a bit anachronistic, as there are plenty of suitable alternatives. it's ultimately a design choice, and architecting large systems well can take a lot of consideration. a good system will use a healthy combination of the language's features. history proves that no feature in this discussion is necessary to write a nontrivial program, but all of them were added because somebody saw better alternatives for some specific uses. you should also expect lambdas to replace virtuals in more than 50% of their current uses in some (not all) teams/codebases.
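a minimal sketch of lambdas standing in for virtuals (LambdaLib and its members are invented names, not a real library): the user supplies a callable instead of deriving from an interface class, either through a template parameter or through std::function.

#include <iostream>
#include <functional>

class LambdaLib {
public:
    // template version: the callable's concrete type is known, so it can inline
    template <typename F>
    void feature(F const& f) { f('l'); }

    // type-erased version: no user-defined class needed either, but the call
    // goes through std::function's internal indirection
    void feature_erased(std::function<void(char)> const& f) { f('e'); }
};

int main() {
    LambdaLib lib;
    lib.feature([](char c) { std::cout << "1: " << c << '\n'; });
    lib.feature_erased([](char c) { std::cout << "2: " << c << '\n'; });
}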
Generalizations:
bottom line: a large, well-designed modern system will use many paradigms effectively and simultaneously. if you use virtuals most of the time at present, you are (imo) doing it wrong -- especially if that is still your approach once you've had time to absorb c++11. if speed, performance, and/or parallelism are also significant concerns, then templates and lambdas deserve to be your closer friends.
Which is better? It depends. You've focused on the overlap; it is better to focus on where the approaches diverge. You have also missed the cases where you need to use both approaches simultaneously.
The biggest advantage of templates is that they offer the ability to cut down, sometimes immensely, on cookie-cutter code. Another advantage of templates is metaprogramming. There are some truly bizarre things you can do thanks to SFINAE.
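A minimal SFINAE sketch (the describe overloads are invented for illustration): substitution failure silently removes the overload that doesn't fit, so a single name serves arithmetic types and size()-bearing containers without any inheritance relationship.

#include <iostream>
#include <string>
#include <type_traits>

// participates in overload resolution only for arithmetic types
template <typename T>
auto describe(T const&) -> std::enable_if_t<std::is_arithmetic_v<T>> {
    std::cout << "arithmetic value\n";
}

// participates only for types with a .size() member; otherwise the
// substitution into decltype fails and the overload vanishes (SFINAE)
template <typename T>
auto describe(T const& t) -> decltype(t.size(), void()) {
    std::cout << "container with " << t.size() << " elements\n";
}

int main() {
    describe(42);                    // picks the arithmetic overload
    describe(std::string("hello"));  // picks the .size() overload
}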
One disadvantage of templates is that the syntax is a bit clunky. There is no way around this. It is what it is. Another disadvantage of templates is that each instantiation is a different class, completely unrelated to the other classes instantiated from the same template class. There is a way around this: Combine both approaches. Make your template class derive from some non-template base class. Of course, now you have lost some of the run time advantages.
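A minimal sketch of that combination (HolderBase and Holder are invented names): every Holder<T> instantiation is a distinct, unrelated class, but deriving from the non-template HolderBase gives them a common interface, at the cost of reintroducing virtual dispatch at that boundary.

#include <iostream>

class HolderBase {                      // non-template base: the shared interface
public:
    virtual ~HolderBase() = default;
    virtual void print() const = 0;
};

template <typename T>
class Holder : public HolderBase {      // each instantiation is a distinct class...
public:
    explicit Holder(T v) : value_(v) {}
    void print() const override { std::cout << value_ << '\n'; }
private:
    T value_;
};

int main() {
    Holder<int>    a(42);
    Holder<double> b(2.5);
    HolderBase const* base = &a;        // Holder<int> and Holder<double> are
    base->print();                      // unrelated, but both are HolderBase
    base = &b;
    base->print();
}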
The biggest advantage of polymorphism is that it is dynamic. This can be a huge, huge win. Don't discount it. There is a performance penalty for such polymorphism, but you are going to pay that penalty one way or another if you want to have a collection of objects that obey a common interface but different objects have different implementations for that interface.
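A minimal sketch of that situation (Animal, Dog, and Cat are invented names): the container holds objects whose concrete types are only known at runtime, so some form of dynamic dispatch is unavoidable, whether paid through virtual or hand-rolled.

#include <iostream>
#include <memory>
#include <vector>

struct Animal {
    virtual ~Animal() = default;
    virtual void speak() const = 0;
};
struct Dog : Animal { void speak() const override { std::cout << "woof\n"; } };
struct Cat : Animal { void speak() const override { std::cout << "meow\n"; } };

int main() {
    std::vector<std::unique_ptr<Animal>> zoo;   // common interface, mixed types
    zoo.push_back(std::make_unique<Dog>());
    zoo.push_back(std::make_unique<Cat>());
    for (auto const& a : zoo)
        a->speak();                             // resolved per object at runtime
}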