Consider:
#include <iostream>
using namespace std;

class Base
{
public:
    virtual void show() { cout << " In Base \n"; }
};

class Derived : public Base
{
public:
    void show() { cout << "In Derived \n"; }
};

int main(void)
{
    Base *bp = new Derived;
    bp->show(); // RUN-TIME POLYMORPHISM
    return 0;
}
Why does this code cause run-time polymorphism, and why can't it be resolved at compile time?
Runtime polymorphism: compile-time polymorphism is usually illustrated with method overloading, where several methods share the same name but have different parameter lists and the compiler picks the right one from the arguments of each call. Run-time polymorphism is illustrated with method overriding, where the function actually executed depends on the dynamic type of the object the call is made through.
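A minimal sketch contrasting the two, with illustrative names (print, Animal, Dog) that are not from the question:
#include <iostream>

// Compile-time polymorphism: overload resolution picks a function from the
// static types of the arguments, entirely at compile time.
void print(int x)    { std::cout << "int: "    << x << '\n'; }
void print(double x) { std::cout << "double: " << x << '\n'; }

// Run-time polymorphism: the overrider is chosen from the dynamic type of
// the object, which in general is only known at run time.
struct Animal {
    virtual void speak() const { std::cout << "...\n"; }
    virtual ~Animal() = default;
};
struct Dog : Animal {
    void speak() const override { std::cout << "Woof\n"; }
};

int main() {
    print(42);    // calls print(int)    -- resolved at compile time
    print(3.14);  // calls print(double) -- resolved at compile time

    Dog d;
    const Animal& a = d;
    a.speak();    // calls Dog::speak()  -- dispatched at run time
}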
Because in the general case, it's impossible at compile time to determine what type it will be at run time. Your example can be resolved at compile time (see answer by @Quentin), but cases can be constructed that can't, such as:
Base *bp;
if (rand() % 10 < 5)
    bp = new Derived;
else
    bp = new Base;
bp->show(); // only known at run time
EDIT: Thanks to @nwp, here's a much better case. Something like:
Base *bp;
char c;
std::cin >> c;
if (c == 'd')
    bp = new Derived;
else
    bp = new Base;
bp->show(); // only known at run time
Also, as a corollary of Turing's halting proof, it can be shown that in the general case it's mathematically impossible for a C++ compiler to determine at compile time what a base-class pointer will point to at run time.
Assume we have a C++ compiler-like function:
bool bp_points_to_base(const string& program_file);
That takes as its input program_file: the name of any C++ source-code text file in which a pointer bp (as in the OP) calls its virtual member function show(), and that can determine, in the general case, at sequence point A (where the virtual member function show() is first invoked through bp), whether the pointer bp points to an instance of Base or not.
Consider the following fragment of the C++ program "q.cpp":
Base *bp;
if (bp_points_to_base("q.cpp")) // invokes bp_points_to_base on itself
    bp = new Derived;
else
    bp = new Base;
bp->show(); // sequence point A
Now, if bp_points_to_base determines that in "q.cpp" bp points to an instance of Base at A, then "q.cpp" points bp to something else at A. And if it determines that in "q.cpp" bp doesn't point to an instance of Base at A, then "q.cpp" points bp to an instance of Base at A. This is a contradiction, so our initial assumption is incorrect, and bp_points_to_base can't be written for the general case.
Compilers routinely devirtualise such calls when the dynamic type of the object is known at compile time. Pasting your code as-is into Compiler Explorer produces the following assembly:
main: # @main
pushq %rax
movl std::cout, %edi
movl $.L.str, %esi
movl $12, %edx
callq std::basic_ostream<char, std::char_traits<char> >& std::__ostream_insert<char, std::char_traits<char> >(std::basic_ostream<char, std::char_traits<char> >&, char const*, long)
xorl %eax, %eax
popq %rdx
retq
pushq %rax
movl std::__ioinit, %edi
callq std::ios_base::Init::Init()
movl std::ios_base::Init::~Init(), %edi
movl std::__ioinit, %esi
movl $__dso_handle, %edx
popq %rax
jmp __cxa_atexit # TAILCALL
.L.str:
.asciz "In Derived \n"
Even if you cannot read assembly, you can see that only "In Derived \n"
is present in the executable. Not only has dynamic dispatch been optimized out; so has the whole base class.
Why does this code cause run-time polymorphism, and why can't it be resolved at compile time?
What makes you think that it does?
You are making a common assumption: the fact that the language identifies this case as run-time polymorphism does not mean that an implementation is required to dispatch at run time. The C++ Standard has a so-called "as-if" rule: the observable effects of the C++ Standard's rules are described with regard to an abstract machine, and implementations are free to achieve those observable effects however they wish.
Actually, devirtualization is the general term for compiler optimizations that aim to resolve calls to virtual methods at compile time.
The goal is not so much to shave off the nearly unnoticeable virtual-call overhead (if branch prediction works well); it is about removing a black box. The best gains, in terms of optimization, come from inlining the calls: this opens up constant propagation and a whole lot of further optimization, and inlining can only be achieved when the body of the function being called is known at compile time (since it involves removing the call and replacing it with the function body).
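As a minimal sketch of why inlining matters, with an illustrative Square type that is not from the question: once the call below is devirtualized and inlined, constant propagation can reduce the whole function to returning a constant.
#include <iostream>

// Illustrative type, not from the question.
struct Square {
    virtual int apply(int x) const { return x * x; }
};

int nine() {
    Square s;
    // The dynamic type of `s` is known here, so a compiler may devirtualize
    // the call, inline Square::apply, and constant-fold the result to 9.
    return s.apply(3);
}

int main() {
    std::cout << nine() << '\n'; // prints 9
}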
Some devirtualization opportunities:
- a call to a final method, or to a virtual method of a final class, is trivially devirtualized (a small sketch of this case appears at the end of this answer)
- a call to a virtual method of a class defined in an anonymous namespace may be devirtualized if that class is a leaf in the hierarchy
- a call to a virtual method via a base class may be devirtualized if the dynamic type of the object can be established at compile time (which is the case in your example, the construction being in the same function)
For the state of the art, however, you will want to read Honza Hubička's Blog. Honza is a gcc developer, and last year he worked on speculative devirtualization: the goal is to compute the probabilities of the dynamic type being either A, B or C and then speculatively devirtualize the calls, somewhat like transforming:
Base& b = ...;
b.call();
into:
Base& b = ...;
if (b.vptr == &VTableOfA) { static_cast<A&>(b).call(); }
else if (b.vptr == &VTableOfB) { static_cast<B&>(b).call(); }
else if (b.vptr == &VTableOfC) { static_cast<C&>(b).call(); }
else { b.call(); } // virtual call as last resort
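If the speculation misses, the final else branch still performs the normal virtual call, so the transformation preserves behaviour; the payoff comes from the speculated branches, which can now be inlined and further optimized.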
Honza wrote a five-part series of posts on the subject.
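As a small illustration of the first opportunity in the list above, here is a minimal sketch with an illustrative Widget type (not from the question):
#include <iostream>

// `final` guarantees that no class can derive from Widget, so the dynamic
// type of any Widget reference is always Widget itself.
struct Widget final {
    virtual void draw() const { std::cout << "Widget::draw\n"; }
};

void render(const Widget& w) {
    // Even through a reference, the compiler may call (and inline)
    // Widget::draw() directly: there is nothing else it could be.
    w.draw();
}

int main() {
    Widget w;
    render(w);
}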
There are many reasons why compilers cannot in general replace the runtime decision with static calls, mostly because it involves information not available at compile time, e.g. configuration or user input. Aside from that, I want to point out two additional reasons why this is not possible in general.
First, the C++ compilation model is based on separate compilation units. When one unit is compiled, the compiler only knows what is defined in the source file(s) being compiled. Consider a compilation unit with a base class and a function taking a reference to the base class:
struct Base {
    virtual void polymorphic() = 0;
};

void foo(Base& b) { b.polymorphic(); }
When this unit is compiled separately, the compiler has no knowledge of the types that implement Base and thus cannot remove the dynamic dispatch. That is also not something we want, because we want to be able to extend the program with new functionality simply by implementing the interface. It may be possible to do this at link time, but only under the assumption that the program is fully complete; dynamic libraries break that assumption, and, as can be seen below, there will always be cases where it is not possible at all.
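A hedged sketch of the situation, with illustrative file names and a hypothetical Logger implementation that are not from the question:
// interface.h -- the only thing the compiler of foo.cpp gets to see
struct Base {
    virtual void polymorphic() = 0;
    virtual ~Base() = default;
};
void foo(Base& b);

// foo.cpp -- compiled separately; no implementation of Base is visible here,
// so the call must remain a virtual call.
#include "interface.h"
void foo(Base& b) { b.polymorphic(); }

// client.cpp -- possibly written later, or shipped as a plugin / dynamic library
#include "interface.h"
#include <iostream>
struct Logger : Base {
    void polymorphic() override { std::cout << "Logger\n"; }
};
int main() {
    Logger l;
    foo(l); // foo was compiled without any knowledge of Logger
}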
A more fundamental reason comes from computability theory. Even with complete information, it is not possible to define an algorithm that computes whether a certain line in a program will be reached or not. If you could, you could solve the Halting Problem: for a program P, I create a new program P' by adding an additional line to the end of P. The algorithm would now be able to decide whether that line is reached, which solves the Halting Problem.
Being undecidable in general means that compilers cannot, in general, decide which value is assigned to a variable, e.g.
bool someFunction( /* arbitrary parameters */ ) {
    // ...
}
// ...
Base* b = nullptr;
if (someFunction( ... ))
    b = new Derived1();
else
    b = new Derived2();
b->polymorphicFunction();
Even when all parameters are known at compile time, it is not possible to prove in general which path through the program will be taken and which dynamic type b will point to. Approximations can be and are made by optimizing compilers, but there are always cases where they do not work.
Having said that, C++ compilers try very hard to remove dynamic dispatch, because doing so opens up many other optimization opportunities, mainly from being able to inline and propagate knowledge through the code. If you are interested, you can find an interesting series of blog posts on the GCC devirtualization implementation.
That could easily be resolved at compile time if the optimizer chose to do so.
The standard specifies the same behavior as if run-time polymorphism had occurred. It does not specify that this be achieved through actual run-time dispatch.
Basically, the compiler should be able to figure out that your very simple case does not need run-time dispatch. Most probably there are compilers that actually do this, but that is mostly conjecture.
The problem is the general case, when you are building a genuinely complex program. Leaving aside cases with library dependencies, and the complexity of analysing multiple compilation units after compilation (which would require keeping multiple versions of the same code and would blow up AST generation), the real issue boils down to decidability and the halting problem. The latter makes it impossible to decide, in the general case, whether a call can be devirtualized.
The halting problem is to decide whether a program, given an input, will halt (we say the program-input pair halts). It is known that there is no general algorithm, e.g. a compiler, that solves this for all possible program-input pairs.
In order for the compiler to decide, for any program, whether a call needs to stay virtual or not, it would have to be able to decide that for all possible program-input pairs.
To do that, the compiler would need an algorithm A that decides, given a program P1 and a program P2 where P2 makes a virtual call, whether the program P3 { while( {P1,I} != {P2,I} ) } halts for any input I.
Thus, for the compiler to figure out every possible devirtualization, it would have to decide that for any pair (P3, I) over all possible P3 and I, which is undecidable because no such A exists. However, it can be decided for specific cases that can be eyeballed.
That is why the call can be devirtualized in your case, but not in every case.