I found this strange behaviour in .NET, and even after looking into CLR via C# again I am still confused. Let's assume we have an interface with one method and a class that implements it:
interface IFoo
{
    void Do();
}

class TheFoo : IFoo
{
    public void Do()
    {
        //do nothing
    }
}
Then we simply instantiate this class and call the Do() method a lot of times in two ways: through a variable of the concrete class and through an interface variable:
TheFoo foo1 = new TheFoo();

Stopwatch stopwatch = new Stopwatch();
stopwatch.Start();
for (long i = 0; i < 1000000000; i++)
    foo1.Do();
stopwatch.Stop();
Console.Out.WriteLine("Elapsed time: " + stopwatch.ElapsedMilliseconds);

IFoo foo2 = foo1;

stopwatch = new Stopwatch();
stopwatch.Start();
for (long i = 0; i < 1000000000; i++)
    foo2.Do();
stopwatch.Stop();
Console.Out.WriteLine("Elapsed time: " + stopwatch.ElapsedMilliseconds);
Surprisingly (at least to me), the elapsed times differ by about 10%:
Elapsed time: 6005
Elapsed time: 6667
The difference is not that much, so I would not worry a lot about this in most cases. However, I just can't figure out why this happens even after looking at the IL code, so I would appreciate it if somebody pointed me to something obvious that I am missing.
You have to look at the machine code to see what is going on. When you do, you'll see that the jitter optimizer has completely removed the call to foo1.Do(). Small methods like that get inlined by the optimizer, and since the body of the method contains no code, no machine code is generated at all. It cannot make the same optimization on the interface call; it is not quite smart enough to reverse-engineer that the interface method pointer actually points to an empty method.
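One way to verify this explanation is to keep the jitter from inlining Do() at all; with inlining suppressed, both call sites must perform a real call and the timings should converge. A minimal sketch, modifying the class from the question (MethodImplOptions.NoInlining is a standard part of System.Runtime.CompilerServices; applying it here is my suggestion, not part of the original code):

using System.Runtime.CompilerServices;

class TheFoo : IFoo
{
    // NoInlining forces the jitter to emit a real call at every
    // call site, so the direct call can no longer be removed.
    [MethodImpl(MethodImplOptions.NoInlining)]
    public void Do()
    {
        //do nothing
    }
}

With this in place the direct call is no longer free, and whatever gap remains reflects the actual cost of interface dispatch rather than the cost of a call versus no call at all.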
Check this answer for a list of the common optimizations performed by the jitter. Note the warnings about profiling mentioned in that answer.
NOTE: looking at the machine code in a release build requires changing an option. By default the optimizer is disabled when you debug code, even in a release build. Go to Tools > Options > Debugging > General and untick "Suppress JIT optimization on module load".
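For measurements like this, a benchmarking harness that handles warmup, JIT effects, and statistical noise is more trustworthy than a hand-rolled Stopwatch loop. Here is a sketch using BenchmarkDotNet (a third-party library, not something from the original answer; it assumes the IFoo and TheFoo types from the question are in scope, and must be run in Release):

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class DispatchBenchmarks
{
    private readonly TheFoo foo1 = new TheFoo();
    private IFoo foo2;

    [GlobalSetup]
    public void Setup() => foo2 = foo1; // same object, seen through the interface

    [Benchmark(Baseline = true)]
    public void ClassCall() => foo1.Do();

    [Benchmark]
    public void InterfaceCall() => foo2.Do();
}

public class Program
{
    public static void Main() => BenchmarkRunner.Run<DispatchBenchmarks>();
}

Note that BenchmarkDotNet will warn about benchmarks whose bodies can be eliminated entirely; that warning is precisely the inlining effect described above.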