After I upgraded my projects to .NET 4.0 (with VS2010), I realized that they run slower than they did on .NET 2.0 (VS2008). So I decided to benchmark a simple console application in both VS2008 and VS2010 with various Target Frameworks:
using System;
using System.Diagnostics;
using System.Reflection;

namespace RuntimePerfTest
{
    class Program
    {
        static void Main(string[] args)
        {
            // Print the CLR version this assembly was compiled against.
            Console.WriteLine(Assembly.GetCallingAssembly().ImageRuntimeVersion);

            Stopwatch sw = new Stopwatch();
            while (true)
            {
                sw.Reset();
                sw.Start();

                // Time one billion iterations of an empty loop.
                for (int i = 0; i < 1000000000; i++)
                {
                }

                TimeSpan elapsed = sw.Elapsed;
                Console.WriteLine(elapsed);
            }
        }
    }
}
Here are the results:
My initial conclusion is obviously that programs compiled with VS2008 run faster than programs compiled with VS2010.
Can anyone explain these performance differences between VS2008 and VS2010, and between the different Target Frameworks within VS2010 itself?
I think I've got it.
If you're running on a 64-bit machine, make sure the build is set to "Any CPU" rather than "x86". Doing that fixed the issue on my machine.
The default for new projects changed in VS2010 from "Any CPU" to "x86" - I believe this was to make Edit and Continue work by default on 64-bit machines (as it only supports x86).
Running an x86 process on a 64-bit machine is obviously somewhat suboptimal.
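If you want to confirm which bitness your benchmark actually ran under, here is a minimal sketch (not from the original post; it uses only APIs available since .NET 2.0, since Environment.Is64BitProcess only exists from .NET 4.0 onward):

using System;

class BitnessCheck
{
    static void Main()
    {
        // IntPtr.Size is 4 in a 32-bit (x86/WoW64) process and 8 in a 64-bit one.
        // Works on .NET 2.0+; Environment.Is64BitProcess requires .NET 4.0.
        Console.WriteLine("Running as a {0}-bit process", IntPtr.Size * 8);
    }
}

Printing this at the top of the benchmark makes it obvious whether an "Any CPU" build actually ran as x64.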
EDIT: As per Dustin's comments, running x86 rather than x64 can have performance advantages in terms of more efficient use of memory (shorter references).
I also corresponded with Dustin about this by email, and he included these reasons:
FWIW, the default target platform wasn’t changed to support ENC. We had already shipped ENC broken on x64 for 2 releases. So by itself, ENC wasn’t really a compelling reason to switch. The primary reasons we switched (in no particular order) were:
IntelliTrace is not supported on x64. So, one of the coolest new features won’t work on x64 Windows for Any CPU projects.
x64 EXEs run slower on x64 Windows than x86 EXEs do. So, the idea of x86 debug, x64 release would mean that “optimized” builds in Release would actually perform worse.
Customer complaints when deploying an application and finding that it doesn't work, even though it worked on their machine. These were often around P/Invoke, but there are many other assumptions that can be made in an application that can break when run with a different bitness.
The above reasons, coupled with the fact that Any CPU brings no benefits (i.e. you can't actually take advantage of the expanded address space, because the EXE may still run on x86), were the reason the default was switched.
Rick Byers has an excellent post on this topic here.
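To see the P/Invoke failure mode mentioned above in isolation, here is a minimal sketch (the native library name some32bit.dll is hypothetical): an Any CPU build that loads a 32-bit-only native DLL works when the process runs as x86, but the same call fails with a BadImageFormatException when the process runs as x64.

using System;
using System.Runtime.InteropServices;

class PInvokeBitnessDemo
{
    // Hypothetical x86-only native library shipped with the application.
    // A 64-bit process cannot load a 32-bit DLL, so this call fails there.
    [DllImport("some32bit.dll")]
    static extern int DoWork();

    static void Main()
    {
        try
        {
            Console.WriteLine(DoWork());
        }
        catch (BadImageFormatException ex)
        {
            // Typical symptom of an Any CPU EXE running as x64 on 64-bit
            // Windows while its native dependency is x86-only.
            Console.WriteLine("Bitness mismatch: " + ex.Message);
        }
    }
}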
I believe your benchmark is flawed. The IL code from VS 2008 and VS 2010 for your sample program is identical in release mode (VS 2008 targeting .NET 2.0 and VS 2010 targeting .NET 4.0 with default settings). Therefore you should not see a difference in timings between VS 2008 and VS 2010. Both compilers emit the following code:
.method private hidebysig static void Main(string[] args) cil managed
{
  .entrypoint
  // Code size       69 (0x45)
  .maxstack  2
  .locals init ([0] class [System]System.Diagnostics.Stopwatch sw,
           [1] int32 i,
           [2] valuetype [mscorlib]System.TimeSpan elapsed)
  IL_0000:  call       class [mscorlib]System.Reflection.Assembly [mscorlib]System.Reflection.Assembly::GetCallingAssembly()
  IL_0005:  callvirt   instance string [mscorlib]System.Reflection.Assembly::get_ImageRuntimeVersion()
  IL_000a:  call       void [mscorlib]System.Console::WriteLine(string)
  IL_000f:  newobj     instance void [System]System.Diagnostics.Stopwatch::.ctor()
  IL_0014:  stloc.0
  IL_0015:  ldloc.0
  IL_0016:  callvirt   instance void [System]System.Diagnostics.Stopwatch::Reset()
  IL_001b:  ldloc.0
  IL_001c:  callvirt   instance void [System]System.Diagnostics.Stopwatch::Start()
  IL_0021:  ldc.i4.0
  IL_0022:  stloc.1
  IL_0023:  br.s       IL_0029
  IL_0025:  ldloc.1
  IL_0026:  ldc.i4.1
  IL_0027:  add
  IL_0028:  stloc.1
  IL_0029:  ldloc.1
  IL_002a:  ldc.i4     0x3b9aca00
  IL_002f:  blt.s      IL_0025
  IL_0031:  ldloc.0
  IL_0032:  callvirt   instance valuetype [mscorlib]System.TimeSpan [System]System.Diagnostics.Stopwatch::get_Elapsed()
  IL_0037:  stloc.2
  IL_0038:  ldloc.2
  IL_0039:  box        [mscorlib]System.TimeSpan
  IL_003e:  call       void [mscorlib]System.Console::WriteLine(object)
  IL_0043:  br.s       IL_0015
} // end of method Program::Main
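If you want to check this on your own builds without diffing ildasm output by hand, here is a minimal sketch (the assembly path and type name assume the benchmark project above; adjust them to your build output): reflection can pull the raw IL bytes of Main from each compiled EXE so you can compare the two dumps directly.

using System;
using System.Reflection;

class IlDump
{
    static void Main()
    {
        // Path is a placeholder: point it at the VS2008 or VS2010 build output.
        Assembly asm = Assembly.LoadFrom("RuntimePerfTest.exe");

        MethodInfo main = asm.GetType("RuntimePerfTest.Program")
            .GetMethod("Main", BindingFlags.Static | BindingFlags.NonPublic);

        // Raw IL bytes of Main; run once per build and diff the two dumps.
        byte[] il = main.GetMethodBody().GetILAsByteArray();
        Console.WriteLine(BitConverter.ToString(il));
    }
}

Note that metadata tokens are embedded in the byte stream, so this is a quick sanity check rather than a formal proof of equivalence.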
One thing that might be different is the platform target. VS 2010 uses x86 as the default platform target whereas VS 2008 uses AnyCPU. If you are on a 64-bit system, this will result in different JIT compilers being used for the VS 2008 and VS 2010 builds. This might lead to different results, as the x86 and x64 JIT compilers are developed separately.
I agree that the benchmark is flawed.
Most of the additional time is likely taken by the WoW layer in the x86 cases. However, the inherent inefficiencies of an x64 process would very likely outweigh the overhead of the WoW layer in a longer benchmark that actually touches memory. In fact, if the benchmark were to access memory (by creating and accessing objects on the heap), you'd see the benefit of the x86 (WoW) process's smaller pointers.
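As a sketch of what such a memory-touching benchmark could look like (the node type, list size, and structure below are arbitrary choices of mine, not from the original posts): build a long linked list of small heap objects and walk it, so that allocation, GC, and pointer size dominate the timing instead of an empty loop.

using System;
using System.Diagnostics;

class Node
{
    public Node Next;  // 4 bytes per reference on x86, 8 bytes on x64.
    public int Value;
}

class MemoryBenchmark
{
    static void Main()
    {
        const int Count = 10000000; // arbitrary, but large enough to stress memory

        Stopwatch sw = Stopwatch.StartNew();

        // Allocate a linked list; per-object size differs between bitnesses.
        Node head = null;
        for (int i = 0; i < Count; i++)
        {
            head = new Node { Next = head, Value = i };
        }

        // Walk the list; cache pressure from pointer size affects this loop.
        long sum = 0;
        for (Node n = head; n != null; n = n.Next)
        {
            sum += n.Value;
        }

        sw.Stop();
        Console.WriteLine("{0} in {1}", sum, sw.Elapsed);
    }
}

Run the same binary as x86 and x64 (or two builds with different platform targets) and compare the timings.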