I am updating some old code, and have found several instances where the same object is being cast repeatedly each time one of its properties or methods needs to be called. Example:
if (recDate != null && recDate > ((System.Windows.Forms.DateTimePicker)ctrl).MinDate)
{
    ((System.Windows.Forms.DateTimePicker)ctrl).CustomFormat = "MM/dd/yyyy";
    ((System.Windows.Forms.DateTimePicker)ctrl).Value = recDate;
}
else
{
    ((System.Windows.Forms.DateTimePicker)ctrl).CustomFormat = " ";
}
((System.Windows.Forms.DateTimePicker)ctrl).Format = DateTimePickerFormat.Custom;
My inclination is to fix this monstrosity, but given my limited time I don't want to bother with anything that's not affecting functionality or performance.
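The fix I have in mind is straightforward - cast once into a local and reuse it, something along these lines (untested, local name is just illustrative):
// cast once and reuse the local
var picker = (System.Windows.Forms.DateTimePicker)ctrl;
if (recDate != null && recDate > picker.MinDate)
{
    picker.CustomFormat = "MM/dd/yyyy";
    picker.Value = recDate;
}
else
{
    picker.CustomFormat = " ";
}
picker.Format = DateTimePickerFormat.Custom;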
So what I'm wondering is, are these redundant casts getting optimized away by the compiler? I tried figuring it out myself by using ildasm on a simplified example, but not being familiar with IL I only ended up more confused.
UPDATE
So far, the consensus seems to be that a) no, the casts are not optimized away, but b) while there may be some small performance hit as a result, it is not likely significant, and c) I should consider fixing them anyway. I have come down on the side of resolving to fix these someday, if I have time. Meanwhile, I won't worry about them.
Thanks everyone!
A spot check on the generated machine code in the Release build shows that the x86 jitter doesn't optimize the cast away.
You have to look at the big picture here though. You are assigning properties of a control. They have a ton of side effects. In the case of DateTimePicker, the assignment results in a message being sent to the native Windows control, which in turn crunches away at the message. The cost of the cast is negligible compared to the cost of those side effects. Rewriting the assignments is never going to make a noticeable difference in speed; you'd only make it a fraction of a percent faster.
Go ahead and rewrite the code on a lazy Friday afternoon. But only because it is a blight on readability. It is not entirely a coincidence that poorly readable C# code also produces poorly optimized machine code.
It is not optimized away from IL in either debug or release builds.
A simple C# test:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace RedundantCastTest
{
    class Program
    {
        static object get()
        { return "asdf"; }

        static void Main(string[] args)
        {
            object obj = get();
            if ((string)obj == "asdf")
                Console.WriteLine("Equal: {0}, len: {1}", obj, ((string)obj).Length);
        }
    }
}
Corresponding IL (note the multiple castclass instructions):
.method private hidebysig static void Main(string[] args) cil managed
{
.entrypoint
.maxstack 3
.locals init (
[0] object obj,
[1] bool CS$4$0000)
L_0000: nop
L_0001: call object RedundantCastTest.Program::get()
L_0006: stloc.0
L_0007: ldloc.0
L_0008: castclass string
L_000d: ldstr "asdf"
L_0012: call bool [mscorlib]System.String::op_Equality(string, string)
L_0017: ldc.i4.0
L_0018: ceq
L_001a: stloc.1
L_001b: ldloc.1
L_001c: brtrue.s L_003a
L_001e: ldstr "Equal: {0}, len: {1}"
L_0023: ldloc.0
L_0024: ldloc.0
L_0025: castclass string
L_002a: callvirt instance int32 [mscorlib]System.String::get_Length()
L_002f: box int32
L_0034: call void [mscorlib]System.Console::WriteLine(string, object, object)
L_0039: nop
L_003a: ret
}
Neither is it optimized from the IL in the release build:
.method private hidebysig static void Main(string[] args) cil managed
{
.entrypoint
.maxstack 3
.locals init (
[0] object obj)
L_0000: call object RedundantCastTest.Program::get()
L_0005: stloc.0
L_0006: ldloc.0
L_0007: castclass string
L_000c: ldstr "asdf"
L_0011: call bool [mscorlib]System.String::op_Equality(string, string)
L_0016: brfalse.s L_0033
L_0018: ldstr "Equal: {0}, len: {1}"
L_001d: ldloc.0
L_001e: ldloc.0
L_001f: castclass string
L_0024: callvirt instance int32 [mscorlib]System.String::get_Length()
L_0029: box int32
L_002e: call void [mscorlib]System.Console::WriteLine(string, object, object)
L_0033: ret
}
Neither case means that the casts don't get optimized away when native code is generated - for that you'd need to look at the actual machine assembly, e.g. by running ngen and disassembling the result. I'd be greatly surprised if it wasn't optimized away.
Regardless, I'll cite The Pragmatic Programmer and the broken windows theory: when you see a broken window, fix it.
I have never heard of or seen redundant cast optimizations on the CLR. Let's try a contrived example:
using System;
using System.Diagnostics;

class Program
{
    static void Main()
    {
        object number = 5;
        int iterations = 10000000;
        int[] storage = new int[iterations];

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++) {
            storage[i] = ((int)number) + 1;
            storage[i] = ((int)number) + 2;
            storage[i] = ((int)number) + 3;
        }
        Console.WriteLine(sw.ElapsedTicks);

        storage = new int[iterations];
        sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++) {
            var j = (int)number;
            storage[i] = j + 1;
            storage[i] = j + 2;
            storage[i] = j + 3;
        }
        Console.WriteLine(sw.ElapsedTicks);

        Console.ReadLine();
    }
}
On my machine, running under release, I am consistently getting about 350k ticks for the version with the redundant casts and 280k ticks for the version with the single cast. So no, it looks like the CLR does not optimize for this.
No; FxCop flags this as a performance warning. See info here: http://msdn.microsoft.com/en-us/library/ms182271.aspx
I'd recommend running that over your code if you want to find things to fix.
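The rewrite that warning steers you toward is the usual one: either a single hard cast into a local, or an as-cast plus null check if the control might not actually be a DateTimePicker. A rough sketch against the question's snippet (the local name and the reduced body are illustrative, not the rule's exact wording):
// Cast once with 'as' and only touch the control if the cast succeeded.
var picker = ctrl as System.Windows.Forms.DateTimePicker;
if (picker != null)
{
    picker.CustomFormat = " ";
    picker.Format = DateTimePickerFormat.Custom;
}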