When I compare a nullable short against null, the compiler first converts it to a nullable int before performing the comparison. For example, consider this simple code:
short? cTestA = null;
if (cTestA == null) { ... }
It is converted by the compiler to:
short? CS$0$0001 = cTestA;
int? CS$0$0002 = CS$0$0001.HasValue ? new int?(CS$0$0001.GetValueOrDefault()) : null;
if (!CS$0$0002.HasValue) { ... }
This happens for all .NET versions including .NET 4.
What am I missing here? What is the reason for this double conversion just for a HasValue check?
What I expect the compiler to do is emit a simple check with .HasValue: if (cTestA.HasValue) { ... }. At least, that is what I now write in my own code since discovering this conversion.
Why is all this extra code added for such a simple test?
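For illustration, here is a minimal sketch of the two forms side by side; GetShort is a hypothetical placeholder for wherever the value actually comes from, and temp mirrors the compiler-generated temporary:

using System;

short? cTestA = GetShort();

// What the compiler emits for (cTestA == null), conceptually:
int? temp = cTestA.HasValue ? new int?(cTestA.GetValueOrDefault()) : null;
if (!temp.HasValue) { Console.WriteLine("null, via the int? conversion"); }

// The direct check, with no intermediate int?:
if (!cTestA.HasValue) { Console.WriteLine("null, via HasValue"); }

static short? GetShort() => null;   // hypothetical source of the value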
As the above shows, a plain int can never be null; only the nullable wrapper can. Java has the same split: Java primitive types (such as int, double, or float) cannot hold null values, which you must consider when choosing your expression types, while Integer is an object reference that can be compared against null. In other words, null can be assigned to an Integer without a problem, but a null Integer cannot be unboxed to a value of type int. Value types like int contain their values directly, rather than holding a reference the way reference types do. A reference can be null, meaning it points to nothing, but a value cannot have nothing in it.
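Translated into C# terms, a minimal sketch of that distinction (the variable names are illustrative):

using System;

// int a = null;       // compile-time error: a plain value type cannot be null
int? b = null;         // Nullable<int> adds an explicit "no value" state
object c = null;       // a reference can always be null: it points at nothing
short? d = null;       // the same applies to Nullable<short>

Console.WriteLine(b.HasValue);  // False
Console.WriteLine(c == null);   // True
Console.WriteLine(d == null);   // True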
Re: your latest update:
This is a bug in the nullable arithmetic optimizer.
The nullable optimizer will remove the unnecessary conversion to int?
when you do something like:
short? s = null;
int? x = s + 1;
The unoptimized codegen does the equivalent of:
short? s = null;
int? x;
int? temp = s.HasValue ? new int?((int)s.Value) : new int?();
x = temp.HasValue ? new int?(temp.Value + 1) : new int?();
The optimized codegen does the equivalent of:
short? s = null;
int? x;
x = s.HasValue ? new int?((int)s.Value + 1) : new int?();
However, the optimizer contains a bug; we do not remove the unnecessary conversion for equality.
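Concretely, a sketch of what that means for the equality case (temp is an illustrative name, not an actual compiler temporary):

using System;

short? s = null;

// What the compiler generates today for (s == null):
int? temp = s.HasValue ? new int?((int)s.Value) : new int?();
bool viaConversion = !temp.HasValue;

// What the optimizer could generate instead:
bool direct = !s.HasValue;

Console.WriteLine(viaConversion == direct);  // True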
Thanks for bringing it to my attention; we'll fix it for Roslyn. I'm actually about to write the nullable optimizer for Roslyn in the next couple of weeks.
UPDATE: I did write that optimizer, and if you are interested in how it works, I wrote a series of articles on it which starts here:
http://ericlippert.com/2012/12/20/nullable-micro-optimizations-part-one/
See section 4.1.5 of the C# 4.0 language specification. Of particular interest:
C# supports nine integral types: sbyte, byte, short, ushort, int, uint, long, ulong, and char. [omitted text]
The integral-type unary and binary operators always operate with signed 32-bit precision, unsigned 32-bit precision, signed 64-bit precision, or unsigned 64-bit precision:
[omitted bullet points]
For the binary +, –, *, /, %, &, ^, |, ==, !=, >, <, >=, and <= operators, the operands are converted to type T, where T is the first of int, uint, long, and ulong that can fully represent all possible values of both operands. The operation is then performed using the precision of type T, and the type of the result is T (or bool for the relational operators). It is not permitted for one operand to be of type long and the other to be of type ulong with the binary operators.
Operations on short operands are promoted to int, and those operators are then lifted for their nullable counterparts (this leads to sections 7.3.6.2 and 7.3.7), as the sketch below illustrates.
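A minimal sketch of that promotion and lifting in practice:

using System;

short a = 1, b = 2;
var sum = a + b;                   // operands promote to int; sum has type int
Console.WriteLine(sum.GetType());  // System.Int32

short? na = 1, nb = 2;
var nsum = na + nb;                // the lifted operator works over int?, not short?
Console.WriteLine(nsum.HasValue);  // True; nsum has compile-time type int?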
OK, so this is by design, but I still don't understand why they do it. They have optimized string concatenation heavily; why leave numbers alone and add more code for this simple comparison?
That is simply the way the language is designed, with a consideration for optimization on modern architectures. Not specifically in this context, but consider the words of Eric Lippert, as stated here:
Arithmetic is never done in shorts in C#. Arithmetic can be done in ints, uints, longs and ulongs, but arithmetic is never done in shorts. Shorts promote to int and the arithmetic is done in ints, because like I said before, the vast majority of arithmetic calculations fit into an int. The vast majority do not fit into a short. Short arithmetic is possibly slower on modern hardware which is optimized for ints, and short arithmetic does not take up any less space; it's going to be done in ints or longs on the chip.
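One visible consequence of this, as a quick sketch: because the arithmetic itself happens in int, assigning the result back to a short requires an explicit narrowing cast:

using System;

short x = 1, y = 2;
// short z = x + y;          // compile-time error: x + y has type int
short z = (short)(x + y);    // explicit cast back to short is required
Console.WriteLine(z);        // 3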
Your latest update:
What I expect the compiler to do is emit a simple check with .HasValue, if (cTestA.HasValue) { }; at least this is what I do in my own code since discovering this conversion. So this is what I really do not understand: why not do that simple thing instead of adding all this extra code? The compiler always tries to optimize the code, so why does it avoid that simple .HasValue check here? I am surely missing something...
I will have to defer to a compiler expert to say why they elected to go for the conversion instead of the immediate HasValue check, except to note there may simply be an order of operations. The language specification says binary operator operands are promoted, and that is what happens in the provided snippet. The specification goes on to say that a check of the form x == null, where x is a nullable value type, can be converted to !x.HasValue, and that is also what happens. In the compiled code you presented, the numeric promotion simply took precedence over the nullable rewrite.
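Sketched as the two rewrites in the order they appear to be applied (the intermediate forms are illustrative, not actual compiler output):

using System;

short? cTestA = null;
// Rewrite 1 (numeric promotion): the comparison is lifted to int?:
//   (int?)cTestA == (int?)null
// Rewrite 2 (null-check lowering): the comparison becomes a HasValue test:
//   !((int?)cTestA).HasValue
bool viaPromotion = cTestA == null;   // compiled with the int? conversion
bool direct = !cTestA.HasValue;       // hand-written; no conversion needed
Console.WriteLine(viaPromotion == direct);  // True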
As for the compiler always trying to optimize the code: again, an expert can clarify, but this is not the case. There are optimizations it makes itself, and others it defers, perhaps to the jitter. There are optimizations that either the compiler or the jitter may or may not make depending on whether it is a debug or release build, with or without a debugger attached. And undoubtedly there are optimizations they could make but simply elect not to, because the costs versus the benefits do not play out.