Is there a technical reason why there is no implicit conversion from DBNull to the various nullable and/or sql types? I understand why the conversions don't currently happen, but don't understand why an implicit conversion wasn't created at the time or added in subsequent versions of the framework.
Just to be clear, I'm looking for technical reasons, not "because that's the way they did it" or "I like it that way".
The most straightforward explanation comes from Microsoft's documentation:
CLR nullable types are not intended for storage of database nulls because an ANSI SQL null does not behave the same way as a null reference (or Nothing in Visual Basic).
Well, I don't know about the SqlTypes case, but there definitely are some technical reasons why adding an implicit conversion between DBNull.Value and values of Nullable&lt;T&gt; with HasValue = false wouldn't work.
Remember, DBNull is a reference type, and despite the fact that Nullable&lt;T&gt; acts like a reference type -- by pretending to be able to take on the null value -- it's actually a value type, with value semantics.
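(A quick illustration of that difference -- my own sketch, not anything from the spec:)

Console.WriteLine(typeof(DBNull).IsValueType);  // False: DBNull is a reference type
Console.WriteLine(typeof(int?).IsValueType);    // True: Nullable<int> is a struct
int? n = null;                                  // looks like a reference assignment...
Console.WriteLine(n.HasValue);                  // ...but it just sets HasValue to false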
In particular, there's a weird edge case when values of type Nullable&lt;T&gt; are boxed. The behavior is special-cased in the runtime to box values of type Nullable&lt;T&gt; to a boxed version of T, not a boxed version of Nullable&lt;T&gt;.
As the MSDN documentation explains it:
When a nullable type is boxed, the common language runtime automatically boxes the underlying value of the Nullable(Of T) object, not the Nullable(Of T) object itself. That is, if the HasValue property is true, the contents of the Value property is boxed. When the underlying value of a nullable type is unboxed, the common language runtime creates a new Nullable(Of T) structure initialized to the underlying value.
If the HasValue property of a nullable type is false, the result of a boxing operation is Nothing. Consequently, if a boxed nullable type is passed to a method that expects an object argument, that method must be prepared to handle the case where the argument is Nothing. When Nothing is unboxed into a nullable type, the common language runtime creates a new Nullable(Of T) structure and initializes its HasValue property to false.
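Here's a quick sketch of that boxing behavior (my own example, just to make it concrete):

int? some = 42;
int? none = null;
object boxedSome = some;                 // boxes the underlying int, not the Nullable<int>
object boxedNone = none;                 // HasValue is false, so boxing produces null
Console.WriteLine(boxedSome.GetType());  // System.Int32
Console.WriteLine(boxedNone == null);    // True
int? roundTrip = (int?) boxedNone;       // unboxing null gives HasValue = false
Console.WriteLine(roundTrip.HasValue);   // False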
Now we get into a tricky problem: the C# language spec (§4.3.2) says we can't use an unboxing conversion to convert DBNull.Value into Nullable&lt;T&gt;:
For an unboxing conversion to a given nullable-type to succeed at run-time, the value of the source operand must be either null or a reference to a boxed value of the underlying non-nullable-value-type of the nullable-type. If the source operand is a reference to an incompatible object, a System.InvalidCastException is thrown.
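You can watch that rule in action today (again, my own sketch):

object v = DBNull.Value;
int? n = (int?) v;  // DBNull is not null and not a boxed int, so this throws InvalidCastException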
And we can't use a user-defined conversion to convert from object to Nullable&lt;T&gt;, either, according to §10.10.3:
It is not possible to directly redefine a pre-defined conversion. Thus, conversion operators are not allowed to convert from or to object because implicit and explicit conversions already exist between object and all other types.
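If you try, the compiler stops you. This deliberately illegal sketch (FakeNullable is just an illustrative name) won't compile:

struct FakeNullable
{
    // error CS0553: user-defined conversions to or from a base class are not allowed
    public static implicit operator FakeNullable(object value)
    {
        return new FakeNullable();
    }
}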
OK, you or I couldn't do it, but Microsoft could just amend the spec, and make it legal, right? I don't think so.
Why? Well, imagine the intended use case: you've got some method that is specified to return object. In practice, it either returns DBNull.Value or int. But how could the compiler know that? All it knows is that the method is specified to return object. And the conversion operator to be applied must be selected at compile time.
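To make that concrete, here's a hypothetical method (GetValue is my own example name, not anything from the framework):

static object GetValue(bool hasRow)
{
    return hasRow ? (object) 42 : DBNull.Value;  // a boxed int or DBNull.Value
}

// All the compiler can see is 'object', so it has nothing to base a conversion on:
int? v = GetValue(true);  // error CS0266: cannot implicitly convert 'object' to 'int?'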
OK, so assume that there is some magical operator that can convert from object to Nullable&lt;T&gt;, and the compiler has some way of knowing when it is applicable. (We don't want it to be used for every method that is specified to return object -- what should it do if the method actually returns a string?) But we still have an issue: the conversion could be ambiguous! If the method returns either long or DBNull.Value, and we write int? v = Method(), what should we do when the method returns a boxed long?
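Even with an explicit cast, the runtime shows why this is a problem (a sketch):

object result = 42L;       // the method happened to return a boxed long
int? v = (int?) result;    // throws InvalidCastException: a boxed long won't unbox to int?
long? w = (long?) result;  // fine -- but the compiler had no way to know to pick this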
Basically, to make this work as intended, you'd have to use the equivalent of dynamic to determine the type at runtime and convert based on the runtime type. But then we've broken another rule: since the actual conversion would only be selected at runtime, there's no guarantee that it would actually succeed, and implicit conversions are not supposed to throw exceptions.
So at this point, we're looking at a change to the specified behavior of the language, a potentially significant performance hit, and on top of that, something that could throw an unexpected exception! That seems like a pretty good reason not to implement it. But if you need one more reason, remember that every feature starts out minus 100 points.
In short: what you really want here can't be done with an implicit conversion anyway. If you want the behavior of dynamic, just use dynamic! This does what you want, and is already implemented in C# 4.0:
object x = 23;    // a boxed int
object y = null;  // a null reference, which is how an "empty" nullable boxes
dynamic dx = x;
dynamic dy = y;
int? nx = (int?) dx;  // conversion selected at runtime: unboxes the int
int? ny = (int?) dy;  // null converts to a Nullable<int> with HasValue = false
Console.WriteLine("nx.HasValue = {0}; nx.Value = {1}; ny.HasValue = {2};",
    nx.HasValue, nx.Value, ny.HasValue);