Take this small LINQPad example:
void Main()
{
    Foo<object> foo = new Foo<string>();
    Console.WriteLine(foo.Get());
}

class Foo<out T>
{
    public T Get()
    {
        return default(T);
    }
}
It fails to compile with this error:
Invalid variance modifier. Only interface and delegate type parameters can be specified as variant.
I don't see any logical problem with the code. Everything can be statically verified. Why is this not allowed? Would it cause some inconsistency in the language, or was it deemed too expensive to implement due to a limitation in the CLR? If it is the latter, what should I as a developer know about said limitation?
Considering that interfaces support it, I would have expected class support to logically follow from that.
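For example, the built-in covariant interface IEnumerable<out T> and the covariant delegate Func<out TResult> both allow exactly the kind of assignment I'm attempting:

    IEnumerable<object> objects = new List<string> { "a", "b" };  // compiles fine
    Func<string> getString = () => "hello";
    Func<object> getObject = getString;                           // compiles fine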
One reason would be:
class Foo<out T>
{
    T _store;

    public T Get()
    {
        _store = default(T);
        return _store;
    }
}
This class contains a feature that is not covariant, because it has a field, and fields can be set to values. It is, though, used in a covariant way, because the field is only ever assigned the default value, and that default is only ever going to be null in any case where covariance is actually used.
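That's guaranteed because variance only applies to reference type arguments; with a value type argument, no covariant conversion exists to begin with:

    IEnumerable<object> fine = new List<string>();  // string is a reference type: allowed
    IEnumerable<object> nope = new List<int>();     // does not compile: no variance over value types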
As such, it's not clear whether we could allow it. Disallowing it would irritate users (it does, after all, match the same potential rules you suggest), but allowing it is difficult: the analysis has gotten slightly tricky already, and we haven't even begun to hunt for the really tricky cases.
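As a sketch of how tricky that hunt would get (hypothetical out on a class, so none of this is real C#), consider a method that appears merely to move values of type T between fields, yet is unsound the moment covariance is used:

    class Foo<out T>                        // hypothetical syntax
    {
        T _store;

        public void CopyFrom(Foo<T> other)  // looks harmless: a T into a T field
        {
            _store = other._store;
        }

        public T Get() { return _store; }
    }

If Foo<string> were convertible to Foo<object>, calling CopyFrom on the converted reference with a Foo<object> argument would plant an arbitrary object in a field whose runtime type is string. An analysis that blesses the safe field uses while catching cases like this is considerably harder than a simple positional check.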
On the other hand, the analysis of this is much simpler:
void Main()
{
    IFoo<object> foo = new Foo<string>();
    Console.WriteLine(foo.Get());
}

interface IFoo<out T>
{
    T Get();
}

class Foo<T> : IFoo<T>
{
    T _store;

    public T Get()
    {
        _store = default(T);
        return _store;
    }
}
It's easy to determine that none of the implementation of IFoo<T> breaks the covariance, because the interface hasn't got any implementation. All that's necessary is to make sure there is no use of T as a parameter (including as the parameter of a setter), and it's done.
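For example, this is rejected immediately, because T appears in an input position:

    interface IBad<out T>
    {
        T Get();            // fine: T only in an output position
        void Set(T value);  // error: 'T' must be contravariantly valid here
    }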
The fact that the potential restrictions are a lot more arduous on a class than on an interface, for similar reasons, also reduces the degree to which covariant classes would be useful. They certainly wouldn't be useless, but the balance of how useful they would be against how much work it would take to specify and implement the rules about what they would be allowed to do is much less favourable than the balance of how useful covariant interfaces are against how much work it was to specify and implement them.
Certainly, the difference is enough that it's past the point of "well, if you're going to allow X it would be silly to not allow Y…".
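The BCL reflects the same split in practice: List<T> itself is invariant, and its covariant surface is exposed through interfaces such as IEnumerable<out T> and IReadOnlyList<out T>:

    List<string> names = new List<string> { "a", "b" };
    IEnumerable<object> sequence = names;   // covariance via the interface
    IReadOnlyList<object> view = names;     // likewise
    // List<object> bad = names;            // does not compile: the class is invariant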