My understanding is that specifying variance for generics in C# happens at the type declaration level: when you create your generic type, you specify the variance of its type parameters. In Java, on the other hand, variance is specified where a generic is used: when you create a variable of some generic type, you specify how its type arguments can vary.
What are the pros and cons to each option?
I am just going to address the differences between declaration-site and use-site variance, since, while C# and Java generics differ in many other ways, those differences are mostly orthogonal to variance.
First off, if I remember correctly, use-site variance is strictly more powerful than declaration-site variance (although at the cost of concision), or at least Java's wildcards are (which are actually more powerful than plain use-site variance). This increased power is particularly useful in languages where stateful constructs are used heavily, such as C# and Java (but much less so in Scala, especially since its standard lists are immutable). Consider List<E> (or IList<E>). Since it has methods for both adding E's and getting E's, it is invariant with respect to E, and so declaration-site variance cannot be used. However, with use-site variance you can just say List<+Number> to get the covariant subset of List and List<-Number> to get the contravariant subset of List. In a declaration-site language, the designer of the library would have to make separate interfaces (or classes, if you allow multiple inheritance of classes) for each subset and have List extend those interfaces. If the library designer does not do this (note that C#'s IEnumerable covers only a small subset of the covariant portion of IList), then you're out of luck and you have to fall back on the same workarounds you would need in a language without any variance at all.
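To make this concrete, here is a minimal Scala 2 sketch; its existential bounds (_ <: T and _ >: T) play the same role as Java's wildcards, the invariant mutable Buffer stands in for List above, and sum and fill are names of my own, not library methods:

```scala
import scala.collection.mutable.Buffer

// Buffer[E] is invariant: it both accepts E's (+=) and produces E's.
// Use-site bounds recover the covariant and contravariant subsets per use.

// Covariant use: we only read, so any Buffer of a Number subtype works.
def sum(xs: Buffer[_ <: Number]): Double =
  xs.foldLeft(0.0)(_ + _.doubleValue)

// Contravariant use: we only write, so any Buffer of an Integer supertype works.
def fill(xs: Buffer[_ >: Integer]): Unit =
  (1 to 3).foreach(i => xs += Int.box(i))

val ints = Buffer[Integer](Int.box(1), Int.box(2))
val nums = Buffer.empty[Number]

println(sum(ints)) // 3.0 -- Buffer[Integer] conforms to Buffer[_ <: Number]
fill(nums)         //        Buffer[Number]  conforms to Buffer[_ >: Integer]
```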
So those are the advantages of use-site variance over declaration-site variance. The advantage of declaration-site variance over use-site variance is basically concision for the user (provided the designer went through the effort of separating every class/interface into its covariant and contravariant portions). For something like IEnumerable or Iterator, it's nice not to have to specify covariance every single time you use the interface. Java makes this especially annoying with its lengthy wildcard syntax (except for bivariance, for which Java's solution is basically ideal).
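Here is what that one-time declaration looks like as a Scala sketch; Producer and Consumer are illustrative names of mine, roughly analogous to C#'s IEnumerable<out T> and IComparer<in T>:

```scala
// Declaration-site covariance: the + is written once, at the declaration.
trait Producer[+A] {
  def next(): A
}

// Declaration-site contravariance, likewise written once with -.
trait Consumer[-A] {
  def accept(a: A): Unit
}

val ints: Producer[Int] = new Producer[Int] { def next() = 42 }
val anys: Producer[Any] = ints // no variance annotation needed at the use site

val anyPrinter: Consumer[Any] = new Consumer[Any] { def accept(a: Any) = println(a) }
val intPrinter: Consumer[Int] = anyPrinter // nor here
```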
Of course, these two language features can coexist. For type parameters that are naturally covariant or contravariant (such as in IEnumerable/Iterator), declare the variance at the declaration. For type parameters that are naturally invariant (such as in (I)List), declare what kind of variance you want each time you use it. Just don't specify a use-site variance for arguments that already have a declaration-site variance, as that just makes things confusing.
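Scala already mixes the two styles, so a short sketch can show the division of labor (len, drain, and lenNoisy are hypothetical helpers of mine):

```scala
import scala.collection.mutable.Buffer

// Naturally covariant: immutable List is declared List[+A], so a plain
// List[Number] parameter already accepts a List[Integer].
def len(xs: List[Number]): Int = xs.length

// Naturally invariant: mutable Buffer[A] picks its variance at each use site.
def drain(b: Buffer[_ <: Number]): Unit = b.clear()

// Redundant and confusing: List is already covariant, so this use-site
// bound adds nothing but noise.
def lenNoisy(xs: List[_ <: Number]): Int = xs.length
```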
There are other, more detailed issues I haven't gone into (such as how wildcards are actually more powerful than use-site variance), but I hope this answers your question to your satisfaction. I'll admit I'm biased towards use-site variance, but I tried to portray the major advantages of both that have come up in my discussions with programmers and with language researchers.
Most people seem to prefer declaration-site variance, because it makes things easier for users of the library (while making them a bit harder for the library developer, although I would argue that the library developer has to think about variance regardless of where the variance is actually written).
But keep in mind that neither Java nor C# is an example of good language design.
While Java got variance right, and working independently of the JVM thanks to compatible VM improvements in Java 5 and type erasure, its use-site variance makes usage a bit cumbersome, and its particular implementation of type erasure has drawn well-deserved criticism.
C#'s model of declaration-site variance takes the burden off the user of the library, but when reified generics were introduced, the variance rules were essentially baked into the VM itself. Even today C# can't fully support co-/contravariance because of this mistake (and the non-backward-compatible introduction of the reified collection classes split programmers into two camps).
This imposes a difficult restriction on all languages targeting the CLR, and it is one reason why alternative programming languages are much more lively on the JVM, even though the CLR appears to have "much nicer features".
Let's look at Scala: Scala is a fully object-oriented, functional hybrid running on the JVM. It uses type erasure like Java, but both its implementation of generics and its (declaration-site) variance are easier to understand, more straightforward, and more powerful than Java's (or C#'s), because the VM doesn't impose rules on how variance has to work. The Scala compiler checks the variance annotations and can reject unsound source code at compile time instead of throwing exceptions at runtime, while the resulting .class files can be used seamlessly from Java.
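A quick sketch of that compile-time check; Box is an illustrative class of mine, not a library type:

```scala
// Rejected at compile time: a covariant type parameter may not appear
// in a method-parameter (contravariant) position.
//
//   class Box[+A] { def put(a: A): Unit = () }
//   // error: covariant type A occurs in contravariant position
//
// The sound formulation uses a lower bound and returns a new Box:
class Box[+A](val value: A) {
  def replaced[B >: A](b: B): Box[B] = new Box(b)
}

val intBox: Box[Int] = new Box(1)
val anyBox: Box[Any] = intBox // fine: Box is covariant in A
```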
One disadvantage of declaration-site variance is that it seems to make type inference harder in some cases.
At the same time, Scala can use primitive types in collections without boxing them, as C# can, by using the @specialized annotation, which tells the Scala compiler to generate one or more additional implementations of a class or method, each specialized to a requested primitive type.
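A minimal sketch of the annotation; Cell is an illustrative class of mine:

```scala
// The compiler emits extra variants of Cell for Int and Double,
// so values of those types are stored unboxed.
class Cell[@specialized(Int, Double) T](var value: T)

val c = new Cell(42) // resolves to the Int-specialized variant; no boxing
c.value = 7
```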
Scala can also "almost" reify generics by using Manifests, which allow it to retrieve generic type information at runtime, much as C# does.
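For example, here is a sketch using ClassTag, the successor of Manifest since Scala 2.10 (newArray is a hypothetical helper of mine):

```scala
import scala.reflect.ClassTag

// The context bound carries the erased type argument to runtime,
// which is exactly what creating a typed array requires.
def newArray[T: ClassTag](n: Int): Array[T] = new Array[T](n)

val xs = newArray[String](3)
println(xs.getClass.getComponentType) // class java.lang.String
```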