Consider the following code:
namespace MyApp
{
    using System;
    using System.Collections.ObjectModel;

    class Program
    {
        static void Main(string[] args)
        {
            var col = new MyCollection();
            col.Add(new MyItem { Enum = MyEnum.Second });
            col.Add(new MyItem { Enum = MyEnum.First });

            var item = col[0];
            Console.WriteLine("1) Null ? {0}", item == null);

            item = col[MyEnum.Second];
            Console.WriteLine("2) Null ? {0}", item == null);

            Console.ReadKey();
        }
    }

    class MyItem { public MyEnum Enum { get; set; } }

    class MyCollection : Collection<MyItem>
    {
        public MyItem this[MyEnum val]
        {
            get
            {
                foreach (var item in this) { if (item.Enum == val) return item; }
                return null;
            }
        }
    }

    enum MyEnum
    {
        Default = 0,
        First,
        Second
    }
}
I was surprised to see the following result:
1) Null ? True
2) Null ? False
My first expectation was that, because I was passing an int, the default indexer would be used and the first call would succeed. Instead, the overload expecting an enum is always called (even when casting 0 to int), and the test fails.
EDIT: A workaround seems to be casting the collection to Collection<MyItem>; see this answer.
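A minimal sketch of that workaround, redeclaring the question's types so the snippet stands alone:

```csharp
using System;
using System.Collections.ObjectModel;

enum MyEnum { Default = 0, First, Second }

class MyItem { public MyEnum Enum { get; set; } }

class MyCollection : Collection<MyItem>
{
    public MyItem this[MyEnum val]
    {
        get
        {
            foreach (var item in this) { if (item.Enum == val) return item; }
            return null;
        }
    }
}

class Workaround
{
    static void Main()
    {
        var col = new MyCollection();
        col.Add(new MyItem { Enum = MyEnum.Second });

        // Upcasting removes MyCollection's enum indexer from the candidate
        // set, so the literal 0 binds to Collection<MyItem>'s int indexer.
        var item = ((Collection<MyItem>)col)[0];
        Console.WriteLine(item == null); // False: this is the first element
    }
}
```

The cast only changes the compile-time type used for member lookup; no data is copied.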
So:
Explanation
With this code we are facing two problems: the constant 0 is implicitly convertible to any enum type, and an applicable indexer declared in a derived class hides the one inherited from the base class. For more precise (and better formulated) answers, see the answers below.
The various answers here have sussed it out. To sum up and provide some links to explanatory material:
First, the literal zero is convertible to any enum type. The reason for this is because we want you to be able to initialize any "flags" enum to its zero value even if there is no zero enum value available. (If we had to do it all over again we'd probably not implement this feature; rather, we'd say to just use the default(MyEnum) expression if you want to do that.)
In fact, any constant zero, not just the literal zero, is convertible to any enum type. This is for backwards compatibility with a historic compiler bug that was more expensive to fix than to enshrine.
For more details, see
http://blogs.msdn.com/b/ericlippert/archive/2006/03/28/the-root-of-all-evil-part-one.aspx
http://blogs.msdn.com/b/ericlippert/archive/2006/03/29/the-root-of-all-evil-part-two.aspx
That then establishes that your two indexers -- one which takes an int and one which takes an enum -- are both applicable candidates when passed the literal zero. The question then is which is the better candidate. The rule here is simple: if any candidate is applicable in a derived class then it is automatically better than any candidate in a base class. Therefore your enum indexer wins.
The reason for this somewhat counter-intuitive rule is twofold. First, it seems to make sense that the person who wrote the derived class has more information than the person who wrote the base class. They specialized the base class, after all, so it seems reasonable that you'd want to call the most specialized implementation possible when given a choice, even if it is not an exact match.
The second reason is that this choice mitigates the brittle base class problem. If you added an indexer to a base class that happened to be a better match than one on a derived class, it would be unexpected to users of the derived class that code that used to choose the derived class suddenly starts choosing the base class.
See
http://blogs.msdn.com/b/ericlippert/archive/2007/09/04/future-breaking-changes-part-three.aspx
for more discussion of this issue.
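The effect of that rule can be seen with ordinary methods as well as indexers. In this sketch (BaseC, DerivedC, and M are illustrative names, not from the original code; ConsoleColor is just a convenient built-in enum), the base-class overload is an exact match for the literal 0, yet the derived-class overload wins:

```csharp
using System;

class BaseC
{
    // Exact match for the literal 0.
    public string M(int x) { return "base int"; }
}

class DerivedC : BaseC
{
    // Applicable only via the implicit 0 -> enum conversion, but it wins
    // because any applicable candidate in a derived class beats all
    // candidates declared in a base class.
    public string M(ConsoleColor c) { return "derived enum"; }
}

class Demo
{
    static void Main()
    {
        var d = new DerivedC();
        Console.WriteLine(d.M(0));            // prints "derived enum"
        Console.WriteLine(((BaseC)d).M(0));   // prints "base int"
    }
}
```

As with the indexer workaround, upcasting to BaseC restores the base-class overload, because member lookup then starts at the base type.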
As James correctly points out, if you make a new indexer on your class that takes an int then the overload resolution question becomes which is better: conversion from zero to enum, or conversion from zero to int. Since both indexers are on the same type and the latter is exact, it wins.
It seems that because the enum is int-compatible, it prefers to use the implicit conversion from enum to int and chooses the indexer that takes an enum defined in your class.
(UPDATE: The real cause turned out to be that it prefers the implicit conversion from the const int 0 to the enum over the superclass's int indexer: both conversions are applicable, so the former is chosen because it is declared inside the more derived type, MyCollection.)
I'm not sure why it does this, when there's clearly a public indexer with an int argument out there from Collection<T> -- a good question for Eric Lippert if he's watching this, as he'd have a very definitive answer.
I did verify, though, that if you re-define the int indexer in your new class as follows, it will work:
public class MyCollection : Collection<MyItem>
{
    public new MyItem this[int index]
    {
        // make sure we get Collection<T>'s indexer instead.
        get { return base[index]; }
    }
}
From the spec it looks like the literal 0 can always be implicitly converted to an enum:
13.1.3 Implicit enumeration conversions: An implicit enumeration conversion permits the decimal-integer-literal 0 to be converted to any enum-type.
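Both the literal-zero and constant-zero conversions can be demonstrated directly (Options is an illustrative flags enum, not from the original code):

```csharp
using System;

[Flags]
enum Options { None = 0, A = 1, B = 2 }

class Demo
{
    static void Main()
    {
        Options o = 0;            // OK: the literal 0 converts to any enum type
        const int zero = 0;
        Options o2 = zero;        // also OK: any *constant* zero converts
        // Options o3 = 1;        // CS0266: non-zero constants need an explicit cast
        // int n = 0; Options o4 = n;  // CS0266: non-constant zero doesn't convert
        Console.WriteLine(o == Options.None);   // True
        Console.WriteLine(o2 == Options.None);  // True
    }
}
```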
Thus, if you had called it as
    int index = 0;
    var item = col[index];
it would work, because you are forcing it to choose the int indexer. Alternatively, if you used a non-zero literal:
    var item = col[1];
    Console.WriteLine("1) Null ? {0}", item == null);
this would also work, since 1 cannot be implicitly converted to an enum.
It's still weird, I grant you, considering the indexer from Collection<T> should be just as visible. But it looks like the compiler sees the enum indexer in your subclass, knows that the literal 0 can be implicitly converted to MyEnum to satisfy it, and doesn't go up the class-hierarchy chain.
This seems to be supported by section 7.4.2, Overload Resolution, in the specification, which states in part:
and methods in a base class are not candidates if any method in a derived class is applicable
Which leads me to believe that since the subclass indexer is applicable, the compiler doesn't even check the base class.