I don't quite understand how F# infers types in sequence expressions, and why element types are not recognized correctly even when I specify the element type directly on seq.
In the following F# code we have a base class A and two derived classes, B and C:
type A(x) =
    member a.X = x

type B(x) =
    inherit A(x)

type C(x) =
    inherit A(x)
If I try to "yield" their instances in a simple sequence expressions, I get two errors:
// Doesn't work, but it makes sense.
let testSeq = seq {
    yield A(0)
    yield B(1) // Error, expected type: A
    yield C(2) // Error, expected type: A
}
That can make sense, since inferring a "common" type may not be trivial (interfaces, I think, can make it much harder). However, those errors can be fixed with a safe cast:
// Works fine :)
let testSeqWithCast = seq {
    yield A(0)
    yield B(1) :> A
    yield C(2) :> A
}
What if I do not want to use casts? I tried to specify the element type directly on seq, but things still do not work:
// Should work, I think...
let testGenSeq = seq<A> {
    yield A(0)
    yield B(1) // Error, expected type: A
    yield C(2)
}
So, my question is: is there a way to avoid casts? If not, is there a reason why even specifying the type doesn't make the code work?
I tried digging through the following links:
http://msdn.microsoft.com/en-us/library/dd233209.aspx
http://lorgonblog.wordpress.com/2009/10/25/overview-of-type-inference-in-f/
But I found nothing useful...
Thank you in advance for any kind of answer you can give :)
This is a good question, and the answer is probably more complicated than the responses you've gotten so far indicate. For instance, this does work:
let l : A list = [A(0); B(1); C(2)]
but this seemingly analogous code doesn't:
let s : A seq = seq { yield A(0); yield B(1); yield C(2) }
The reason is actually very subtle. The second case desugars to something which is basically a more complicated version of:
let s : A seq =
    Seq.append (Seq.singleton (A(0)))
               (Seq.append (Seq.singleton (B(1)))
                           (Seq.singleton (C(2))))
So what's the problem? Ultimately, the problem is that Seq.singleton has the generic type 'x -> 'x seq, but we want to pass a B and get back an A seq in the second call (by implicitly upcasting the instance). F# will implicitly upcast a function input of one concrete type to a concrete base type (e.g. if Seq.singleton had signature A -> A seq we could pass a B!). Unfortunately, this doesn't happen with generic functions (generics, inheritance, and type inference don't play nicely together).
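To make the distinction concrete, here is a minimal sketch (the two helper functions are made up purely for illustration, reusing A and B from the question):

// Illustrative helpers, not library functions:
let concreteSingleton (a: A) : A seq = Seq.singleton a    // parameter type is the concrete class A
let genericSingleton (x: 'x) : 'x seq = Seq.singleton x   // parameter type is generic

let ok : A seq = concreteSingleton (B(1))    // compiles: B(1) is implicitly upcast to A
// let bad : A seq = genericSingleton (B(1)) // rejected: 'x is inferred as B from the argument, so the result is seq<B>, not seq<A>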
In order to understand the cause of your confusion, you need look no further than the first statement of the link you referred to:
A sequence is a logical series of elements all of one type.
You can return a sequence of only one uniform type, such as seq<A> or seq<obj>. The OOP-ish fact that the types B and C are inherited from A is not relevant. The following may help: all your instances also inherit from obj, but in order to make a seq<obj> from them you still have to cast explicitly:
// Works fine
let testSeq = seq<obj> {
    yield A(0) :> obj
    yield B(1) :> obj
    yield C(2) :> obj
}
or just box them like below:
// Works fine too
let testSeq = seq {
    yield box (A(0))
    yield box (B(1))
    yield box (C(2))
}
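Keep in mind that the elements of such a testSeq come back typed as obj. As a rough sketch (illustrative only), consuming it requires a dynamic type test to treat the elements as A again:

testSeq
|> Seq.iter (fun o ->
    match o with
    | :? A as a -> printfn "an A-derived instance with X = %A" a.X
    | _ -> printfn "something else")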
EDIT: To understand the reasoning behind explicit casting in F#, the following (simplistic) consideration may help. Type inference does not do guessing; unless it can derive the seq type deterministically, or have it explicitly declared, it will complain.
If you just do
let testSeq = seq {
    yield A(0)
    yield B(1)
    yield C(2)
}
the compiler is presented with indeterminism: testSeq can be either seq<A> or seq<obj>, so it complains. When you do
let testSeq = seq {
    yield A(0)
    yield upcast B(1)
    yield upcast C(2)
}
it infers testSeq as seq<A> based on the type of the first member and upcasts B and C to A without complaining. Similarly, if you do
let testSeq = seq {
    yield box (A(0))
    yield upcast B(1)
    yield upcast C(2)
}
it will infer testSeq as seq<obj> based on the type of the first member, this time upcasting the second and third members to obj, not A.
There is no implicit upcasting in F#. You can try inferred upcasting instead:
let testSeq : seq<A> = seq {
    yield A(0)
    yield upcast B(1)
    yield upcast C(2)
}
Or, if it is sufficient for your case, you can use a discriminated union:
type X =
    | A of int
    | B of int
    | C of int

let testSeq = seq {
    yield A 0
    yield B 1
    yield C 2
}
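As a small usage sketch (illustrative only), pattern matching then recovers each case without any casting:

testSeq
|> Seq.iter (function
    | A n -> printfn "A %d" n
    | B n -> printfn "B %d" n
    | C n -> printfn "C %d" n)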