Let's say I have an array of integers, and I want to get the sum of all the even numbers and the sum of all the odd numbers. For example, for the array [1, 2, 3], the sum of all the odd numbers is 4, and the sum of all the even numbers is 2.
This is how I do it:
array.reduce((odd: 0, even: 0), { (result, int) in
    if int % 2 == 0 {
        return (result.odd, result.even + int)
    } else {
        return (result.odd + int, result.even)
    }
})
This worked fine on its own, but as soon as I try to deconstruct the tuple returned:
let (oddSum, evenSum) = a.reduce((odd: 0, even: 0), { (result, int) in
    if int % 2 == 0 {
        return (result.odd, result.even + int)
    } else {
        return (result.odd + int, result.even)
    }
})
It gives me this error on the return statements:
Value of tuple type '(Int, Int)' has no member 'odd'
Why does deconstructing the tuple cause the generic type to be inferred differently? The deconstruction part should just say what to do to the result. The method call should have been interpreted on its own, and then matched against the pattern (oddSum, evenSum).
To fix this I have to change the first parameter to (0, 0), which makes the code in the closure much less readable: I have to refer to the odd sum as result.0 and to the even sum as result.1.
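For reference, that workaround looks something like this:

let (oddSum, evenSum) = a.reduce((0, 0), { (result, int) in
    if int % 2 == 0 {
        return (result.0, result.1 + int)   // result.1 is the running even sum
    } else {
        return (result.0 + int, result.1)   // result.0 is the running odd sum
    }
})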
This behaviour is unfortunate, but is 'working as expected' due to a combination of two things: the constraint solver's preference for the un-labelled form of a tuple type when binding a type, and the fact that multi-statement closure bodies don't participate in type inference. Both are explained below. First, to the question:

"Why does deconstructing the tuple cause the generic type to be inferred differently? The deconstruction part should just say what to do to the result. The method call should have been interpreted on its own, and then matched against the pattern (oddSum, evenSum)."
The type checker does bidirectional type inference, meaning that the pattern used can influence how the assigned expression is type checked. For example, consider:
func magic<T>() -> T {
    fatalError()
}
let x: Int = magic() // T == Int
The type of the pattern is used to infer that T is Int.
So what happens with a tuple deconstruction pattern?
let (x, y) = magic() // error: Generic parameter 'T' could not be inferred
The type checker creates two type variables to represent each element of the tuple pattern. Such type variables are used internally within the constraint solver, and each must be bound to a Swift type before the constraint system can be considered solved. Within the constraint system, the pattern let (x, y) has the type ($T0, $T1), where $T{N} is a type variable.
The function returns the generic placeholder T, so the constraint system deduces that T is convertible to ($T0, $T1). However, there's no further information for what $T0 and $T1 can be bound to, so the system fails.
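This is not from the original answer, but one quick way to see the role of those type variables is to annotate the pattern yourself (reusing magic from above), which binds $T0 and $T1 to concrete types up front:

// An explicit annotation on the pattern gives $T0 and $T1 concrete types,
// so T can be inferred and the system is solvable again.
let (x, y): (Int, Int) = magic() // T == (Int, Int)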
Okay, let's give the system a way to bind types to those type variables by adding a parameter to the function.
func magic<T>(_ x: T) -> T {
    print(T.self)
    fatalError()
}
let labelledTuple: (x: Int, y: Int) = (x: 0, y: 0)
let (x, y) = magic(labelledTuple) // T == (Int, Int)
This now compiles, and we can see that the generic placeholder T is inferred to be (Int, Int). How did this happen?
Looking at the types involved:

- magic is of type (T) -> T.
- The argument labelledTuple is of type (x: Int, y: Int).
- The pattern let (x, y) has the type ($T0, $T1).

Here we can see that the constraint system has two options; it can either:

1. Bind T to the un-labelled tuple type ($T0, $T1), forcing the argument of type (x: Int, y: Int) to perform a tuple conversion that strips it of its labels.
2. Bind T to the labelled tuple type (x: Int, y: Int), forcing the returned value to perform a tuple conversion that strips it of its labels such that it can be converted to ($T0, $T1).

(This is glossing over the fact that generic placeholders are opened into fresh type variables, but that's an unnecessary detail here.)
Without any rule to favour one option over the other, this is ambiguous. Luckily, however, the constraint system has a rule to prefer the un-labelled version of a tuple type when binding a type. Therefore the constraint system decides to bind T to ($T0, $T1), at which point both $T0 and $T1 can be bound to Int due to the fact that (x: Int, y: Int) needs to be convertible to ($T0, $T1).
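That label-stripping tuple conversion can also be observed at the value level. This small snippet is just an illustration, not part of the original answer:

let labelled: (x: Int, y: Int) = (x: 1, y: 2)
let stripped: (Int, Int) = labelled // the tuple conversion drops the labels
print(stripped.0, stripped.1)       // prints "1 2"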
Let's see what happens when we remove the tuple deconstruction pattern:
func magic<T>(_ x: T) -> T {
    print(T.self)
    fatalError()
}
let labelledTuple: (x: Int, y: Int) = (x: 0, y: 0)
let tuple = magic(labelledTuple) // T == (x: Int, y: Int)
T now gets bound to (x: Int, y: Int). Why? Because the pattern type is now simply of type $T0:

- If T gets bound to $T0, then $T0 will be bound to the argument type (x: Int, y: Int).
- If T gets bound to (x: Int, y: Int), then $T0 will also get bound to (x: Int, y: Int).

In both cases, the solution is the same, so there's no ambiguity. There's no possibility of T getting bound to an un-labelled tuple type, simply because no un-labelled tuple type is introduced into the system in the first place.
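To see the labels actually surviving at runtime, here is the same shape with a hypothetical keep helper in place of magic, so that it returns a value (the name keep is just for illustration):

func keep<T>(_ x: T) -> T { x } // hypothetical stand-in for magic that returns its argument

let labelledTuple: (x: Int, y: Int) = (x: 0, y: 0)
let kept = keep(labelledTuple)
print(type(of: kept)) // prints "(x: Int, y: Int)": T kept the labels
print(kept.x, kept.y) // the labelled members are still accessible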
So, how does this apply to your example? Well, magic is just reduce without the additional closure argument:
public func reduce<Result>(
    _ initialResult: Result,
    _ nextPartialResult: (_ partialResult: Result, Element) throws -> Result
) rethrows -> Result
When you do:
let (oddSum, evenSum) = a.reduce((odd: 0, even: 0), { (result, int) in
    if int % 2 == 0 {
        return (result.odd, result.even + int)
    } else {
        return (result.odd + int, result.even)
    }
})
If we ignore the closure for now, we have the same choice of bindings for Result:

1. Bind Result to the un-labelled tuple type ($T0, $T1), forcing the argument of type (odd: Int, even: Int) to perform a tuple conversion that strips it of its labels.
2. Bind Result to the labelled tuple type (odd: Int, even: Int), forcing the returned value to perform a tuple conversion that strips it of its labels such that it can be converted to ($T0, $T1).

And because of the rule to favour the un-labelled form of a tuple, the constraint solver chooses to bind Result to ($T0, $T1), which gets resolved to (Int, Int). Removing the tuple decomposition works because you no longer introduce the type ($T0, $T1) into the constraint system, meaning that Result can only be bound to (odd: Int, even: Int).
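For completeness, a sketch of that decomposition-free form (assuming the same a: [Int] from the question):

// No tuple pattern on the left means no un-labelled ($T0, $T1) enters the
// constraint system, so Result can only be bound to (odd: Int, even: Int).
let sums = a.reduce((odd: 0, even: 0), { (result, int) in
    if int % 2 == 0 {
        return (result.odd, result.even + int)
    } else {
        return (result.odd + int, result.even)
    }
})
let oddSum = sums.odd
let evenSum = sums.even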
Okay, but let's consider the closure again. We're clearly accessing the members .odd and .even on the tuple, so why can't the constraint system figure out that binding Result to (Int, Int) isn't viable? Well, this is because multi-statement closures don't participate in type inference. This means the closure body is solved independently of the call to reduce, so by the time the constraint system realises that the binding (Int, Int) is invalid, it's too late.

If you reduce the closure down to a single statement, this restriction is lifted and the constraint system can correctly discount (Int, Int) as a valid binding for Result:
let (oddSum, evenSum) = a.reduce((odd: 0, even: 0), { (result, int) in
    return int % 2 == 0 ? (result.odd, result.even + int)
                        : (result.odd + int, result.even)
})
Or if you change the pattern to use the corresponding tuple labels, as pointed out by Martin, the type of the pattern is now (odd: $T0, even: $T1), which avoids the introduction of the un-labelled form into the constraint system:
let (odd: oddSum, even: evenSum) = a.reduce((odd: 0, even: 0), { (result, int) in
    if int % 2 == 0 {
        return (result.odd, result.even + int)
    } else {
        return (result.odd + int, result.even)
    }
})
Another option, as pointed out by Alladinian, is to explicitly annotate the closure parameter type:
let (oddSum, evenSum) = a.reduce((odd: 0, even: 0), { (result: (odd: Int, even: Int), int) in
    if int % 2 == 0 {
        return (result.odd, result.even + int)
    } else {
        return (result.odd + int, result.even)
    }
})
Note however that, unlike the previous two examples, this causes Result to be bound to (Int, Int), due to the pattern introducing the preferred type ($T0, $T1). What allows this example to compile is the fact that the compiler inserts a tuple conversion for the passed closure, which re-adds the tuple labels for its parameter.
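As an aside (not from the original answer), the label re-adding direction of that tuple conversion is easy to observe at the value level:

let plain: (Int, Int) = (3, 4)
let relabelled: (odd: Int, even: Int) = plain // the conversion re-adds the labels
print(relabelled.odd, relabelled.even)        // prints "3 4"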