I was learning functional programming in Swift through Haskell and came across this interesting concept of Transducers. One code example implemented a mapping function that basically spits out a Transducer, given that we pass the transformation rule as a parameter.
Inspired, I quickly translated this to a Swift equivalent and this is what I got:
func mapping<A, B, C>(_ f: @escaping (A) -> B) -> ((C, B) -> C) -> (C, A) -> C {
    return { r in
        return { result, a in
            return r(result, f(a))
        }
    }
}
My question now is this: notice how the transformation function goes from A to B (A -> B), but the transducer goes from B to A (((C, B) -> C) -> ((C, A) -> C))? Why is that? I'm sure this is not coincidental, because the order matters. Haskell experts, anyone?
The order is inverted because your mapping is acting on the argument. You are using an A -> B function to make a (C, A) pair fit the type of a (C, B) -> C function. So what you get in the end is a (C, A) -> C function, which converts the A in the pair to a B as a preliminary step.
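To make this concrete, here is a small self-contained sketch in current Swift (the reducer and the names appendLength and lengthOfDigits are just for illustration): the transducer produced by mapping pre-processes each element before the original reducer ever sees it.

```swift
// Current Swift spelling of the question's `mapping`.
func mapping<A, B, C>(_ f: @escaping (A) -> B) -> ((C, B) -> C) -> (C, A) -> C {
    return { r in
        return { result, a in
            return r(result, f(a))
        }
    }
}

// A reducer that consumes Strings (the B side)...
let appendLength: (Int, String) -> Int = { acc, s in acc + s.count }

// ...pre-processed so it can consume Ints (the A side).
let lengthOfDigits = mapping { (n: Int) in String(n) }(appendLength)

let total = [1, 22, 333].reduce(0, lengthOfDigits)
// total == 6  (1 + 2 + 3 digit characters)
```

Note that appendLength itself never changes; mapping only adapts what it accepts.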
On a more general note, mapping provides an example of a contravariant functor, as opposed to covariant functors (e.g. Haskell's Functor class), in which the order is not inverted. For the sake of illustration, here is a relevant Haskell package.
Look at where the call to f actually takes place. It's transforming an A to a B, and the result is used as an argument to r. Therefore r must be expecting a B; we're using an A -> B function to preprocess the input to a (C, B) -> C function, resulting in a (C, A) -> C function.
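Tracing the types with concrete choices (say A = Int, B = String, C = [String]; the names f and r are from the answer, the values below are illustrative) makes the direction visible:

```swift
let f: (Int) -> String = { "\($0)!" }                            // A -> B
let r: ([String], String) -> [String] = { acc, s in acc + [s] }  // (C, B) -> C

// The wrapped reducer: f converts the A before r ever sees it.
let preprocessed: ([String], Int) -> [String] = { result, a in r(result, f(a)) }

let out = preprocessed(["start"], 7)
// out == ["start", "7!"]
```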
In general this reversal happens whenever we're transforming a system to change its "input". If we're transforming a system to change its "output", there is no reversal1.
X -> A ---> A -> B ---> B -> Y
If I have an A -> B function and I want to make from it something that emits Ys instead, I need to map a B -> Y function over its output. This is known as covariance, because the thing I want to change (the B in A -> B) "varies with" the function I map over it (B -> Y). The B is said to be in a positive position in A -> B.
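A minimal sketch of the covariant case (mapOutput is a hypothetical name for "map over the result"; no reversal occurs, since mapping B -> Y over an A -> B yields an A -> Y):

```swift
// Post-composition changes the output type: (B -> Y) lifts (A -> B) to (A -> Y).
func mapOutput<A, B, Y>(_ g: @escaping (B) -> Y) -> ((A) -> B) -> (A) -> Y {
    return { f in { a in g(f(a)) } }
}

let intToString: (Int) -> String = { String($0) }                    // A -> B
let intToLength = mapOutput { (s: String) in s.count }(intToString)  // A -> Y
let n = intToLength(1234)
// n == 4 ("1234" has four characters)
```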
If I have an A -> B function and I want to make from it something that takes Xs instead, I need to map an X -> A function over its input. This is known as contravariance, because the thing I want to change (the A in A -> B) "varies against" the function I map over it (X -> A). The A is said to be in a negative position in A -> B.
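And a matching sketch of the contravariant case (mapInput is a hypothetical name for "map over the argument"; here the reversal shows up, since mapping X -> A over an A -> B yields an X -> B):

```swift
// Pre-composition changes the input type: (X -> A) lifts (A -> B) to (X -> B).
func mapInput<X, A, B>(_ g: @escaping (X) -> A) -> ((A) -> B) -> (X) -> B {
    return { f in { x in f(g(x)) } }
}

let stringLength: (String) -> Int = { $0.count }                    // A -> B
let boolLength = mapInput { (b: Bool) in String(b) }(stringLength)  // X -> B
let m = boolLength(true)
// m == 4 ("true" has four characters)
```

The question's mapping is exactly this pattern applied to the second slot of a reducer.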
1 Higher-order programming means that we can be transforming a system to change an input within the output of something that's an input to an output of our system...! The terms "negative position" and "positive position" help hint that the negative of a negative is a positive, etc.