Haskell has a magical function named seq, which takes an argument of any type and reduces it to Weak Head Normal Form (WHNF).
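For concreteness, here is how it behaves (a GHCi session of my own; output abbreviated):

    ghci> seq (1 + 2 :: Int) "done"
    "done"
    ghci> seq undefined "done"
    *** Exception: Prelude.undefined
    ghci> seq (undefined : undefined) "done"   -- the (:) constructor is already WHNF
    "done"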
I've read a couple of sources [not that I can remember who they were now...] which claim that "polymorphic seq is bad". In what way is it "bad"?
Similarly, there is the rnf function, which reduces an argument to Normal Form (NF). But this is a class method; it does not work for arbitrary types. It seems "obvious" to me that one could alter the language spec to provide this as a built-in primitive, similar to seq. This, presumably, would be "even more bad" than just having seq. In what way is this so?
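For reference, the class-based rnf lives in the deepseq package; slightly simplified, the class and a couple of its instances look like this:

    class NFData a where
        rnf :: a -> ()

    instance NFData Int where
        rnf x = x `seq` ()

    instance NFData a => NFData [a] where
        rnf []     = ()
        rnf (x:xs) = rnf x `seq` rnf xs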
Finally, somebody suggested that giving seq, rnf, par and similar functions the same type as the id function, rather than the const function as they have now, would be an improvement. How so?
As far as I know, a polymorphic seq function is bad because it weakens free theorems or, in other words, some equalities that are valid without seq are no longer valid with seq. For example, the equality
map g (f xs) = f (map g xs)
holds for all functions g :: tau -> tau', all lists xs :: [tau], and all polymorphic functions f :: [a] -> [a]. Basically, this equality states that f can only reorder the elements of its argument list, or drop or duplicate elements, but cannot invent new elements.
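For instance, taking f = reverse and g = (+1) (my own example), both sides compute the same list:

    map (+1) (reverse [1,2,3])   -- [4,3,2]
    reverse (map (+1) [1,2,3])   -- [4,3,2]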
To be honest, it can invent elements, as it could "insert" a non-terminating computation/run-time error into the list, since the type of an error is polymorphic. That is, this equality already breaks in a programming language like Haskell, even without seq. The following function definitions provide a counterexample to the equation. Basically, on the left-hand side, g "hides" the error.
    g _ = True
    f _ = [undefined]
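Spelling out both sides for an arbitrary xs (my own working):

    map g (f xs) = map g [undefined] = [g undefined] = [True]
    f (map g xs) = [undefined]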
In order to fix the equation, g has to be strict; that is, it has to map an error to an error. In this case, the equality holds again.
If you add a polymorphic seq operator, the equation breaks again; for example, the following instantiation is a counterexample.
    g True = True
    f (x:y:_) = [seq x y]
If we consider the list xs = [False, True], we have
map g (f [False, True]) = map g [True] = [True]
but, on the other hand
f (map g [False, True]) = f [undefined, True] = [undefined]
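Written out as a small program you can load into GHCi (my transcription of the counterexample above; output abbreviated):

    g :: Bool -> Bool
    g True = True          -- g False is a pattern-match error, i.e. undefined

    f :: [a] -> [a]
    f (x:y:_) = [seq x y]

    -- ghci> map g (f [False, True])
    -- [True]
    -- ghci> f (map g [False, True])
    -- [*** Exception: ... Non-exhaustive patterns in function g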
That is, you can use seq to make the element at a certain position of the list depend on the definedness of another element in the list. The equality holds again if g is strict and total. If you are interested in free theorems, check out the free theorem generator, which allows you to specify whether you are considering a language with errors or even a language with seq.
Although this might seem to be of little practical relevance, seq breaks some transformations that are used to improve the performance of functional programs; for example, foldr/build fusion fails in the presence of seq. If you are interested in more details about free theorems in the presence of seq, take a look into Free Theorems in the Presence of seq.
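To sketch why fusion fails (a standard counterexample, written out in my own words): the foldr/build rule rewrites foldr k z (build g) to g k z, but with seq the two sides can differ.

    import GHC.Exts (build)

    gBad :: (Int -> b -> b) -> b -> b
    gBad c n = c `seq` n

    -- build gBad = (:) `seq` []  =  []          ((:) is already in WHNF)
    -- hence:  foldr undefined 0 (build gBad) = foldr undefined 0 [] = 0
    -- but the fused version diverges:
    --         gBad undefined 0  =  undefined `seq` 0  =  undefined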
As far as I know, it was already known that a polymorphic seq breaks certain transformations when it was added to the language. However, the alternatives have disadvantages as well. If you add a type-class-based seq, you might have to add lots of type class constraints to your program if you add a seq somewhere deep down. Furthermore, omitting seq was not an option either, as it was already known that there are space leaks that can be fixed using seq.
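To illustrate the constraint-propagation problem (Seq and seq' are hypothetical names here, not actual Haskell):

    class Seq a where
        seq' :: a -> b -> b

    -- one seq' deep inside a definition...
    sumStrict :: (Num a, Seq a) => a -> [a] -> a
    sumStrict acc []     = acc
    sumStrict acc (x:xs) = let acc' = acc + x in acc' `seq'` sumStrict acc' xs

    -- ...and every caller now has to carry the constraint, too:
    mean :: (Fractional a, Seq a) => [a] -> a
    mean xs = sumStrict 0 xs / fromIntegral (length xs)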
Finally, I might be missing something, but I don't see how a seq operator of type a -> a would work. The point of seq is that it evaluates an expression to weak head normal form if another expression is evaluated to weak head normal form. If seq has type a -> a, there is no way of making the evaluation of one expression depend on the evaluation of another expression.
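For illustration: about the best you could do at that type is a strict identity, which is denotationally just id.

    force :: a -> a          -- a hypothetical seq at type a -> a
    force x = x `seq` x      -- but seq x x = x for every x, so force = id

    -- there is no second argument whose evaluation could be made to depend
    -- on x; that is exactly what the type a -> b -> b provides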