Suppose I write in GHCi:
GHCi> let x = 1 + 2 :: Integer
GHCi> seq x ()
GHCi> :sprint x
GHCi prints x = 3, as naturally expected.
However,
GHCi> let x = 1 + 2
GHCi> seq x ()
GHCi> :sprint x
yields x = _
The sole difference between the two expressions is their type (Integer vs. Num a => a). My question is: what exactly happens here, and why does x seemingly not get evaluated in the latter example?
The main issue is that

let x = 1 + 2

defines a polymorphic value of type forall a. Num a => a, and such a value evaluates much like a function.
Each use of x can be made at a different type, e.g. x :: Int, x :: Integer, x :: Double, and so on. These results are not "cached" in any way, but recomputed every time, as if x were a function being called repeatedly, so to speak.
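This recomputation can be observed directly. The sketch below (hypothetical names, compiled without optimizations, which could otherwise share or specialize the value) uses trace from Debug.Trace to print a message each time the value is actually computed:

```haskell
import Debug.Trace (trace)

-- Polymorphic: each use at some type recomputes the value.
xPoly :: Num a => a
xPoly = trace "computing xPoly" (1 + 2)

-- Monomorphic: a plain value, shared after the first evaluation.
xMono :: Integer
xMono = trace "computing xMono" (1 + 2)

main :: IO ()
main = do
  print (xPoly :: Integer)  -- "computing xPoly" is printed
  print (xPoly :: Integer)  -- typically printed again: recomputed
  print xMono               -- "computing xMono" is printed
  print xMono               -- not printed again: the result was shared
```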
Indeed, a common implementation of type classes compiles such a polymorphic x into a function

x :: NumDict a -> a

where the NumDict a argument is added by the compiler automatically, and carries the information that a is a Num type: how to perform addition, how to interpret integer literals at type a, and so on. This is called the "dictionary-passing" implementation.
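Here is a hand-rolled sketch of that translation. The names NumDict, numAdd and numFromInteger are made up for illustration; GHC's actual dictionaries are internal:

```haskell
-- The dictionary the compiler passes implicitly for a Num constraint
-- (only the two methods we need here).
data NumDict a = NumDict
  { numAdd         :: a -> a -> a
  , numFromInteger :: Integer -> a
  }

-- The dictionary witnessing that Int is a Num type.
intDict :: NumDict Int
intDict = NumDict (+) fromInteger

-- The polymorphic  x = 1 + 2  is compiled roughly to this function:
x :: NumDict a -> a
x d = numAdd d (numFromInteger d 1) (numFromInteger d 2)

-- Every use of  x :: Int  then becomes a call  x intDict ,
-- redoing the addition each time.
example :: Int
example = x intDict + x intDict
```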
So, using a polymorphic x multiple times indeed corresponds to invoking a function multiple times, causing recomputation. To avoid this, the (dreaded) Monomorphism Restriction was introduced in Haskell, forcing x to be monomorphic instead. The MR is not a perfect solution, and it can create surprising type errors in certain cases.
To alleviate this issue, the MR is disabled by default in GHCi, since in GHCi we don't care that much about performance -- usability is more important there. This however causes the recomputation to reappear, as you discovered.
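A classic example of such a surprise, in a compiled module where the MR is on by default (in GHCi you could reproduce it with :set -XMonomorphismRestriction):

```haskell
{-# LANGUAGE MonomorphismRestriction #-}  -- the default in modules anyway
import Data.List (genericLength)

-- genericLength :: Num i => [a] -> i, but under the MR this binding
-- must be monomorphic, so the Num constraint is defaulted to Integer:
len = genericLength "abc"

main :: IO ()
main = do
  print (len + 1 :: Integer)
  -- print (len :: Double)  -- rejected: len was already fixed at Integer
```

With the MR disabled, len would stay polymorphic and both uses would be accepted, at the cost of recomputing the length at each use.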