Apparently it is possible to implement Haskell such that it evaluates eagerly without changing the semantics of the language at all. If that is true, how are infinite data structures handled?
http://csg.csail.mit.edu/pubs/haskell.html
From that page:

> Therefore, a great deal of time is spent creating and destroying suspended pieces of computation (thunks). All too often, these computations are simple enough that it would be just as easy to evaluate them instead. Faxen and others have used static analysis to expose such opportunities for eagerness. We instead propose using eagerness everywhere, while using mechanisms which permit us to recover if our program is too eager.
The key thing there being "we have mechanisms to recover if our program is too eager". What are these mechanisms? How do they permit infinite data structures and the other aspects of lazy evaluation that I've been led to believe are impossible in an eager language?
That's not quite true: you can evaluate Haskell terms eagerly if you can prove that they won't diverge.
For instance, when you see `f x`, you could first execute up to 1,000 steps of `x` (stopping if you reach WHNF (weak head normal form) or an error), and then pass it to `f`: the semantics are preserved, but if you think `x` is going to be evaluated anyway, you can arrange for that to happen ahead of time as an optimisation.
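As a very loose sketch of that idea in ordinary GHC Haskell (the name `speculate` is mine, and I'm using a wall-clock budget via `System.Timeout` where a real implementation like the paper's counts reduction steps inside the runtime):

```haskell
{-# LANGUAGE ScopedTypeVariables #-}

import Control.Exception (SomeException, evaluate, try)
import System.Timeout (timeout)

-- Speculatively force a value to WHNF, recovering if it errors or
-- exceeds the budget. NB: 'timeout' can't interrupt a tight
-- non-allocating loop, so this only approximates a step-counting runtime.
speculate :: Int -> a -> IO a
speculate micros x = do
  r <- timeout micros (try (evaluate x))
  case r of
    Just (Right v)                   -> return v  -- reached WHNF in time
    Just (Left (_ :: SomeException)) -> return x  -- errored: hand back the thunk
    Nothing                          -> return x  -- over budget: hand back the thunk
```

Handing back `x` unchanged is the recovery mechanism: if the caller never demands it, the error or divergence never surfaces, just as under lazy evaluation.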
If `x = fix id`, it'll just give up after 1,000 steps of not going anywhere. If `x = undefined`, it'll cause an error and give up (restoring the original thunk and passing it to `f`). And so on.
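Hypothetically, with the `speculate` sketch above, those cases look like this:

```haskell
-- Hypothetical usage of the speculate sketch:
demo :: IO ()
demo = do
  a <- speculate 1000 (2 + 2 :: Int)      -- cheap: comes back fully evaluated
  b <- speculate 1000 (undefined :: Int)  -- errors inside speculate: thunk kept
  print a             -- prints 4
  print (length [b])  -- prints 1; 'b' is never demanded, so no error fires,
                      -- exactly as under lazy evaluation
```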
If `x = [1..]`, then it might end up reducing it to `1 : 2 : 3 : ... : 999 : 1000 : [1001..]`, reach the limit, and stop there, passing the result to `f`.
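As a rough model of that partial reduction (again my own name, not the paper's mechanism), one can force a bounded prefix of a list's spine while leaving the tail as an ordinary lazy thunk:

```haskell
import Control.Exception (evaluate)

-- Force at most n cons cells of a possibly infinite list, leaving the
-- tail unevaluated: roughly the 1 : 2 : ... : 1000 : [1001..] shape.
forceSpine :: Int -> [a] -> IO [a]
forceSpine n xs = do
  _ <- evaluate (length (take n xs))  -- walking take's result forces n cells of xs
  return xs
```

For example, `forceSpine 1000 [1..]` evaluates the first 1,000 cells and returns the whole list with `[1001..]` still suspended.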
Basically: by either proving that an expression cannot diverge, or bounding its evaluation so that it terminates even if the expression does not. No change to the semantics, and possibly a big improvement to performance.
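To illustrate the "proving" half (my own toy examples, not from the paper), both of these justify eager evaluation with no budget at all:

```haskell
-- The argument provably terminates, so forcing it early is harmless:
safeArg :: Int
safeArg = 2 + 3 * 4

-- The function is strict: plusOne x diverges exactly when x does, so
-- evaluating x first can never change the result. This is the kind of
-- property the static analyses cited in the quoted abstract look for.
plusOne :: Int -> Int
plusOne x = x + 1
```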
Of course, the downside is that if `x` turns out to be some really expensive computation that `f` only uses half of, you'll spend 1,000 reduction steps wasting time. And in the case of `[1..]`, you could end up using a lot of memory to store the list: if `f` processes it one element at a time, throwing away the head each time, the lazy version runs in constant space while the eager one has materialised 1,000 cells up front. That's why it's tricky :)
The page you originally linked goes into more detail as to the specific techniques used.