I was reading a Haskell tutorial (Learn You a Haskell) in which the author said that laziness goes well with referential transparency. After more reading and some searching, I still don't understand why. Note that I do understand what's great about referential transparency and laziness, but it's them together that's troubling me.
Is there any special benefit of the combination of the two?
Or maybe the author just wanted to say they're both nice to have and expressed that ambiguously?
It's really easy. Non-strict (e.g. lazy) evaluation means that work can be postponed. But in order to postpone something, you had better be sure that you will get the same result later as you would get now, and that is exactly referential transparency. Consider this imperative Java code:
long start = System.currentTimeMillis(); //get the start time
runBenchmarkFunction();
System.out.println("Run took " + (System.currentTimeMillis() - start) + " ms");
Now a lazy language would postpone the evaluation of the first line, because start is only needed in the third line. So the result would be 0 ms (or very close to it). That's probably not what you want. The cause of the trouble is that System.currentTimeMillis is not referentially transparent. You wouldn't have this problem if it were a function in the "mathematical sense", like sin or ln, which are referentially transparent.
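In Haskell this distinction is built into the types: side-effecting actions such as reading the clock live in IO and are sequenced explicitly, while pure values are just thunks that can be forced whenever they happen to be needed. A minimal sketch of the pure case (the function name area is only for illustration):

area :: Double -> Double
area r = pi * r * r              -- pure "mathematical" function, like sin or ln

main :: IO ()
main = do
  let a = area 2.0               -- not evaluated yet: just a thunk
  putStrLn ("Area: " ++ show a)  -- forced only here; the result is the same either way

Because area always returns the same value for the same argument, the compiler is free to evaluate a now, later, or never, and the program's meaning does not change.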
Referential transparency means that a function will always return the same output given the same input. So it does not matter whether the function is lazy or strict. A lazy function will compute its output at some unknown time in the future, but thanks to referential transparency you are guaranteed that the output will always be the same for the given inputs.
So in a way, referential transparency assures the correctness of lazy functions.
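A small Haskell sketch of that guarantee (the name expensive is just an example of a deferred computation):

import Data.List (foldl')

expensive :: Integer
expensive = foldl' (+) 0 [1..1000000]   -- same output for the same input, always

main :: IO ()
main = do
  let x = expensive                     -- postponed: x is only a thunk for now
  putStrLn "doing other work first..."
  print x                               -- forced only here; whenever it runs, the value is identical

No matter when (or how many times) x is finally demanded, referential transparency guarantees it denotes the same value, which is what makes postponing it safe.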