In trying to learn Haskell, I have implemented a pi calculation in order to understand functions and recursion properly.
Using the Leibniz formula for pi, I came up with the following, which returns pi to within the given tolerance, along with the number of recursive calls it took to get that value:
reverseSign :: (Fractional a, Ord a) => a -> a
reverseSign num = ((if num > 0
                      then -1
                      else 1) * (abs(num) + 2))
piCalc :: (Fractional a, Integral b, Ord a) => a -> (a, b)
piCalc tolerance = piCalc' 1 0.0 tolerance 0
piCalc' :: (Ord a, Fractional a, Integral b) => a -> a -> a -> b -> (a, b)
piCalc' denom prevPi tolerance count = if abs(newPi - prevPi) < tolerance
                                          then (newPi, count)
                                          else piCalc' (reverseSign denom) newPi tolerance (count + 1)
  where newPi = prevPi + (4 / denom)
So when I run this in GHCi, it seems to work as expected:
*Main> piCalc 0.001
(3.1420924036835256,2000)
But if I set my tolerance too fine, this happens:
*Main> piCalc 0.0000001
(3.1415927035898146,*** Exception: stack overflow
This seems wholly counter-intuitive to me; the actual calculation works fine, but just trying to print how many recursive calls fails??
Why is this so?
The count isn't ever evaluated during the computation, so it's left as a huge chain of thunks (overflowing the stack) until the very end. You can force its evaluation during the computation by enabling the BangPatterns extension and writing piCalc' denom prevPi tolerance !count = ...

So why do we only need to force the evaluation of count? Well, all the other arguments are evaluated in the if. We actually need to inspect them all before calling piCalc' again, so thunks aren't building up; we need the actual values, not just "promises that they can be computed"! count, on the other hand, is never needed during the computation, and can remain as a chain of thunks until the very end.
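To make that concrete, here is a minimal sketch of the corrected program with the bang pattern in place (same algorithm as the question's code, just with guards instead of if/then/else; the only semantic change is the ! on count):

```haskell
{-# LANGUAGE BangPatterns #-}

reverseSign :: (Fractional a, Ord a) => a -> a
reverseSign num = (if num > 0 then -1 else 1) * (abs num + 2)

piCalc :: (Fractional a, Integral b, Ord a) => a -> (a, b)
piCalc tolerance = piCalc' 1 0.0 tolerance 0

-- The bang on 'count' forces it to a value on every recursive call,
-- so no chain of (+1) thunks can pile up and overflow the stack.
piCalc' :: (Ord a, Fractional a, Integral b) => a -> a -> a -> b -> (a, b)
piCalc' denom prevPi tolerance !count
  | abs (newPi - prevPi) < tolerance = (newPi, count)
  | otherwise = piCalc' (reverseSign denom) newPi tolerance (count + 1)
  where newPi = prevPi + (4 / denom)
```

With the strict accumulator, piCalc 0.001 still gives (3.1420924036835256,2000), and the finer tolerances that previously overflowed now run in constant stack space.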