I've been trying to learn Haskell by building short programs. I'm somewhat new to the functional programming world but have already done a good amount of reading.
I have a relatively short recursive function in Haskell that uses Newton's method to find roots of a function, up to the precision allowed by floating-point numbers:
newtonsMethod :: (Ord a, Num a, Fractional a) => (a -> a) -> (a -> a) -> a -> a
newtonsMethod f f' x
    | f x < epsilon = x
    | otherwise =
        newtonsMethod f f' (x - (f x / f' x))
    where
      epsilon = last . map (subtract 1) . takeWhile (/= 1)
              . map (+ 1) . iterate (/2) $ 1
When I load this in GHCi and evaluate newtonsMethod (\ x -> cos x + 0.2) (\ x -> -1 * sin x) (-1), I get -1.8797716370899549, which is only the first iteration of Newton's method for those inputs.
My first question is straightforward: why does it only recurse once? Please also let me know if you see any potential improvements to the way this code is structured or flagrant mistakes.
My second question, a little more involved, is this: is there some clean way to test parent calls of this function, see if it's failing to converge, and bail out accordingly?
Thanks in advance for any answer you can give!
It runs only once because f (-1.8797716370899549) is negative, and therefore less than epsilon, a strictly positive quantity. You want to check whether the absolute value of f x is within the tolerance, not f x itself.
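A minimal sketch of that fix, keeping your definition otherwise as written (only the first guard changes):

newtonsMethod :: (Ord a, Num a, Fractional a) => (a -> a) -> (a -> a) -> a -> a
newtonsMethod f f' x
    | abs (f x) < epsilon = x            -- compare |f x| against the tolerance
    | otherwise           = newtonsMethod f f' (x - (f x / f' x))
    where
      epsilon = last . map (subtract 1) . takeWhile (/= 1)
              . map (+ 1) . iterate (/2) $ 1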
One way to get convergence diagnostics for this kind of code is to generate your results as a lazy list, not unlike how you found epsilon using iterate. That means you can get your final result by traversing the list, but you can also see it in the context of the results that lead up to it.
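For instance, here is a sketch of that idea applied to your original function (the name newtonIterates is mine, not from your code): each element of the list is one Newton step, and nothing in the producer decides when to stop.

-- Every Newton iterate as a lazy list; the caller chooses how much to inspect.
newtonIterates :: Fractional a => (a -> a) -> (a -> a) -> a -> [a]
newtonIterates f f' = iterate step
  where
    step x = x - f x / f' x

You could then write, say, take 20 (newtonIterates (\ x -> cos x + 0.2) (\ x -> -1 * sin x) (-1)) to eyeball the first twenty iterates.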
I couldn't help rewriting it co-recursively and using automatic differentiation. Of course, one should really use the AD package: http://hackage.haskell.org/package/ad. Then you don't have to calculate the derivative yourself, and you can watch the method converge.
-- A value paired with its derivative: a minimal forward-mode AD type.
data Dual = Dual Double Double
    deriving (Eq, Ord, Show)

constD :: Double -> Dual
constD x = Dual x 0

idD :: Double -> Dual
idD x = Dual x 1.0

instance Num Dual where
    fromInteger n             = constD $ fromInteger n
    (Dual x x') + (Dual y y') = Dual (x + y) (x' + y')
    (Dual x x') * (Dual y y') = Dual (x * y) (x * y' + y * x')
    negate (Dual x x')        = Dual (negate x) (negate x')
    signum _                  = undefined
    abs _                     = undefined

instance Fractional Dual where
    fromRational p    = constD $ fromRational p
    recip (Dual x x') = Dual (1.0 / x) (-x' / (x * x))

instance Floating Dual where
    pi                = constD pi
    exp   (Dual x x') = Dual (exp x)   (x' * exp x)
    log   (Dual x x') = Dual (log x)   (x' / x)
    sqrt  (Dual x x') = Dual (sqrt x)  (x' / (2 * sqrt x))
    sin   (Dual x x') = Dual (sin x)   (x' * cos x)
    cos   (Dual x x') = Dual (cos x)   (x' * (- sin x))
    sinh  (Dual x x') = Dual (sinh x)  (x' * cosh x)
    cosh  (Dual x x') = Dual (cosh x)  (x' * sinh x)
    asin  (Dual x x') = Dual (asin x)  (x' / sqrt (1 - x*x))
    acos  (Dual x x') = Dual (acos x)  (x' / (-sqrt (1 - x*x)))
    atan  (Dual x x') = Dual (atan x)  (x' / (1 + x*x))
    asinh (Dual x x') = Dual (asinh x) (x' / sqrt (1 + x*x))
    acosh (Dual x x') = Dual (acosh x) (x' / (sqrt (x*x - 1)))
    atanh (Dual x x') = Dual (atanh x) (x' / (1 - x*x))

-- Co-recursive: the list of iterates is defined in terms of itself,
-- and the derivative comes from evaluating f on a Dual.
newtonsMethod' :: (Dual -> Dual) -> Double -> [Double]
newtonsMethod' f x = zs
    where
      zs = x : map g zs
      g y = y - a / b
          where
            Dual a b = f $ idD y

epsilon :: (Eq a, Fractional a) => a
epsilon = last . map (subtract 1) . takeWhile (/= 1)
        . map (+ 1) . iterate (/2) $ 1
This gives the following:
*Main> take 10 $ newtonsMethod' (\x -> cos x + 0.2) (-1)
[-1.0,
-1.8797716370899549,
-1.770515242616871,
-1.7721539749707398,
-1.7721542475852199,
-1.7721542475852274,
-1.7721542475852274,
-1.7721542475852274,
-1.7721542475852274,
-1.7721542475852274]
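As for your second question, one possible way to consume such a list and bail out is a small helper along these lines (converge is a hypothetical name, not part of the code above): succeed when two successive iterates agree to within a tolerance, and give up after a maximum number of steps.

-- Hypothetical helper: Just the converged value, or Nothing if the first
-- maxSteps successive pairs never agree to within tol.
converge :: (Ord a, Num a) => a -> Int -> [a] -> Maybe a
converge tol maxSteps xs =
    case dropWhile (\(a, b) -> abs (a - b) >= tol) pairs of
      ((_, y) : _) -> Just y
      []           -> Nothing
  where
    pairs = take maxSteps (zip xs (tail xs))

Applied to the list above with, say, converge 1e-12 100 (newtonsMethod' (\x -> cos x + 0.2) (-1)), this returns Just (-1.7721542475852274); if the iteration were failing to converge you would get Nothing back instead of looping forever.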