I calculate the first derivative using the following code:

import numpy as np

def f(x):
    return np.exp(x)

def dfdx(x, h):
    return (f(x + h) - f(x - h)) / (2 * h)
For example, for x == 10 this works fine. But when I set h to around 10E-14 or below, Df starts to take values that are really far away from the expected value f(10), and the relative error between the expected value and Df becomes huge. Why is that? What is happening here?
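For concreteness, a short sweep over h using the definitions above (a sketch; the exact figures vary by machine, but the pattern is typical):

x = 10.0
for h in [1e-2, 1e-5, 1e-8, 1e-11, 1e-14]:
    # f'(x) = f(x) for the exponential, so f(x) is the exact derivative
    rel_err = abs(dfdx(x, h) - f(x)) / f(x)
    print(f"h = {h:.0e}: relative error = {rel_err:.1e}")

The error is smallest somewhere around h = 1e-5 and grows rapidly as h shrinks further.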
The evaluation of f(x) has, at best, a rounding error of |f(x)|*mu, where mu is the machine epsilon of the floating-point type. The total error of the central difference formula is thus approximately

2*|f(x)|*mu/(2*h) + |f'''(x)|/6 * h^2

In the present case the exponential function is equal to all of its derivatives, so the relative error is proportional to

mu/h + h^2/6

which has its minimum where the derivative with respect to h vanishes, at h = (3*mu)^(1/3); for the double format with mu = 1e-16 this is around h = 1e-5.
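This model is easy to check numerically. A minimal sketch (assuming NumPy; mu is taken from np.finfo, and the measured error fluctuates with rounding, but the orders of magnitude should match the prediction):

import numpy as np

mu = np.finfo(float).eps  # about 2.2e-16 for doubles

x = 10.0
exact = np.exp(x)  # f'(x) = exp(x)
print(f"predicted optimum: h = {(3 * mu) ** (1 / 3):.1e}")
for h in 10.0 ** -np.arange(2.0, 9.0):
    approx = (np.exp(x + h) - np.exp(x - h)) / (2 * h)
    measured = abs(approx - exact) / exact
    predicted = mu / h + h ** 2 / 6  # relative error model from above
    print(f"h = {h:.0e}  measured = {measured:.1e}  predicted = {predicted:.1e}")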
The precision is increased if, instead of 2*h, the actual difference (x+h)-(x-h) between the evaluation points is used in the denominator. This can be seen in a loglog plot of the distance to the exact derivative.
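A sketch of that variant for comparison (again assuming NumPy; the size of the gain depends on x and h):

import numpy as np

def df_naive(x, h):
    return (np.exp(x + h) - np.exp(x - h)) / (2 * h)

def df_actual_step(x, h):
    # Divide by the difference that was actually formed in floating point,
    # not by the nominal step 2*h.
    dx = (x + h) - (x - h)
    return (np.exp(x + h) - np.exp(x - h)) / dx

x = 10.0
exact = np.exp(x)
for h in [1e-5, 1e-8, 1e-11]:
    e1 = abs(df_naive(x, h) - exact) / exact
    e2 = abs(df_actual_step(x, h) - exact) / exact
    print(f"h = {h:.0e}  2*h denominator: {e1:.1e}  actual step: {e2:.1e}")

Since x+h and x-h are each rounded to the nearest double, (x+h)-(x-h) gives the exact distance between the points that were actually evaluated, which removes one source of error.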