I am writing a small piece of code to calculate the fourth derivative using the method of finite differences in TensorFlow. It is as follows:
import tensorflow as tf
import matplotlib.pyplot as plt

tf.compat.v1.disable_eager_execution()  # Session requires graph mode under TF 2.x

def action(y, x):
    # spacing between points
    h = (x[-1] - x[0]) / (int(x.shape[0]) - 1)
    # five-point central difference for the fourth derivative
    dy4 = (y[4:] - 4*y[3:-1] + 6*y[2:-2] - 4*y[1:-3] + y[:-4]) / (h*h*h*h)
    return dy4

x = tf.linspace(0.0, 30, 1000)
y = tf.tanh(x)
dy4 = action(y, x)
sess = tf.compat.v1.Session()
plt.plot(sess.run(dy4))
This results in the following graph, which is dominated by noise:
However, if I use essentially the same code with NumPy, the results are much cleaner:
import numpy as np

def fourth_deriv(y, x):
    h = (x[-1] - x[0]) / (int(x.shape[0]) - 1)
    dy = (y[4:] - 4*y[3:-1] + 6*y[2:-2] - 4*y[1:-3] + y[:-4]) / (h*h*h*h)
    return dy

x = np.linspace(0.0, 30, 1000)
test = fourth_deriv(np.tanh(x), x)
plt.plot(test)
Which gives a smooth curve:
What is the issue here? At first I thought the separation between points might be too small to give an accurate computation, but clearly that's not the case, since NumPy handles the same spacing just fine.
TensorFlow calculates derivatives using automatic differentiation, which is different from both symbolic differentiation and numeric differentiation (a.k.a. finite differences). More than a clever mathematical technique, it is a clever programming technique.
To differentiate automatically, TensorFlow needs to remember which operations happened, and in what order, during the forward pass. Then, during the backward pass, TensorFlow traverses this list of operations in reverse order to compute gradients.
gradients() adds ops to the graph to output the derivatives of ys with respect to xs. It returns a list of Tensor of length len(xs), where each tensor is the sum(dy/dx) for y in ys and for x in xs. grad_ys is a list of tensors of the same length as ys that holds the initial gradients for each y in ys.
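As an illustration of that API, here is a minimal sketch of gradients() in graph mode (assuming TF 2.x with the v1 compatibility layer; under TF 1.x you would call tf.gradients directly):

import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # gradients() builds graph ops

x = tf.compat.v1.placeholder(tf.float64, shape=(3,))
y = tf.tanh(x)
# one tensor per entry of xs, each holding sum(dy/dx);
# for elementwise tanh this is sech(x)**2
grads = tf.compat.v1.gradients(ys=[y], xs=[x])

with tf.compat.v1.Session() as sess:
    print(sess.run(grads, feed_dict={x: [0.0, 1.0, 2.0]}))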
The issue is related to the choice of floating-point types. tf.linspace automatically selects tf.float32 as its dtype, while np.linspace creates a float64 array, which has much more precision. Making the following modification:
start = tf.constant(0.0, dtype=tf.float64)
end = tf.constant(30.0, dtype=tf.float64)
x = tf.linspace(start, end, 1000)
causes a smooth plot to appear:
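To see why single precision falls apart here, note that the stencil divides differences of y by h**4: with h = 30/999, h**4 is about 8e-7, so rounding error of order machine epsilon in y gets amplified by roughly a factor of a million. A quick back-of-the-envelope check (plain NumPy, nothing TensorFlow-specific):

import numpy as np

h = 30 / 999  # the grid spacing used above
print('h**4 =', h**4)
for dtype in (np.float32, np.float64):
    eps = np.finfo(dtype).eps
    # the stencil divides differences of y by h**4, so rounding error
    # of order eps in y shows up as noise of order eps / h**4
    print(dtype.__name__, 'eps:', eps, '-> noise scale:', eps / h**4)

For float32 this predicts noise of order 0.1, consistent with the noisy plot above, while for float64 the round-off contribution (around 3e-10) stays far below the stencil's truncation error.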
It's worth noting further that TensorFlow includes automatic differentiation, which is crucial for machine-learning training and is hence well tested. You can access it through gradient tapes and evaluate a fourth derivative without the imprecision of numeric differentiation by finite differences:
with tf.compat.v1.Session() as sess2:
    # reuse the float64 start/end constants from above so the tape
    # computation runs at full precision
    x = tf.Variable(tf.linspace(start, end, 1000))
    sess2.run(tf.compat.v1.global_variables_initializer())
    with tf.GradientTape() as t4:
        with tf.GradientTape() as t3:
            with tf.GradientTape() as t2:
                with tf.GradientTape() as t1:
                    y = tf.tanh(x)
                der1 = t1.gradient(y, x)  # first derivative
            der2 = t2.gradient(der1, x)   # second derivative
        der3 = t3.gradient(der2, x)       # third derivative
    der4 = t4.gradient(der3, x)           # fourth derivative
    print(der4)
    plt.plot(sess2.run(der4))
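For what it's worth, under TF 2.x's default eager execution the same nested-tape computation needs no Session or variable initialization at all (a minimal sketch, assuming TF 2.x):

import tensorflow as tf
import matplotlib.pyplot as plt

x = tf.Variable(tf.linspace(tf.constant(0.0, tf.float64),
                            tf.constant(30.0, tf.float64), 1000))
with tf.GradientTape() as t4:
    with tf.GradientTape() as t3:
        with tf.GradientTape() as t2:
            with tf.GradientTape() as t1:
                y = tf.tanh(x)
            der1 = t1.gradient(y, x)  # each tape watches the trainable Variable x
        der2 = t2.gradient(der1, x)
    der3 = t3.gradient(der2, x)
der4 = t4.gradient(der3, x)
plt.plot(der4.numpy())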
The accuracy of this method is far better than what finite difference methods can achieve. The following code compares the accuracy of autodiff with that of the finite-difference method:
x = np.linspace(0.0, 30, 1000)
sech = 1 / np.cosh(x)
# analytic fourth derivative of tanh(x): 16*tanh*sech^4 - 8*tanh^3*sech^2
theoretical = 16*np.tanh(x)*np.power(sech, 4) - 8*np.power(np.tanh(x), 3)*np.power(sech, 2)
# from_finite_diff and from_autodiff hold the results of the two snippets
# above (the float64 finite-difference output and sess2.run(der4))
finite_diff_err = theoretical[2:-2] - from_finite_diff
autodiff_err = theoretical[2:-2] - from_autodiff[2:-2]
print('Max err with autodiff: %s' % np.max(np.abs(autodiff_err)))
print('Max err with finite difference: %s' % np.max(np.abs(finite_diff_err)))
line, = plt.plot(np.log10(np.abs(autodiff_err)))
line.set_label('Autodiff log error')
line2, = plt.plot(np.log10(np.abs(finite_diff_err)))
line2.set_label('Finite difference log error')
plt.legend()
and yields the following output:
Max err with autodiff: 3.1086244689504383e-15
Max err with finite difference: 0.007830900165363808
and the following plot (the two lines overlap after around 600 on the X-axis):
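As a sanity check on the closed-form expression used for theoretical above, the formula can be verified symbolically (a sketch, assuming SymPy is installed):

import sympy as sp

t = sp.symbols('t')
fourth = sp.diff(sp.tanh(t), t, 4)
closed_form = 16*sp.tanh(t)*sp.sech(t)**4 - 8*sp.tanh(t)**3*sp.sech(t)**2
# rewriting both in exponentials lets simplify() cancel them to exactly 0
print(sp.simplify((fourth - closed_form).rewrite(sp.exp)))  # -> 0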
Your problem is that the default float precision in TensorFlow is 32 bits, while the default in NumPy is 64 bits. A simple solution is to replace your linspace as follows:
x = tf.linspace(tf.constant(0.0, dtype='float64'),
                tf.constant(30, dtype='float64'), 1000)
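Putting that together with the original action function, a minimal end-to-end sketch (assuming TF 2.x in eager mode, so no Session is needed):

import tensorflow as tf
import matplotlib.pyplot as plt

def action(y, x):
    h = (x[-1] - x[0]) / (int(x.shape[0]) - 1)
    return (y[4:] - 4*y[3:-1] + 6*y[2:-2] - 4*y[1:-3] + y[:-4]) / (h*h*h*h)

x = tf.linspace(tf.constant(0.0, dtype='float64'),
                tf.constant(30, dtype='float64'), 1000)
plt.plot(action(tf.tanh(x), x).numpy())  # smooth, float64 throughout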