I want to calculate a Jacobian matrix with TensorFlow.
What I have:
def compute_grads(fn, vars, data_num):
    grads = []
    for n in range(0, data_num):
        for v in vars:
            grads.append(tf.gradients(tf.slice(fn, [n, 0], [1, 1]), v)[0])
    return tf.reshape(tf.stack(grads), shape=[data_num, -1])
fn is a loss function, vars are the trainable variables, and data_num is the number of data points. But if we increase the number of data points, it takes a tremendous amount of time to run compute_grads.
Any ideas?
Assuming that X and Y are TensorFlow tensors and that Y depends on X:
from tensorflow.python.ops.parallel_for.gradients import jacobian
J = jacobian(Y, X)
The result has the shape Y.shape + X.shape and provides the partial derivative of each element of Y with respect to each element of X.
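For instance, here is a minimal sketch of how this could be called, assuming TensorFlow 1.x graph mode (the shapes and the squaring operation are illustrative assumptions, not from the question):

import tensorflow as tf  # assumes TensorFlow 1.x graph mode
from tensorflow.python.ops.parallel_for.gradients import jacobian

X = tf.placeholder(tf.float32, shape=[3])
Y = X * X  # Y depends elementwise on X

J = jacobian(Y, X)  # shape Y.shape + X.shape = [3, 3]

with tf.Session() as sess:
    print(sess.run(J, feed_dict={X: [1.0, 2.0, 3.0]}))
    # [[2. 0. 0.]
    #  [0. 4. 0.]
    #  [0. 0. 6.]]

Since Y = X * X elementwise, the Jacobian is diagonal with 2 * X on the diagonal, which is what the printed result shows.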
Assuming you are using TensorFlow 2, or TensorFlow 1.x with eager execution enabled, you can use tf.GradientTape and its built-in jacobian method:
with tf.GradientTape() as g:
    x = tf.constant([1.0, 2.0])
    g.watch(x)
    y = x * x
jacobian = g.jacobian(y, x)
# jacobian value is [[2., 0.], [0., 4.]]
Check the official documentation for more details.
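Applied to the question's setting, a sketch might look like the following; the model, the per-example loss, and the shapes here are illustrative assumptions, not taken from the question:

import tensorflow as tf  # TensorFlow 2

# Hypothetical model and data, just to illustrate the API.
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
x = tf.random.normal([5, 3])  # data_num = 5 examples

with tf.GradientTape() as g:
    per_example_loss = tf.square(model(x))  # shape [5, 1], one row per example

# One Jacobian per trainable variable; each has shape
# per_example_loss.shape + variable.shape.
jacobians = g.jacobian(per_example_loss, model.trainable_variables)

This replaces the per-example Python loop in compute_grads with a single vectorized jacobian call, which is typically much faster as the number of data points grows.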