Matrix determinant differentiation in TensorFlow

I am interested in computing the derivative of a matrix determinant using TensorFlow. I can see from experimentation that TensorFlow has not implemented a method of differentiating through a determinant:

LookupError: No gradient defined for operation 'MatrixDeterminant' 
(op type: MatrixDeterminant)

A little further investigation revealed that it is actually possible to compute the derivative; see, for example, Jacobi's formula. I determined that, in order to implement this means of differentiating through a determinant, I need to use the function decorator:

@tf.RegisterGradient("MatrixDeterminant")
def _sub_grad(op, grad):
    ...
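
For reference, Jacobi's formula gives the derivative of the determinant with respect to the matrix entries as

\frac{\partial \det(A)}{\partial A_{ij}} = \det(A)\,\bigl(A^{-1}\bigr)_{ji},
\qquad\text{equivalently}\qquad
\frac{\partial \det(A)}{\partial A} = \det(A)\,A^{-T},

so the gradient function should presumably return the incoming gradient scaled by det(A) times the transposed inverse.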

However, I am not familiar enough with TensorFlow to understand how this can be accomplished. Does anyone have any insight on this matter?

Here's an example where I run into this issue:

import tensorflow as tf

x = tf.Variable(tf.ones(shape=[1]))
y = tf.Variable(tf.ones(shape=[1]))

A = tf.reshape(
    tf.pack([tf.sin(x), tf.zeros([1, ]), tf.zeros([1, ]), tf.cos(y)]), (2,2)
)
loss = tf.square(tf.matrix_determinant(A))


optimizer = tf.train.GradientDescentOptimizer(0.001)
train = optimizer.minimize(loss)

init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)


for step in xrange(100):
    sess.run(train)
    print sess.run(x)
asked Nov 14 '15 by user1936768

1 Answer

Please check the "Implement Gradient in Python" section of the TensorFlow documentation.

In particular, you can implement it as follows:

import numpy as np
import tensorflow as tf
from tensorflow.python.framework import ops


@ops.RegisterGradient("MatrixDeterminant")
def _MatrixDeterminantGrad(op, grad):
  """Gradient for MatrixDeterminant.

  Uses formula 2.2.4 from "An extended collection of matrix derivative results
  for forward and reverse mode algorithmic differentiation" by Mike Giles
  -- http://eprints.maths.ox.ac.uk/1079/1/NA-08-01.pdf
  """
  A = op.inputs[0]       # the input matrix
  C = op.outputs[0]      # det(A), already computed in the forward pass
  Ainv = tf.matrix_inverse(A)
  # Jacobi's formula: d det(A)/dA = det(A) * inv(A)^T, scaled by the incoming gradient
  return grad * C * tf.transpose(Ainv)
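
As a quick sanity check, here is a minimal sketch (reusing the imports above and the same TF 0.x-era API) that compares the registered gradient against Jacobi's formula evaluated directly in NumPy:

A0 = np.array([[2., 1.], [1., 3.]], dtype=np.float32)
A = tf.constant(A0)
det = tf.matrix_determinant(A)
# tf.gradients picks up the gradient registered above
dA, = tf.gradients(det, [A])

with tf.Session() as sess:
    tf_grad = sess.run(dA)

# Jacobi's formula evaluated in NumPy: det(A) * inv(A)^T
np_grad = np.linalg.det(A0) * np.linalg.inv(A0).T
print(np.allclose(tf_grad, np_grad))  # should print True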

Then a simple training loop to check that it works:

a0 = np.array([[1,2],[3,4]]).astype(np.float32)
a = tf.Variable(a0)
b = tf.square(tf.matrix_determinant(a))
init_op = tf.initialize_all_variables()
sess = tf.InteractiveSession()
init_op.run()

minimization_steps = 50
learning_rate = 0.001
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
train_op = optimizer.minimize(b)

losses = []
for i in range(minimization_steps):
  train_op.run()
  losses.append(b.eval())
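
For a rough check on the numbers (using the same a0 as above): det([[1, 2], [3, 4]]) = -2, so the squared determinant starts at 4.0 and should shrink as the optimizer runs:

print(losses[0])   # first recorded loss, already somewhat below the initial 4.0
print(losses[-1])  # noticeably smaller after 50 gradient steps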

Then you can visualize your loss over time:

import matplotlib.pyplot as plt

plt.ylabel("Determinant Squared")
plt.xlabel("Iterations")
plt.plot(losses)
plt.show()

You should see something like this: a loss plot showing the squared determinant decreasing over the iterations.

answered by Yaroslav Bulatov