I need to compute the squared Frobenius norm ‖w‖²_F using the TensorFlow framework, where w is a matrix with 50 rows and 100 columns.
I tried the following, but I don't understand what to pass for the axis argument:
tf.pow(
    tf.norm(x, ord='fro', axis=?), 2
)
According to the TensorFlow docs, I have to pass a 2-tuple (or a 2-list) because it determines the axes in the tensor over which to compute a matrix norm, but I simply need a plain Frobenius norm. In SciPy, for example, I can compute it without specifying any axis.
So, what should I use as axis to emulate the SciPy function?
The Frobenius norm is a sum over an n×m matrix, but tf.norm also allows processing several vectors and matrices in a batch.
To better understand, imagine you have a rank 3 tensor:
t = [[[2], [4], [6]], [[8], [10], [12]], [[14], [16], [18]]]
It can be seen as several matrices aligned along one direction, but the function can't figure out by itself which one. It could be either a batch of the following matrices:
[2, 4, 6], [8, 10, 12], [14, 16, 18]
or
[2, 8, 14], [4, 10, 16], [6, 12, 18]
So basically axis tells which directions you want to consider when doing the summation in the Frobenius norm. In your case, w is a rank-2 tensor, so either [0, 1] or [-2, -1] would do the trick ([1, 2] would only be valid for a rank-3 tensor like the example above).
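To see the two groupings concretely, here is a small sketch using NumPy's np.linalg.norm, whose 2-tuple axis argument follows the same convention as tf.norm (the variable names are illustrative, not from the original post):

```python
import numpy as np

# The rank-3 tensor from the example, shape (3, 3, 1).
t = np.array([[[2], [4], [6]],
              [[8], [10], [12]],
              [[14], [16], [18]]], dtype=float)

# Treat axes (1, 2) as the matrix axes: axis 0 indexes the batch,
# so we get one Frobenius norm per 3x1 matrix in the batch.
batch_norms = np.linalg.norm(t, ord='fro', axis=(1, 2))

# Negative indices name the same axes counted from the end,
# so this call produces identical results.
same_norms = np.linalg.norm(t, ord='fro', axis=(-2, -1))
```

For this tensor, batch_norms contains the norms of [[2], [4], [6]], [[8], [10], [12]], and [[14], [16], [18]], i.e. √56, √308, and √776.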
Independent of the number of dimensions of the tensor,
tf.sqrt(tf.reduce_sum(tf.square(w)))
should do the trick.
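The equivalence is easy to check numerically. A minimal sketch, using NumPy as a stand-in for the TensorFlow ops (np.sqrt, np.sum, and np.square behave elementwise the same way as their tf counterparts; the random w here is just a placeholder with the shape from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal((50, 100))  # same shape as the w in the question

# Manual Frobenius norm: square root of the sum of squared entries.
manual = np.sqrt(np.sum(np.square(w)))

# Reference: for a 2-D array, no axis argument is needed.
reference = np.linalg.norm(w, ord='fro')
```

Both values agree up to floating-point precision, and the manual form never needs an axis argument.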
Negative indices are supported. Example: If you are passing a tensor that can be either a matrix or a batch of matrices at runtime, pass axis=[-2,-1] instead of axis=None to make sure that matrix norms are computed.
I just tested and [-2,-1] works.