I just want to implement a function that, given a matrix X, returns the covariance matrix of X (X·Xᵀ), which is just a simple matrix multiplication.
In TensorFlow it's easy: tf.matmul(X, tf.transpose(X))
But I didn't expect it to be such a struggle with Keras. The Keras APIs like multiply and dot don't fit my need. I also tried different approaches (a Lambda layer, mixing in TF operations) but still failed, running into lots of errors.
Hope someone can help. Thanks.
Actually, you do have the analogue in Keras: try K.dot(x, K.transpose(x)).
A working example comparing the two platforms follows.
import keras.backend as K
import numpy as np
import tensorflow as tf

def cov_tf(x_val):
    # Covariance via plain TensorFlow ops (TF1-style session evaluation)
    x = tf.constant(x_val)
    cov = tf.matmul(x, tf.transpose(x))
    return cov.eval(session=tf.Session())

def cov_keras(x_val):
    # The same computation via the Keras backend
    x = K.constant(x_val)
    cov = K.dot(x, K.transpose(x))
    return cov.eval(session=tf.Session())

if __name__ == '__main__':
    x = np.random.rand(4, 5)
    delta = np.abs(cov_tf(x) - cov_keras(x)).max()
    print('Maximum absolute difference:', delta)
The maximum absolute difference is printed and gives me something around 1e-7, i.e. just floating-point rounding noise.
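As a quick sanity check (my own addition, not part of the original answer), both functions should also agree with a plain NumPy baseline:

import numpy as np

x = np.random.rand(4, 5)
expected = x @ x.T  # NumPy baseline for X·Xᵀ, shape (4, 4)
# Both cov_tf(x) and cov_keras(x) should match this to within
# floating-point precision (around 1e-7 on the float32 path).
print(np.abs(cov_tf(x) - expected).max())
print(np.abs(cov_keras(x) - expected).max())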
You must do the calculation inside a layer; a Lambda layer works for this:
import keras.backend as K
from keras.layers import Input, Lambda
from keras.models import Model

inp = Input((your_input_shape))
previousLayerOutput = SomeLayerBeforeTheCovariance(...)(inp)

# Note: this computes Xᵀ·X; swap the arguments to K.dot for X·Xᵀ.
covar = Lambda(lambda x: K.dot(K.transpose(x), x),
               output_shape=(your_known_shape_of_x))(previousLayerOutput)

nextOut = SomeOtherLayerAfterThat(...)(covar)
lastOut = AnotherLayer(...)(nextOut)

model = Model(inp, lastOut)
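For concreteness, here is a minimal runnable sketch of the same idea. The input shape and the use of K.batch_dot are my own assumptions for illustration, not part of the answer above: inside a model every tensor carries a batch dimension, so a per-sample X·Xᵀ is most easily done with batch_dot rather than a plain dot.

import numpy as np
import keras.backend as K
from keras.layers import Input, Lambda
from keras.models import Model

# Per-sample Gram matrix X·Xᵀ: each sample of shape (4, 5)
# is mapped to a (4, 4) matrix; batch_dot contracts axis 2 of
# both arguments while preserving the batch dimension.
inp = Input(shape=(4, 5))
covar = Lambda(lambda x: K.batch_dot(x, x, axes=[2, 2]),
               output_shape=(4, 4))(inp)
model = Model(inp, covar)

x = np.random.rand(3, 4, 5)  # batch of 3 matrices
out = model.predict(x)
print(out.shape)                              # (3, 4, 4)
print(np.abs(out[0] - x[0] @ x[0].T).max())   # small (float32 precision)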