import numpy as np
import tensorflow as tf

# input is [batch, width, channels]; filter is [width, in_channels, out_channels]
X_node = tf.placeholder('float', [1, 10, 1])
filter_tf = tf.Variable(tf.truncated_normal([3, 1, 1], stddev=0.1))
Xconv_tf_tensor = tf.nn.conv1d(X_node, filter_tf, 1, 'SAME')

X = np.random.normal(0, 1, [1, 10, 1])
with tf.Session() as sess:
    tf.global_variables_initializer().run()
    feed_dict = {X_node: X}
    filter_np = filter_tf.eval()
    Xconv_tf = sess.run(Xconv_tf_tensor, feed_dict)
    Xconv_np = np.convolve(X[0, :, 0], filter_np[:, 0, 0], 'SAME')
I am trying to inspect the results of a convolution in TensorFlow to check whether it behaves as I intend. When I run the NumPy convolution and compare it to the TensorFlow convolution, the answers are different. The code above is how I ran the test; I was hoping Xconv_tf and Xconv_np would be equal.
My final goal is to run a 2D convolution on a matrix with a one-dimensional filter that performs a 1D convolution on each row, using the same filter for every row. To make that work (it will basically be a loop of 1D convolutions over the rows), I need to figure out why my np.convolve and tf.nn.conv1d give different answers.
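For reference, a minimal sketch of that final goal, assuming a 2D array X2d of shape [rows, cols] and a 1D filter w (both names are just for illustration):

rows, cols = 4, 10
X2d = np.random.normal(0, 1, [rows, cols])
w = np.random.normal(0, 1, 3)
# loop a 1D convolution over the rows, reusing the same filter for each row
Xconv_rows = np.stack([np.convolve(X2d[r], w, 'same') for r in range(rows)])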
The problem you see is that TF does not actually compute a true convolution. If you look at an explanation of what convolution really does (search for visual explanations of convolution), you will see that the second function is flipped.
TF does everything except that flip, i.e. it computes a cross-correlation. So all you need to do is flip the kernel, either in TF or in NumPy. Flipping in the 1D case just means reversing the kernel; in 2D you need to flip both axes (equivalently, rotate the kernel by 180 degrees).
import tensorflow as tf
import numpy as np

I = [1, 0, 2, 3, 0, 1, 1]
K = [2, 1, 3]
i = tf.constant(I, dtype=tf.float32, name='i')
k = tf.constant(K, dtype=tf.float32, name='k')
# conv1d expects data of shape [batch, width, channels] and a kernel of shape [width, in_channels, out_channels]
data = tf.reshape(i, [1, int(i.shape[0]), 1], name='data')
kernel = tf.reshape(k, [int(k.shape[0]), 1, 1], name='kernel')
res = tf.squeeze(tf.nn.conv1d(data, kernel, 1, 'VALID'))
with tf.Session() as sess:
    print(sess.run(res))                     # TF "convolution" (actually cross-correlation)
    print(np.convolve(I, K[::-1], 'VALID'))  # NumPy convolution with the kernel reversed gives the same result
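The same fix applies directly to the code from the question: flip the NumPy copy of the filter before calling np.convolve. A minimal sketch, reusing the names X, filter_np, and Xconv_tf from the question's session:

# inside the question's session, after Xconv_tf has been computed
Xconv_np = np.convolve(X[0, :, 0], filter_np[::-1, 0, 0], 'same')
print(np.allclose(Xconv_np, Xconv_tf[0, :, 0], atol=1e-5))  # True up to float precision

For the 2D case, flipping both axes of a kernel k is np.flip(np.flip(k, 0), 1), or equivalently np.rot90(k, 2).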