I am new to TensorFlow. I am trying to implement the global_context extraction in this paper https://arxiv.org/abs/1506.04579, which is essentially an average pooling over the whole feature map followed by duplicating the resulting 1x1 feature map back to the original size. An illustration is given below.
Specifically, the expected operation is the following.
Input: a [N, 1, 1, C] tensor, where N is the batch size and C is the number of channels.
Output: a [N, H, W, C] tensor, where H and W are the height and width of the original feature map, and all H * W values of the output are the same as the 1x1 input.
For example,

1 -> [[1, 1, 1],
      [1, 1, 1],
      [1, 1, 1]]
I have no idea how to do this in TensorFlow: tf.image.resize_images requires 3 channels, and tf.pad cannot pad with a constant value other than zero.
tf.tile may help you
import tensorflow as tf

x = tf.constant([[1, 2, 3]])   # shape (1, 3)
y = tf.tile(x, [3, 1])         # shape (3, 3)
y_ = tf.tile(x, [3, 2])        # shape (3, 6)
with tf.Session() as sess:
    a, b, c = sess.run([x, y, y_])
>>>a
array([[1, 2, 3]], dtype=int32)
>>>b
array([[1, 2, 3],
       [1, 2, 3],
       [1, 2, 3]], dtype=int32)
>>>c
array([[1, 2, 3, 1, 2, 3],
       [1, 2, 3, 1, 2, 3],
       [1, 2, 3, 1, 2, 3]], dtype=int32)
The signature is tf.tile(input, multiples, name=None). The multiples argument says how many times to repeat the input along each axis: in y, axis 0 is repeated 3 times; in y_, axis 0 is repeated 3 times and axis 1 is repeated 2 times. If your tensor does not already have the axis you want to repeat along, you may need to use tf.expand_dims first.
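For the case in your question, the pooled tensor is already 4-D ([N, 1, 1, C]), so no tf.expand_dims is needed; you only repeat along the two spatial axes. A minimal sketch, assuming H and W are known when the graph is built (the sizes and tensor names below are just for illustration):

import tensorflow as tf

N, H, W, C = 2, 3, 3, 4                        # illustrative sizes, not from the question
global_ctx = tf.ones([N, 1, 1, C])             # stands in for your pooled [N, 1, 1, C] tensor

# multiples = [1, H, W, 1]: keep the batch and channel axes, repeat the single
# spatial position H times along axis 1 and W times along axis 2
broadcast = tf.tile(global_ctx, [1, H, W, 1])  # shape (N, H, W, C)

with tf.Session() as sess:
    print(sess.run(tf.shape(broadcast)))       # [2 3 3 4]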
Yes, it accepts dynamic shapes:
import numpy as np
import tensorflow as tf

x = tf.placeholder(dtype=tf.float32, shape=[None, 4])
x_shape = tf.shape(x)
y = tf.tile(x, [3 * x_shape[0], 1])
with tf.Session() as sess:
    x_ = np.array([[1, 2, 3, 4]])
    a = sess.run(y, feed_dict={x: x_})
>>>a
array([[ 1.,  2.,  3.,  4.],
       [ 1.,  2.,  3.,  4.],
       [ 1.,  2.,  3.,  4.]], dtype=float32)
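Applied to your setting, where H and W are only known at run time, you can read the spatial sizes from the original feature map with tf.shape and pass them into multiples. This is only a sketch under that assumption; feature_map, the channel count 8, and the averaging step are placeholders for illustration:

import numpy as np
import tensorflow as tf

feature_map = tf.placeholder(tf.float32, [None, None, None, 8])        # [N, H, W, C]
# global average pooling over the spatial axes -> [N, 1, 1, C]
global_ctx = tf.reduce_mean(feature_map, axis=[1, 2], keep_dims=True)

fm_shape = tf.shape(feature_map)
# tile the pooled value back to the original spatial size -> [N, H, W, C]
broadcast = tf.tile(global_ctx, [1, fm_shape[1], fm_shape[2], 1])

with tf.Session() as sess:
    out = sess.run(broadcast,
                   feed_dict={feature_map: np.random.rand(2, 5, 7, 8)})
    print(out.shape)   # (2, 5, 7, 8)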