The max pooling function in TensorFlow is
tf.nn.max_pool(value, ksize, strides, padding, name=None)
Returns: A Tensor with type tf.float32. The max pooled output tensor.
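For example, a standard 2x2 max pooling with stride 2 looks like this:

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 28, 28, 3])  # [batch, height, width, channels]
pooled = tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
# pooled has shape [batch, 14, 14, 3]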
I would like to have an extended version of max_pool, like
tf.nn.top_k_pool(value, ksize, strides, padding, k=1, name=None)
Performs top k pooling on the input.
Args:
value: A 4-D Tensor with shape [batch, height, width, channels] and type tf.float32.
ksize: A list of ints that has length >= 4. The size of the window for each dimension of the input tensor.
strides: A list of ints that has length >= 4. The stride of the sliding window for each dimension of the input tensor.
padding: A string, either 'VALID' or 'SAME'. The padding algorithm.
k: A 0-D int32 Tensor. The number of top elements to keep in each pool.
name: Optional name for the operation.
Returns: A Tensor with type tf.float32. The top k pooled output tensor, with an additional dimension holding the top k values.
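For illustration, the intended call for such an op would look like this (tf.nn.top_k_pool does not exist; the name and interface are only what I am proposing):

# Hypothetical: tf.nn.top_k_pool is the op I would like to have, not an existing function.
x = tf.placeholder(tf.float32, [None, 28, 28, 16])
top3 = tf.nn.top_k_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', k=3)
# Desired result: the top 3 values of each 2x2 window, kept in an additional dimension,
# e.g. shape [batch, 14, 14, 16, 3].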
I know that I can extend the TensorFlow operations by following https://www.tensorflow.org/versions/r0.7/how_tos/adding_an_op/index.html
I would like to know if there is an easier way to achieve that.
Here is a function that uses top_k
to keep the top k activations across the channels. You can modify it to fit your purpose:
import tensorflow as tf

# Note: tf.unpack/tf.pack are the pre-1.0 names of tf.unstack/tf.stack.
def make_sparse_layer(inp_x, k, batch_size=None):
    # Keep only the k largest activations along the channel (last) axis
    # and zero out the rest, preserving the input shape.
    in_shape = tf.shape(inp_x)
    d = inp_x.get_shape().as_list()[-1]       # number of channels
    matrix_in = tf.reshape(inp_x, [-1, d])    # flatten to a [rows, channels] matrix
    values, indices = tf.nn.top_k(matrix_in, k=k, sorted=False)
    out = []
    # num must match the number of rows when it is not statically known.
    vals = tf.unpack(values, axis=0, num=batch_size)
    inds = tf.unpack(indices, axis=0, num=batch_size)
    for i, idx in enumerate(inds):
        # Scatter the k kept values back into a dense row of length d.
        out.append(tf.sparse_tensor_to_dense(
            tf.SparseTensor(tf.reshape(tf.cast(idx, tf.int64), [-1, 1]), vals[i], [d]),
            validate_indices=False))
    shaped_out = tf.reshape(tf.pack(out), in_shape)
    return shaped_out
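A minimal usage sketch, assuming a 2-D [batch, channels] activation tensor (the shapes and k below are only illustrative):

x = tf.placeholder(tf.float32, [None, 128])          # [batch, channels]
sparse_x = make_sparse_layer(x, k=5, batch_size=32)  # keep the 5 largest activations per example
# sparse_x has the same shape as x; all but the top 5 entries in each row are zero.
# Each batch fed at run time must contain exactly batch_size examples.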
Your best bet is probably the TopK op: https://www.tensorflow.org/versions/r0.7/api_docs/python/nn.html#top_k
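For example, tf.nn.top_k operates along the last dimension:

x = tf.constant([[1.0, 4.0, 2.0, 3.0],
                 [7.0, 5.0, 8.0, 6.0]])
values, indices = tf.nn.top_k(x, k=2)   # sorted=True by default
# values  -> [[4.0, 3.0], [8.0, 7.0]]
# indices -> [[1, 3], [2, 0]]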
Use tf.reshape(), tf.matrix_transpose(), tf.nn.top_k(sorted=False), and the 'data_format' argument of tf.nn.conv2d(); see http://www.infocool.net/kb/OtherCloud/201703/318346.html for details.
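A minimal sketch of that idea (my own reading of the suggestion, not code from the linked page): to keep the top k values along, say, the width dimension, swap it into the last position with tf.matrix_transpose, apply tf.nn.top_k, and swap back:

x = tf.placeholder(tf.float32, [None, 28, 28, 64])  # [batch, height, width, channels] (NHWC)
xt = tf.matrix_transpose(x)                         # swap last two axes -> [batch, height, channels, width]
topk, _ = tf.nn.top_k(xt, k=3, sorted=False)        # top 3 along width -> [batch, height, channels, 3]
out = tf.matrix_transpose(topk)                     # -> [batch, height, 3, channels]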