
tf.nn.conv2d vs tf.layers.conv2d

Is there any advantage in using tf.nn.* over tf.layers.*?

Most of the examples in the documentation use tf.nn.conv2d, for instance, but it is not clear why they do so.

jul asked Mar 14 '17

3 Answers

As GBY mentioned, they use the same implementation.

There is a slight difference in the parameters.

For tf.nn.conv2d:

filter: A Tensor. Must have the same type as input. A 4-D tensor of shape [filter_height, filter_width, in_channels, out_channels]

For tf.layers.conv2d:

filters: Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
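
To make the difference concrete, here is a minimal sketch (assuming the TF 1.x API): with tf.nn.conv2d you create and manage the filter variable yourself, while tf.layers.conv2d creates its kernel (and bias) from the integer filters argument:

import tensorflow as tf  # TF 1.x assumed

images = tf.placeholder(tf.float32, [None, 28, 28, 1])

# tf.nn.conv2d: the filter is a tensor/variable you create yourself
kernel = tf.get_variable('kernel', shape=[5, 5, 1, 32])  # [filter_height, filter_width, in_channels, out_channels]
out_nn = tf.nn.conv2d(images, kernel, strides=[1, 1, 1, 1], padding='SAME')

# tf.layers.conv2d: only the number of output filters is given; the kernel is created internally
out_layers = tf.layers.conv2d(images, filters=32, kernel_size=5, padding='same')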

I would use tf.nn.conv2d when loading a pretrained model (example code: https://github.com/ry/tensorflow-vgg16), and tf.layers.conv2d for a model trained from scratch.

Mircea answered Nov 12 '22


For convolution, they are the same. More precisely, tf.layers.conv2d (actually _Conv) uses tf.nn.convolution as its backend. You can follow the call chain: tf.layers.conv2d > Conv2D > Conv2D.apply() > _Conv > _Conv.apply() > _Layer.apply() > _Layer.__call__() > _Conv.call() > nn.convolution() ...
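
A quick way to convince yourself (a sketch, assuming TF 1.x) is to run tf.nn.conv2d and the tf.nn.convolution backend with the same kernel and compare the outputs:

import numpy as np
import tensorflow as tf

x = tf.constant(np.random.rand(1, 8, 8, 3), dtype=tf.float32)
k = tf.constant(np.random.rand(3, 3, 3, 4), dtype=tf.float32)

out_nn = tf.nn.conv2d(x, k, strides=[1, 1, 1, 1], padding='SAME')
out_backend = tf.nn.convolution(x, k, padding='SAME')  # what tf.layers.conv2d ends up calling

with tf.Session() as sess:
    a, b = sess.run([out_nn, out_backend])
    print(np.allclose(a, b))  # True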

GBY answered Nov 12 '22


As others mentioned, the parameters are different, especially the "filter(s)" argument. tf.nn.conv2d takes a tensor as the filter, which means you can specify weight decay (or other properties) yourself, as in the following snippet from the CIFAR-10 tutorial code. (Whether you want or need weight decay in a conv layer is another question.)

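# _variable_with_weight_decay is a helper defined in the TensorFlow CIFAR-10 tutorial code:
# it creates the variable and adds an L2 weight-decay term (scaled by wd) to a 'losses' collection.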
kernel = _variable_with_weight_decay('weights',
                                     shape=[5, 5, 3, 64],
                                     stddev=5e-2,
                                     wd=0.0)
conv = tf.nn.conv2d(images, kernel, [1, 1, 1, 1], padding='SAME')

I'm not quite sure how to set weight decay in tf.layers.conv2d, since it only takes an integer for filters. Maybe using kernel_constraint? One possible route is sketched below.
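
A possible route (a sketch only, not verified here; it assumes TF 1.x and that an L2 penalty is what is meant by weight decay) is the kernel_regularizer argument of tf.layers.conv2d. The penalty ends up in the REGULARIZATION_LOSSES collection and has to be added to the loss explicitly:

import tensorflow as tf

images = tf.placeholder(tf.float32, [None, 24, 24, 3])

# L2 penalty on the kernel, roughly playing the role of wd above
conv = tf.layers.conv2d(
    images, filters=64, kernel_size=5, padding='same',
    kernel_regularizer=tf.contrib.layers.l2_regularizer(scale=0.004))

# The regularizer terms are only collected; add them to your data loss yourself:
reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
# total_loss = data_loss + tf.add_n(reg_losses)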

On the other hand, tf.layers.conv2d handles the activation and bias automatically, while you have to write additional code for these yourself if you use tf.nn.conv2d; see the sketch below.
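
For example (again a sketch assuming TF 1.x), with tf.nn.conv2d the bias add and activation are separate ops, whereas tf.layers.conv2d does everything in one call:

import tensorflow as tf

images = tf.placeholder(tf.float32, [None, 24, 24, 3])

# tf.nn.conv2d: convolution, bias add and activation are written out explicitly
kernel = tf.get_variable('weights', shape=[5, 5, 3, 64])
biases = tf.get_variable('biases', shape=[64], initializer=tf.zeros_initializer())
conv = tf.nn.conv2d(images, kernel, [1, 1, 1, 1], padding='SAME')
out_nn = tf.nn.relu(tf.nn.bias_add(conv, biases))

# tf.layers.conv2d: kernel, bias and activation handled in a single call
out_layers = tf.layers.conv2d(images, 64, 5, padding='same', activation=tf.nn.relu)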

EXP0 answered Nov 12 '22