Training a fully convolutional neural network with inputs of variable size takes an unreasonably long time in Keras/TensorFlow

I am trying to implement an FCNN for image classification that can accept inputs of variable size. The model is built in Keras with a TensorFlow backend.

Consider the following toy example:

from keras.models import Sequential
from keras.layers import Convolution2D, Activation, MaxPooling2D, GlobalAveragePooling2D

model = Sequential()

# width and height are None because we want to process images of variable size 
# nb_channels is either 1 (grayscale) or 3 (rgb)
model.add(Convolution2D(32, 3, 3, input_shape=(nb_channels, None, None), border_mode='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Convolution2D(32, 3, 3, border_mode='same'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Convolution2D(16, 1, 1))
model.add(Activation('relu'))

model.add(Convolution2D(8, 1, 1))
model.add(Activation('relu'))

# reduce the number of dimensions to the number of classes
model.add(Convolution2D(nb_classes, 1, 1))
model.add(Activation('relu'))

# do global pooling to yield one value per class
model.add(GlobalAveragePooling2D())

model.add(Activation('softmax'))

This model runs fine, but I am running into a performance issue. Training on images of variable size takes an unreasonably long time compared to training on fixed-size inputs. If I resize all images to the maximum size in the data set, it still takes far less time to train the model than training on the variable-size input. So is input_shape=(nb_channels, None, None) the right way to specify variable-size input? And is there any way to mitigate this performance problem?

Update

model.summary() for a model with 3 classes and grayscale images:

Layer (type)                     Output Shape          Param #     Connected to                     
====================================================================================================
convolution2d_1 (Convolution2D)  (None, 32, None, None 320         convolution2d_input_1[0][0]      
____________________________________________________________________________________________________
activation_1 (Activation)        (None, 32, None, None 0           convolution2d_1[0][0]            
____________________________________________________________________________________________________
maxpooling2d_1 (MaxPooling2D)    (None, 32, None, None 0           activation_1[0][0]               
____________________________________________________________________________________________________
convolution2d_2 (Convolution2D)  (None, 32, None, None 9248        maxpooling2d_1[0][0]             
____________________________________________________________________________________________________
maxpooling2d_2 (MaxPooling2D)    (None, 32, None, None 0           convolution2d_2[0][0]            
____________________________________________________________________________________________________
convolution2d_3 (Convolution2D)  (None, 16, None, None 528         maxpooling2d_2[0][0]             
____________________________________________________________________________________________________
activation_2 (Activation)        (None, 16, None, None 0           convolution2d_3[0][0]            
____________________________________________________________________________________________________
convolution2d_4 (Convolution2D)  (None, 8, None, None) 136         activation_2[0][0]               
____________________________________________________________________________________________________
activation_3 (Activation)        (None, 8, None, None) 0           convolution2d_4[0][0]            
____________________________________________________________________________________________________
convolution2d_5 (Convolution2D)  (None, 3, None, None) 27          activation_3[0][0]               
____________________________________________________________________________________________________
activation_4 (Activation)        (None, 3, None, None) 0           convolution2d_5[0][0]            
____________________________________________________________________________________________________
globalaveragepooling2d_1 (Global (None, 3)             0           activation_4[0][0]               
____________________________________________________________________________________________________
activation_5 (Activation)        (None, 3)             0           globalaveragepooling2d_1[0][0]   
====================================================================================================
Total params: 10,259
Trainable params: 10,259
Non-trainable params: 0
asked Dec 25 '16 by Sergii Gryshkevych


1 Answer

I think @marcin-możejko may have the right answer in his comment. It may be related to this bug, which was just fixed, and this patch may warn you if things are being compiled too often.

So upgrading to a tf-nightly-gpu-2.0-preview package may fix this. Also, do you get this problem with tf.keras?

"If I resize all images to the maximum size in the data set it still takes far less time to train the model than training on the variable size input"

Note that for basic convolutions with "same" padding, zero padding should have "no" effect on the output, aside from pixel alignment.
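This claim can be checked numerically. Below is a small sketch (not from the answer) using plain NumPy with a hand-rolled "same"-padding cross-correlation in place of a Keras layer: zero-padding the input leaves the output over the original region unchanged.

```python
import numpy as np

def conv2d_same(image, kernel):
    """2-D cross-correlation with 'same' zero padding; assumes a 3x3 kernel."""
    h, w = image.shape
    padded = np.pad(image, 1)  # 1-pixel zero border, as 'same' padding implies
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out

rng = np.random.default_rng(0)
image = rng.standard_normal((5, 7))
kernel = rng.standard_normal((3, 3))

# Zero-pad the image to 8x8, as the bucketing approach below would do.
big = np.zeros((8, 8))
big[:5, :7] = image

out_small = conv2d_same(image, kernel)
out_big = conv2d_same(big, kernel)

print(np.allclose(out_small, out_big[:5, :7]))  # True
```

The padded pixels contribute exactly the same zeros that "same" padding would have supplied implicitly, so the outputs agree over the original region.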

So one approach would be to train on a fixed list of sizes and zero-pad images to those sizes; for example, train on batches of 128x128, 256x256, and 512x512. If you can't fix the dynamic-compilation issue, this would at least compile the graph only 3 times. This would be a bit like a 2-D "bucket-by-sequence-length" approach sometimes seen with sequence models.
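The bucketing idea above can be sketched in a few lines. This is a minimal NumPy illustration, not from the answer: the helper name `pad_to_bucket` and the bucket sizes are hypothetical, chosen to match the example sizes mentioned.

```python
import numpy as np

# Hypothetical fixed list of square bucket sizes, as in the answer's example.
BUCKET_SIZES = [128, 256, 512]

def pad_to_bucket(image):
    """Zero-pad a (height, width[, channels]) array to the smallest bucket that fits it."""
    h, w = image.shape[:2]
    size = next(s for s in BUCKET_SIZES if s >= h and s >= w)
    padded = np.zeros((size, size) + image.shape[2:], dtype=image.dtype)
    padded[:h, :w] = image  # place the image in the top-left corner
    return padded

print(pad_to_bucket(np.ones((100, 200, 3))).shape)  # (256, 256, 3)
```

In training you would then group images by bucket, so every batch has one of only three shapes and the graph is compiled at most three times.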

answered Sep 25 '22 by mdaoust