 

What is the difference between the UpSampling2D and Conv2DTranspose layers in Keras?

In the code below, UpSampling2D and Conv2DTranspose seem to be used interchangeably. I want to know why that is possible.

# u-net model with up-convolution or up-sampling and weighted binary-crossentropy as loss func

from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D, concatenate, Conv2DTranspose, BatchNormalization, Dropout
from keras.optimizers import Adam
from keras.utils import plot_model
from keras import backend as K


def unet_model(n_classes=5, im_sz=160, n_channels=8, n_filters_start=32, growth_factor=2, upconv=True,
               class_weights=[0.2, 0.3, 0.1, 0.1, 0.3]):
    droprate = 0.25
    n_filters = n_filters_start
    inputs = Input((im_sz, im_sz, n_channels))
    #inputs = BatchNormalization()(inputs)

    # contracting path: two 3x3 convolutions, then 2x2 max pooling at each level,
    # doubling the filter count as the spatial resolution halves
    conv1 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(inputs)
    conv1 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(conv1)
    pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
    #pool1 = Dropout(droprate)(pool1)

    n_filters *= growth_factor
    pool1 = BatchNormalization()(pool1)
    conv2 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(pool1)
    conv2 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(conv2)
    pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
    pool2 = Dropout(droprate)(pool2)

    n_filters *= growth_factor
    pool2 = BatchNormalization()(pool2)
    conv3 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(pool2)
    conv3 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(conv3)
    pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)
    pool3 = Dropout(droprate)(pool3)

    n_filters *= growth_factor
    pool3 = BatchNormalization()(pool3)
    conv4_0 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(pool3)
    conv4_0 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(conv4_0)
    pool4_1 = MaxPooling2D(pool_size=(2, 2))(conv4_0)
    pool4_1 = Dropout(droprate)(pool4_1)

    n_filters *= growth_factor
    pool4_1 = BatchNormalization()(pool4_1)
    conv4_1 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(pool4_1)
    conv4_1 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(conv4_1)
    pool4_2 = MaxPooling2D(pool_size=(2, 2))(conv4_1)
    pool4_2 = Dropout(droprate)(pool4_2)

    # bottleneck
    n_filters *= growth_factor
    conv5 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(pool4_2)
    conv5 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(conv5)

    # expanding path: each step upsamples (Conv2DTranspose if upconv, else
    # UpSampling2D), concatenates the matching encoder feature map, and convolves
    n_filters //= growth_factor
    if upconv:
        up6_1 = concatenate([Conv2DTranspose(n_filters, (2, 2), strides=(2, 2), padding='same')(conv5), conv4_1])
    else:
        up6_1 = concatenate([UpSampling2D(size=(2, 2))(conv5), conv4_1])
    up6_1 = BatchNormalization()(up6_1)
    conv6_1 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(up6_1)
    conv6_1 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(conv6_1)
    conv6_1 = Dropout(droprate)(conv6_1)

    n_filters //= growth_factor
    if upconv:
        up6_2 = concatenate([Conv2DTranspose(n_filters, (2, 2), strides=(2, 2), padding='same')(conv6_1), conv4_0])
    else:
        up6_2 = concatenate([UpSampling2D(size=(2, 2))(conv6_1), conv4_0])
    up6_2 = BatchNormalization()(up6_2)
    conv6_2 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(up6_2)
    conv6_2 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(conv6_2)
    conv6_2 = Dropout(droprate)(conv6_2)

    n_filters //= growth_factor
    if upconv:
        up7 = concatenate([Conv2DTranspose(n_filters, (2, 2), strides=(2, 2), padding='same')(conv6_2), conv3])
    else:
        up7 = concatenate([UpSampling2D(size=(2, 2))(conv6_2), conv3])
    up7 = BatchNormalization()(up7)
    conv7 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(up7)
    conv7 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(conv7)
    conv7 = Dropout(droprate)(conv7)

    n_filters //= growth_factor
    if upconv:
        up8 = concatenate([Conv2DTranspose(n_filters, (2, 2), strides=(2, 2), padding='same')(conv7), conv2])
    else:
        up8 = concatenate([UpSampling2D(size=(2, 2))(conv7), conv2])
    up8 = BatchNormalization()(up8)
    conv8 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(up8)
    conv8 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(conv8)
    conv8 = Dropout(droprate)(conv8)

    n_filters //= growth_factor
    if upconv:
        up9 = concatenate([Conv2DTranspose(n_filters, (2, 2), strides=(2, 2), padding='same')(conv8), conv1])
    else:
        up9 = concatenate([UpSampling2D(size=(2, 2))(conv8), conv1])
    conv9 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(up9)
    conv9 = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(conv9)

    # per-pixel, per-class prediction via a 1x1 convolution
    conv10 = Conv2D(n_classes, (1, 1), activation='sigmoid')(conv9)

    model = Model(inputs=inputs, outputs=conv10)

    # average the per-pixel binary cross-entropy over batch, height and width,
    # then weight each class before summing
    def weighted_binary_crossentropy(y_true, y_pred):
        class_loglosses = K.mean(K.binary_crossentropy(y_true, y_pred), axis=[0, 1, 2])
        return K.sum(class_loglosses * K.constant(class_weights))

    model.compile(optimizer=Adam(), loss=weighted_binary_crossentropy)
    return model
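To make the question concrete: shape-wise the two branches are drop-in replacements for each other. Here is a minimal standalone sketch with hypothetical inputs matching the defaults above (conv5 is 5x5x1024, conv4_1 is 10x10x512), assuming a TensorFlow 2-style Keras where layers can be called directly on NumPy arrays:

import numpy as np
from keras.layers import UpSampling2D, Conv2DTranspose, concatenate

conv5 = np.random.rand(1, 5, 5, 1024).astype('float32')     # bottleneck feature map
conv4_1 = np.random.rand(1, 10, 10, 512).astype('float32')  # skip connection

a = Conv2DTranspose(512, (2, 2), strides=(2, 2), padding='same')(conv5)
b = UpSampling2D(size=(2, 2))(conv5)
print(a.shape)  # (1, 10, 10, 512): learned upsampling, channels projected down to 512
print(b.shape)  # (1, 10, 10, 1024): fixed upsampling, channels unchanged
print(concatenate([a, conv4_1]).shape)  # (1, 10, 10, 1024)
print(concatenate([b, conv4_1]).shape)  # (1, 10, 10, 1536): the next Conv2D absorbs the difference

Both branches double the height and width, which is why either can feed the concatenate; they differ only in the channel count they hand on and in whether the upsampling itself has learnable weights.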
Piyush Chauhan asked Dec 06 '18


People also ask

What is UpSampling2D in Keras?

The UpSampling2D layer simply repeats the rows and columns of its input by size[0] and size[1] respectively.
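A minimal sketch of that repetition (assuming a TensorFlow 2-style Keras, where a layer can be called directly on a NumPy array):

import numpy as np
from keras.layers import UpSampling2D

x = np.arange(4, dtype='float32').reshape(1, 2, 2, 1)  # one 2x2 single-channel image
y = UpSampling2D(size=(2, 2))(x)                       # nearest-neighbour by default
print(np.squeeze(y))
# [[0. 0. 1. 1.]
#  [0. 0. 1. 1.]
#  [2. 2. 3. 3.]
#  [2. 2. 3. 3.]]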

What is Conv2DTranspose?

Conv2DTranspose is a convolution operation whose kernel is learnt (just like a normal Conv2D operation) while training your model. Using Conv2DTranspose will also upsample its input, but the key difference is that the model learns the best upsampling for the job.

What is Upconv?

In this context, "upconv" is short for up-convolution, i.e. learned upsampling with a transposed convolution. The upconv flag in the code above switches the decoder between Conv2DTranspose (True) and plain UpSampling2D (False).


1 Answer

UpSampling2D is just a simple scaling up of the image using nearest-neighbour or bilinear upsampling, so nothing smart. The advantage is that it is cheap.

Conv2DTranspose is a convolution operation whose kernel is learnt (just like a normal Conv2D operation) while training your model. Using Conv2DTranspose will also upsample its input, but the key difference is that the model learns the best upsampling for the job.
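A minimal sketch of that difference (same TensorFlow 2-style Keras assumption as above, calling layers directly on NumPy arrays): UpSampling2D exposes no trainable weights, while Conv2DTranspose builds a kernel and bias that are updated during training.

import numpy as np
from keras.layers import UpSampling2D, Conv2DTranspose

x = np.random.rand(1, 8, 8, 16).astype('float32')

up = UpSampling2D(size=(2, 2))
tconv = Conv2DTranspose(16, (2, 2), strides=(2, 2), padding='same')

print(up(x).shape, len(up.weights))        # (1, 16, 16, 16) 0 -> fixed, nothing to learn
print(tconv(x).shape, len(tconv.weights))  # (1, 16, 16, 16) 2 -> kernel + bias are trained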

EDIT: Link to a nice visualization of transposed convolution: https://towardsdatascience.com/types-of-convolutions-in-deep-learning-717013397f4d

Burton2000 answered Oct 02 '22