Is this approach correct?
My intention was to add a dropout layer after the concatenation, but to do so I needed to reshape the concat layer's output to (samples, timesteps, channels), hence expanding the dimensions from (None, 4096) to (None, 1, 4096),
and then undoing the operation after the dropout.
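For reference, the workflow described above might look like the following sketch. The input sizes (two 2048-dim branches concatenated into 4096 features) are assumptions for illustration; SpatialDropout1D expects a 3D (samples, timesteps, channels) tensor, hence the Reshape in and out:

```python
import tensorflow as tf

# Hypothetical sketch of the expand -> dropout -> squeeze approach,
# assuming two 2048-dim feature branches concatenated to (None, 4096).
in_1 = tf.keras.layers.Input(shape=(2048,))
in_2 = tf.keras.layers.Input(shape=(2048,))
x = tf.keras.layers.concatenate([in_1, in_2])   # (None, 4096)
x = tf.keras.layers.Reshape((1, 4096))(x)       # (None, 1, 4096) for SpatialDropout1D
x = tf.keras.layers.SpatialDropout1D(0.2)(x)    # drops entire feature channels
x = tf.keras.layers.Reshape((4096,))(x)         # back to (None, 4096)
model = tf.keras.Model(inputs=(in_1, in_2), outputs=x)
```

This works, but the manual Reshape pair is exactly the bookkeeping that keepdims=True (below) avoids.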
Instead of explicitly adding a new dimension, in TensorFlow 2.7.0 you can simply pass keepdims=True to the GlobalAveragePooling2D layer, which keeps the pooled spatial dimensions as size-1 axes.
Example:
import tensorflow as tf

def TestModel():
    # specify the input shape
    in_1 = tf.keras.layers.Input(shape=(256, 256, 3))
    in_2 = tf.keras.layers.Input(shape=(256, 256, 3))

    x1 = tf.keras.layers.Conv2D(64, (3, 3))(in_1)
    x1 = tf.keras.layers.LeakyReLU()(x1)
    x1 = tf.keras.layers.GlobalAveragePooling2D(keepdims=True)(x1)  # (None, 1, 1, 64)

    x2 = tf.keras.layers.Conv2D(64, (3, 3))(in_2)
    x2 = tf.keras.layers.LeakyReLU()(x2)
    x2 = tf.keras.layers.GlobalAveragePooling2D(keepdims=True)(x2)  # (None, 1, 1, 64)

    x = tf.keras.layers.concatenate([x1, x2])     # (None, 1, 1, 128)
    x = tf.keras.layers.SpatialDropout2D(0.2)(x)  # needs the 4D shape kept above
    x = tf.keras.layers.Dense(1000)(x)

    # create the model
    model = tf.keras.Model(inputs=(in_1, in_2), outputs=x)
    return model
# Test code
model = TestModel()
model.summary()
tf.keras.utils.plot_model(model, show_shapes=True, expand_nested=False, show_dtype=True, to_file="model.png")
If you need to squeeze out the singleton dimensions at the end, you can still do so.
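For example, a Reshape layer (or tf.squeeze inside a Lambda layer) removes the size-1 axes after the dense layer. The (1, 1, 1000) input shape below is just an illustrative stand-in for the model's output above:

```python
import tensorflow as tf

# Drop the singleton spatial axes: (None, 1, 1, 1000) -> (None, 1000)
x = tf.keras.layers.Input(shape=(1, 1, 1000))
y = tf.keras.layers.Reshape((1000,))(x)
model = tf.keras.Model(inputs=x, outputs=y)
```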