I have a problem. I have built a ConvNet. The hidden layer before the final output produces an output of shape (None, 64, 32, 32). What I want is to take the element-wise average of those 64 channels. I have tried this:
import numpy as np
import keras
from keras.layers import Input, Convolution2D, Activation
from keras.models import Model

main_inputs = []
outputs = []

def convnet(channels, rows, columns):
    input = Input(shape=(channels, rows, columns))
    main_inputs.append(input)
    conv1 = Convolution2D(kernel_size=(3, 3), filters=64, padding="same")(input)
    activation1 = Activation('relu')(conv1)
    conv2 = Convolution2D(kernel_size=(3, 3), filters=64, padding="same")(activation1)
    activation2 = Activation('relu')(conv2)
    conv3 = Convolution2D(kernel_size=(3, 3), filters=64, padding="same")(activation2)
    activation3 = Activation('relu')(conv3)
    conv4 = Convolution2D(kernel_size=(3, 3), filters=channels, padding="same")(activation3)
    out = keras.layers.Average()(conv4)
    activation4 = Activation('linear')(out)
    outputs.append(activation4)
    print(np.shape(outputs))
    model = Model(inputs=main_inputs, outputs=outputs)
    return model
But when I run it, I get this error:
ValueError: A merge layer should be called on a list of inputs
After that, instead of keras.layers.Average, I tried the backend function from the documentation:
out=K.mean(conv4,axis=1)
But I am getting this error:
'Tensor' object has no attribute '_keras_history'
Any ideas?
Let's say conv4 is a tensor with shape (batch_size, nb_channels, 32, 32). You can average conv4 over the channels' dimension as follows:
out = Lambda(lambda x: K.mean(x, axis=1))(conv4)
The resulting tensor out will have shape (batch_size, 32, 32). You need to wrap all the backend operations within a Lambda layer, so that the resulting tensors are valid Keras tensors (otherwise they lack attributes such as _keras_history).
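If it helps to see the arithmetic, here is a small NumPy sketch of what that Lambda computes per sample (NumPy standing in for the Keras backend; the batch size of 2 is just illustrative):

```python
import numpy as np

# A fake batch of feature maps: (batch_size, nb_channels, 32, 32)
conv4 = np.random.rand(2, 64, 32, 32)

# What Lambda(lambda x: K.mean(x, axis=1)) computes:
out = conv4.mean(axis=1)

print(out.shape)  # (2, 32, 32)

# Each output pixel is the average of the 64 channel values at that position
assert np.allclose(out[0, 0, 0], conv4[0, :, 0, 0].mean())
```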
If you want the shape of out to be (batch_size, 1, 32, 32) instead, you can do:
out = Lambda(lambda x: K.mean(x, axis=1)[:, None, :, :])(conv4)
NOTE: Not tested.
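An equivalent way to keep the channel axis is keepdims=True, which keras.backend.mean also accepts: Lambda(lambda x: K.mean(x, axis=1, keepdims=True))(conv4). A NumPy sketch of the shape it produces:

```python
import numpy as np

conv4 = np.random.rand(2, 64, 32, 32)

# keepdims=True retains the reduced axis with size 1,
# mirroring K.mean(x, axis=1, keepdims=True)
out = conv4.mean(axis=1, keepdims=True)

print(out.shape)  # (2, 1, 32, 32)
```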